Task request timed out after 60 seconds

Describe the problem/error/question

I can't execute any Code node! (Even a simple display message.)

Task request timed out after 60 seconds

Your Code node task was not matched to a runner within the timeout period. This indicates that the task runner is currently down, or not ready, or at capacity, so it cannot service your task.

If you are repeatedly executing Code nodes with long-running tasks across your instance, please space them apart to give the runner time to catch up. If this does not describe your use case, please open a GitHub issue or reach out to support.

If needed, you can increase the timeout using the N8N_RUNNERS_TASK_REQUEST_TIMEOUT environment variable.
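For example, that variable can be set in a docker compose file like this (the value 120 is an assumption to tune for your setup; the default corresponds to the 60 seconds in the error message):

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      # Assumption: give slow runners more time to pick up tasks
      # (the default matches the 60-second timeout in the error)
      - N8N_RUNNERS_TASK_REQUEST_TIMEOUT=120
```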

What is the error message (if any)?

n8n version

2.1.5 (Self Hosted)

Stack trace

Error: Task request timed out after 60 seconds at LocalTaskRequester.requestExpired (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:304:17) at LocalTaskRequester.onMessage (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:272:10) at TaskBroker.handleRequestTimeout (/usr/local/lib/

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.1.5 (Self Hosted)
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 24.04

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=n8n.example.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.example.com
      - N8N_EDITOR_BASE_URL=https://n8n.example.com
      - GENERIC_TIMEZONE=UTC
      - N8N_USER_FOLDER=/home/node/.n8n
    volumes:
      - ./n8n-data:/home/node/.n8n
      - ./local-files:/files
    networks:
      - n8n-network

networks:
  n8n-network:
    driver: bridge

Hey @Nac!

In v2 there were "Breaking Changes".

This means that JS code can run using an internal or external task runner.

Python uses only external task runners.

Please set these env vars as well to use plain JS:

  • N8N_RUNNERS_ENABLED=true

  • N8N_RUNNERS_MODE=internal
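Put together, a minimal compose sketch with the internal runner enabled could look like this (image tag and port mapping are assumptions):

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    environment:
      # Internal mode: JS Code nodes run in a runner inside the main container
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=internal
```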

Cheers!

Thanks!

I want to use Python, not only JS. If I understand you correctly, I should use a separate container for the runners.

How can I adapt this docker compose file to use the new n8n v2.x?

services:
  n8n:
    image: n8nio/n8n:1.111.0
    container_name: n8n-main
    environment:
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here
      - N8N_NATIVE_PYTHON_RUNNER=true
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    # etc.

  task-runners:
    image: n8nio/runners:1.111.0
    container_name: n8n-runners
    environment:
      # etc.
    depends_on:
      - n8n

volumes:
  n8n_data:

Hi @Nac,

Try the docker compose below, which sets up an n8n instance running in queue mode with runners enabled as external, supporting both JS and Python.

services:
  n8n-db:
    image: postgres:16.1
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_PASSWORD=n8n
      - POSTGRES_USER=n8n
    volumes:
      - postgres-data:/var/lib/postgresql/data

  n8n-redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis-data:/data

  n8n-main:
    image: n8nio/n8n
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    volumes:
      - n8n-data:/home/node/.n8n
    ports:
      - 4567:5678
    environment:
      - WEBHOOK_URL=http://localhost:5678
      - NODE_ENV=production
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_SECURE_COOKIE=true
      - EXECUTIONS_MODE=queue
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      - N8N_SKIP_AUTH_ON_OAUTH_CALLBACK=false
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      # Binary data configuration (filesystem mode for regular mode)
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0

  n8n-worker:
    image: n8nio/n8n
    restart: always
    command: worker --concurrency=6
    depends_on:
      - n8n-db
      - n8n-redis
      - n8n-worker-task-runner
    volumes:
      - n8n-data:/home/node/.n8n
    environment:
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=http://localhost:5678
      - N8N_HOST=localhost
      - N8N_SKIP_DB_INIT=true
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      - N8N_PROCESS=worker
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0

  # Task runner for n8n-worker with Python support for v2
  n8n-worker-task-runner:
    image: n8nio/runners
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    volumes:
      # Shared volume for file access if needed
      - n8n-data:/home/node/.n8n

volumes:
  postgres-data:
  redis-data:
  n8n-data:


You can then scale your workers and runners like this:

docker compose up -d --scale n8n-worker=2 --scale n8n-worker-task-runner=2

Also make sure to read this VERY IMPORTANT notice about the changes to Python in n8n v2. If you have existing Python Code nodes, you will need to make some changes.


Hi,

Thanks for replying.

I used exactly the same docker compose file; I just replaced localhost with my domain name, but I still get the same error! The issue is not resolved!

:~/n8n$ docker logs -f n8n

ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See the linked page for more information.
at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:187:13)
at Object.wrappedValidations. [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:398:22)
at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:671:20)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:724:32
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:704:5 {
code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
}
Slow database query
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
timeout of 3000ms exceeded
Error while fetching community nodes: timeout of 3000ms exceeded
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
Task request timed out
Error: Task request timed out
at ErrorReporter.wrap (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/errors/error-reporter.ts:242:37)
at ErrorReporter.error (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/errors/error-reporter.ts:228:25)
at LocalTaskRequester.requestExpired (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:309:22)
at LocalTaskRequester.onMessage (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:272:10)
at TaskBroker.handleRequestTimeout (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:115:50)
at Timeout. (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:102:9)
at listOnTimeout (node:internal/timers:588:17)
at processTimers (node:internal/timers:523:7)

Task request timed out after 60 seconds
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403
Task runner connection attempt failed with status code 403

Can you first run my docker compose by itself, without changing anything in it, and confirm whether it works with the localhost settings as is? Stop any other n8n instances you have running to isolate the test. The 403 suggests that something else going on in the back is causing the problem. Let's first isolate the environment we're testing.

I'm using this tutorial https://docs.vultr.com/how-to-install-n8n-on-ubuntu-2404 for the installation. So you want me to keep the same config and just use your docker compose file instead of the one from this tutorial?

Yes. The more you mix configs on your side, the more moving parts there are, and the less likely we are able to help you debug the problem. Remember, we don't have a view into what you see, so the above tutorial with nginx is telling me there is something else which could be causing this to break. I wanted you to run my compose file as is. Make sure to expose your port for now to test. Ignore the DNS domain setup and anything additional for now; we can deal with that after the fact. I want us to first confirm whether you can host an instance with the Python runner working. That is the priority right now.

I only have SSH access! I can't browse localhost to test the Code node!


Ignore the localhost for now. The way to access your n8n instance is by using the public IP your host provided you and the port in a browser.

Same problem!

Editor is now accessible via:

https://n8nv2.test.com

Owner was set up successfully
ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See the linked page for more information.
at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:187:13)
at Object.wrappedValidations. [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:398:22)
at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:671:20)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:724:32
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:704:5 {
code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
}
(node:8) [DEP0060] DeprecationWarning: The util._extend API is deprecated. Please use Object.assign() instead.
(Use node --trace-deprecation ... to show where the warning was created)
User survey updated successfully
User attempted to access a workflow without permissions
User attempted to access a workflow without permissions
timeout of 3000ms exceeded
Error while fetching community nodes: timeout of 3000ms exceeded
Enqueued execution 1 (job 1)
Blocked GET /robots.txt for "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36; compatible; OAI-SearchBot/1.3; robots.txt; +https://openai.com/searchbot"
Slow database query
Enqueued execution 2 (job 2)

Please share your docker compose file and edit out any sensitive information such as secrets.

The mention of "The 'X-Forwarded-For' header" tells me you're still going through nginx somewhere. Are you running this locally on your PC and not the server?

No, I'm on an Ubuntu server, not local, and I gave the server a public IP.

And this is my nginx config file:

# ===============================
# HTTP → HTTPS redirection
# ===============================
server {
    listen 80;
    server_name n8nv2.****.com;

    return 301 https://$host$request_uri;
}

# ===============================
# HTTPS (SSL) → n8n
# ===============================
server {
    listen 443 ssl http2;
    server_name n8nv2.****.com;

    # Let's Encrypt certificates
    ssl_certificate     /etc/letsencrypt/live/n8nv2.****.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8nv2.****.com/privkey.pem;

    # Minimal SSL hardening (stable)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:5678;

        # Proxy headers (required for n8n)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # WebSocket + long-running workflows
        proxy_buffering off;
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
    }
}

services:
  n8n-db:
    image: postgres:16.1
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_PASSWORD=n8n
      - POSTGRES_USER=n8n
    volumes:
      - postgres-data:/var/lib/postgresql/data

  n8n-redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis-data:/data

  n8n-main:
    image: n8nio/n8n
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    volumes:
      - n8n-data:/home/node/.n8n
    ports:
      - 5678:5678
    environment:
      - WEBHOOK_URL=https://n8nv2.****.com
      - N8N_HOST=n8nv2.****.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_SECURE_COOKIE=true
      - EXECUTIONS_MODE=queue
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=*****
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      - N8N_SKIP_AUTH_ON_OAUTH_CALLBACK=false
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      # Binary data configuration (filesystem mode for regular mode)
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0

  n8n-worker:
    image: n8nio/n8n
    restart: always
    command: worker --concurrency=6
    depends_on:
      - n8n-db
      - n8n-redis
      - n8n-worker-task-runner
    volumes:
      - n8n-data:/home/node/.n8n
    environment:
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=https://n8nv2.****.com
      - N8N_HOST=n8nv2.****.com
      - N8N_SKIP_DB_INIT=true
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=*****
      - N8N_PROCESS=worker
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0

  # Task runner for n8n-worker with Python support for v2
  n8n-worker-task-runner:
    image: n8nio/runners
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=*****
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    volumes:
      # Shared volume for file access if needed
      - n8n-data:/home/node/.n8n

volumes:
  postgres-data:
  redis-data:
  n8n-data:
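As a side note on the 'X-Forwarded-For' / trust proxy ValidationError in the logs: it typically appears when n8n runs behind a reverse proxy such as this nginx setup, and n8n exposes an N8N_PROXY_HOPS variable for that case. A hedged fragment, assuming exactly one proxy hop in front of n8n:

```yaml
services:
  n8n-main:
    environment:
      # Assumption: exactly one reverse proxy (nginx) sits in front of n8n
      - N8N_PROXY_HOPS=1
```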

Same issue for me - just running locally.
Trying with a minimal reproducible example: single n8n container, single runner container, no Redis, no Postgres.

The setup was working with Pyodide and native Python on v1.

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n-main
    volumes:
      - ./n8n_data:/home/node/.n8n
    ports:
      - "5678:5678"
    environment:
        # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here

  task-runners:
    image: n8nio/runners:latest
    container_name: n8n-runners
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    depends_on:
      - n8n

Hi @sac.mm.xlv,

If you're not using queue mode (which requires worker instances), then make sure you update your runner's broker env var to point to 'n8n-main' and not the worker.

See below for the correct config:

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n-main
    volumes:
      - ./n8n_data:/home/node/.n8n
    ports:
      - "5678:5678"
    environment:
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here

  task-runners:
    image: n8nio/runners:latest
    container_name: n8n-runners
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-main:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    depends_on:
      - n8n