Task request timed out after 60 seconds — External runner + Queue mode on Kubernetes (v2.9.4)
n8n version: 2.9.4
Deployment: Self-hosted on Kubernetes
Mode: Queue mode with Redis
Database: PostgreSQL
Runner image: Custom build on top of n8nio/runners:2.9.4
The error
Every Code node execution — both JavaScript and Python — fails with:
Task request timed out after 60 seconds. Your Code node task was not matched to a runner within the timeout period. This indicates that the task runner is currently down, or not ready, or at capacity.
This happens in both production (queue mode via workers) and manual test executions.
Our setup
We have 4 services running:
- n8n-main — main instance
- n8n-worker — queue mode worker
- n8n-main-task-runner — n8nio/runners sidecar for main
- n8n-worker-runner — n8nio/runners sidecar for worker
n8n-main and n8n-worker both have:
N8N_RUNNERS_ENABLED=true
N8N_RUNNERS_MODE=external
N8N_RUNNERS_AUTH_TOKEN=<shared-secret>
N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
N8N_NATIVE_PYTHON_RUNNER=true
n8n-worker also has:
N8N_PROCESS=worker
(We are not sure if this is the correct way to start a worker — is command: ["n8n", "worker"] in the K8s spec required instead?)
n8n-worker-runner has:
N8N_RUNNERS_AUTH_TOKEN=<shared-secret>
N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker.namespace.svc.cluster.local:5679
N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
Our custom n8n-task-runners.json (mounted at /etc/n8n-task-runners.json):
{
  "task-runners": [
    {
      "runner-type": "javascript",
      "workdir": "/opt/runners/task-runner-javascript",
      "health-check-server-port": "5681",
      "env-overrides": {
        "NODE_FUNCTION_ALLOW_BUILTIN": "crypto",
        "NODE_FUNCTION_ALLOW_EXTERNAL": "moment,uuid"
      }
    },
    {
      "runner-type": "python",
      "workdir": "/opt/runners/task-runner-python",
      "health-check-server-port": "5682",
      "env-overrides": {
        "PYTHONPATH": "/opt/runners/task-runner-python",
        "N8N_RUNNERS_STDLIB_ALLOW": "json",
        "N8N_RUNNERS_EXTERNAL_ALLOW": "numpy,pandas"
      }
    }
  ]
}
Our custom Dockerfile for the runner image:
FROM n8nio/runners:2.9.4
USER root
# Install extra npm packages into the JavaScript runner's workdir
RUN cd /opt/runners/task-runner-javascript && pnpm add moment uuid
# Install extra Python packages into the Python runner's workdir
RUN cd /opt/runners/task-runner-python && uv pip install numpy pandas
# Replace the default launcher config with our custom one
COPY n8n-task-runners.json /etc/n8n-task-runners.json
USER runner
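For anyone reproducing this, the image contents can be spot-checked with something like the following (the tag is our internal one; this assumes sh is available in the image and that pnpm creates top-level node_modules entries for direct dependencies, which it does):

docker run --rm --entrypoint sh our-registry/n8n-runners:2.9.4-custom -c '
  ls /opt/runners/task-runner-javascript/node_modules | grep -E "^(moment|uuid)$"
  ls /opt/runners/task-runner-python
  head -5 /etc/n8n-task-runners.json
'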
What we’ve already tried
- Verified auth tokens match across all 4 services
- Verified encryption keys match between main and worker
- Confirmed N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0 is set on both n8n instances
- Confirmed the worker-runner’s N8N_RUNNERS_TASK_BROKER_URI points to the worker, not main
- Checked network connectivity between the runner sidecar and the n8n broker on port 5679 (roughly the command shown below)
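The connectivity check was a plain TCP probe, along these lines (the deployment name is ours; this assumes nc is available in the runner image, and any other TCP check would do):

kubectl exec deploy/n8n-worker-runner -- nc -zv n8n-worker.namespace.svc.cluster.local 5679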
Specific questions
Q1. Our n8n-task-runners.json does not have command or args fields. We believe this may be the primary cause — the launcher connects to the broker but has no process to spawn. Can anyone confirm what the correct command and args values are for n8nio/runners:2.9.4 for both runners?
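For context, this is our best guess at what the missing fields should look like. The command and args values below are assumptions on our part, not taken from the image, which is exactly what we would like confirmed:

{
  "runner-type": "javascript",
  "workdir": "/opt/runners/task-runner-javascript",
  "command": "/usr/local/bin/node",
  "args": ["/opt/runners/task-runner-javascript/dist/start.js"]
}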
Q2. Is N8N_PROCESS=worker a valid env var for starting a worker in v2.x? The docs only show command: n8n worker in Docker Compose examples. Does this translate to command: ["n8n", "worker"] in a Kubernetes deployment spec?
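Concretely, this is the change we are considering for the worker Deployment (untested; the image tag is assumed to match our n8n version):

containers:
  - name: n8n-worker
    image: n8nio/n8n:2.9.4
    command: ["n8n", "worker"]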
Q3. We noticed GitHub issue #25468 about the -I Python flag causing ModuleNotFoundError: No module named 'src'. Does n8nio/runners:2.9.4 still have this issue, or has it been patched?
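Our understanding of that failure mode, for context (illustrative only; the entrypoint path is a guess): python -I runs in isolated mode, which drops both PYTHONPATH and the script's directory from sys.path, so a src package sitting next to the runner's entrypoint stops resolving.

python -I /opt/runners/task-runner-python/main.py
# -> ModuleNotFoundError: No module named 'src'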
Q4. Are NODE_FUNCTION_ALLOW_BUILTIN, NODE_FUNCTION_ALLOW_EXTERNAL, and N8N_RUNNERS_STDLIB_ALLOW effective as container-level environment variables on the n8nio/runners container? Or must they always go inside env-overrides in the JSON config? We currently have them in both places and are unsure which takes precedence.
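To make the duplication concrete, this is how we currently have it (simplified):

# 1) Container-level env on the runner Deployment:
env:
  - name: NODE_FUNCTION_ALLOW_BUILTIN
    value: "crypto"
  - name: NODE_FUNCTION_ALLOW_EXTERNAL
    value: "moment,uuid"
  - name: N8N_RUNNERS_STDLIB_ALLOW
    value: "json"
# 2) The same keys again under env-overrides in /etc/n8n-task-runners.json (shown above)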
Q5. OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true is set on main. Does this mean the n8n-main-task-runner sidecar is completely unnecessary and can be removed?
Happy to share full container logs from any of the 4 services. Any help greatly appreciated — this has been blocking production for some time.
Thanks!