Task runners in GCP

Describe the problem/error/question

The entire infrastructure resides in my private subnet (there is peering between my VPC and Google’s servicenetworking VPC). The only access point from the internet is a public application load balancer deployed in a proxy-only subnet.
So I have a Cloud Run service for the main instance, a Cloud Run service for the worker, a Postgres database on Cloud SQL, and a Redis instance on Memorystore for Redis.

After a bit of tinkering, everything seems to be working. I also need to introduce the task runner part. Nothing could be easier, I thought. It will be one new Cloud Run service and a few more environment variables on the main service.

Unfortunately, as soon as I introduce the new environment variables, the main instance starts responding only with 404s. Can you tell me what I’m doing wrong?


These are the environment variables on the main service


These are the environment variables on the task runner service (N8N_RUNNERS_TASK_BROKER_URI value is the public record of my load balancer)

The error I get in the logs after I set the environment variables.


As you can see, despite the error I receive, the environment variable is set. What am I doing wrong?

Information on your n8n setup

  • n8n version: 2.4.6
  • Database: Postgres (Cloud SQL)
  • Redis: 7.2 (Memorystore for Redis)
  • Running n8n via GCP Cloud Run

Hi @TheBlackOrion

Welcome to the n8n community! :tada:

You’re not doing anything wrong with the infrastructure. The problem is purely a missing mandatory configuration for Task Runners in external mode.

When you set:

N8N_RUNNERS_ENABLED=true
N8N_RUNNERS_MODE=external

n8n requires that:

N8N_RUNNERS_AUTH_TOKEN
  • Be defined in the main instance
  • Be defined in all runners
  • Have exactly the same value (byte by byte)

If this isn’t the case, n8n fails during bootstrap, starts the HTTP server without registering its routes, and the externally visible result is a 404 on everything.

Resolution documented
  1. Generate a strong token:

     openssl rand -hex 32

  2. Use the same value in both services:

     # main instance
     N8N_RUNNERS_AUTH_TOKEN=<TOKEN>

     # runner
     N8N_RUNNERS_AUTH_TOKEN=<TOKEN>

  3. Do a complete redeploy of both main and runner.
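On Cloud Run specifically, one way to guarantee both services receive the byte-identical value is to store the token in Secret Manager once and mount it on both services with `--set-secrets`. A rough sketch (the service, secret, and region names below are placeholders, not from your setup):

```shell
# Create the token once and store it in Secret Manager
openssl rand -hex 32 | tr -d '\n' | \
  gcloud secrets create n8n-runners-auth-token --data-file=-

# Point BOTH Cloud Run services at the same secret version
# (n8n-main / n8n-runner / europe-west1 are placeholder names)
gcloud run services update n8n-main \
  --region europe-west1 \
  --set-secrets "N8N_RUNNERS_AUTH_TOKEN=n8n-runners-auth-token:latest"

gcloud run services update n8n-runner \
  --region europe-west1 \
  --set-secrets "N8N_RUNNERS_AUTH_TOKEN=n8n-runners-auth-token:latest"
```

Mounting the same secret version on both services removes any chance of the two values drifting apart across redeploys.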

After this, n8n will initialize correctly and routes will stop returning 404.

I’m pretty sure I had already run all the necessary tests, but setting N8N_RUNNERS_ENABLED=true and N8N_RUNNERS_MODE=external seems to have solved something. N8N_RUNNERS_AUTH_TOKEN is retrieved from a secret for both Cloud Run services, both the main and the task runner, so I’m sure they have the same value.
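For reference, this is roughly how I verified that both services map the variable to the same secret (service and region names below are placeholders for my actual ones):

```shell
# Show which secret each service maps N8N_RUNNERS_AUTH_TOKEN to
# (n8n-main / n8n-runner / europe-west1 are placeholder names)
for svc in n8n-main n8n-runner; do
  gcloud run services describe "$svc" \
    --region europe-west1 \
    --format "yaml(spec.template.spec.containers[0].env)"
done
```

Both outputs show the same secret name and version, so the values must match.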

I no longer get a 404 error, but I think there is still a timeout or communication problem.


After solving the various issues and cleaning up my templates, I will be delighted to share the TF code here.

@TheBlackOrion

Thanks for confirming :+1:

If enabling N8N_RUNNERS_ENABLED=true and N8N_RUNNERS_MODE=external resolved the 404s, then this confirms it was a bootstrap issue caused by an incomplete Task Runner configuration, not an infrastructure or Cloud Run problem.

With all required runner variables set (including the shared N8N_RUNNERS_AUTH_TOKEN), n8n is able to fully initialize and register its routes correctly.

Glad to hear it’s working now!
If this solution solved the issue for you, please consider leaving a like or marking the reply as the solution (it helps others find the answer more easily and also supports community contributors).

n8n version

2.4.6 (Self Hosted)

Stack trace

Error: Task request timed out after 60 seconds
    at LocalTaskRequester.requestExpired (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:304:17)
    at LocalTaskRequester.onMessage (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:272:10)
    at TaskBroker.handleRequestTimeout (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:115:50)
    at Timeout.<anonymous> (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:102:9)
    at listOnTimeout (node:internal/timers:588:17)
    at processTimers (node:internal/timers:523:7)

Do I have to check the firewall rules, or is it something related to the environment variables?
The value of N8N_RUNNERS_TASK_BROKER_URI is the custom domain of my application load balancer, which is used to reach the main n8n instance from the Internet.

Setting N8N_BLOCKING_TASK_TIMEOUT=300 does not solve the problem, so I suppose the timeout is due to a networking issue.

@TheBlackOrion

Yes, it’s definitely worth checking both: your network/firewall rules and the environment variables related to your runners.

That “Task request timed out after 60 seconds” error almost always means your runner can’t establish communication with the task broker in your n8n instance.

Here's what I'd recommend checking:

1. Double-check your N8N_RUNNERS_TASK_BROKER_URI. This is usually the culprit! Make sure it’s pointing to the internal service address of your n8n instance (or worker), not your public domain or load balancer. It should look something like http://n8n:5679 or http://n8n-worker:5679 - basically whatever your service/container name is internally.

2. Verify your broker is listening properly. On your n8n/worker side, you’ll want N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0 so it accepts connections from other containers or services.

3. Network and firewall configuration. Your runners and n8n need to be able to talk to each other, which means:

  • They should be on the same internal network (same VPC/subnet, or same Docker network)
  • Port 5679 needs to be open between them - check your security groups, firewall rules, etc.

I’ve seen this exact scenario with ECS and Docker setups where runners just keep saying “Waiting for task broker to be ready…” and n8n times out. Usually it’s either the URI pointing to the wrong place or the network not being configured to allow that traffic through.

4. Consider adjusting the timeout (if everything else looks good). If your setup is correct but runners are just taking a bit longer to spin up, you could try increasing N8N_RUNNERS_TASK_REQUEST_TIMEOUT from the default 20 seconds to give them more breathing room.
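If the firewall turns out to be the blocker (point 3), the rule on GCP would look roughly like this (the network name and source range below are placeholders, adjust them to your VPC layout):

```shell
# Allow internal traffic to reach the task broker port inside the VPC
# (my-vpc and 10.0.0.0/8 are placeholders for your network and subnet ranges)
gcloud compute firewall-rules create allow-n8n-task-broker \
  --network my-vpc \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:5679 \
  --source-ranges 10.0.0.0/8
```

Scope the source range as tightly as your subnet layout allows rather than opening it to the whole RFC 1918 space.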

Let me know what you find! Happy to help troubleshoot further if you’re still running into issues after checking these.

Is it possible that Cloud Run, with or without an application load balancer, only exposes port 5678, meaning there is no way to reach the broker listening inside the main container on port 5679? I am trying to deploy nginx as a sidecar container, but I wonder if that makes sense.