Hi, I read the queue mode documentation, but a few things are still unclear to me:
Can I run a worker in the same Docker container as the main n8n instance?
Should a worker use the same environment variables as the main n8n instance when I create a worker instance?
When I run `n8n worker`, a new worker is created each time, isn't it? Does each worker get a different name or ID?
No, they have to be separate instances. So the absolute minimum configuration requires three running Docker containers: 1x n8n main instance, 1x n8n worker instance, and 1x Redis instance.
Yes, that is also possible. You just have to make sure that they are all on the same network and reachable via the hostname that gets set (for example “redis” in the example above).
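As a minimal sketch of what that could look like (image names, tags, and port mappings are assumptions, not a definitive setup; the queue-mode environment variables are `EXECUTIONS_MODE` and `QUEUE_BULL_REDIS_HOST`), a docker-compose file with all three containers on one network might be roughly:

```yaml
# Minimal queue-mode sketch: one main instance, one worker, one Redis.
# All three services share the default compose network, so the main
# instance and the worker can reach Redis via the hostname "redis".
version: "3.7"

services:
  redis:
    image: redis:6-alpine

  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    ports:
      - "5678:5678"

  n8n-worker:
    image: n8nio/n8n
    command: worker            # same image, started as "n8n worker"
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
```

Both n8n services here point at Redis through its service name, which is exactly the hostname-resolution point made above.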
Hi, I built the queue and worker setup successfully,
but I have a few more questions. Could you take some more time for me?
1. Can I assign a specific worker to a specific workflow in the main n8n instance?
Some of the workflows I built run many executions at the same time. Could I assign dedicated workers only to the workflows that consume more resources?
Hey @Pooja, this thread is already marked as solved, so it probably won’t get much attention. It’s better to open a new topic if you have additional questions. That said, perhaps @krynble knows the answer to this?
Unfortunately no, currently there is no way to specify which workers run which sort of workflows. Once a job enters the queue, any available worker picks it up and processes it.
The way n8n works is according to the second example: each finished job frees up a slot for another one to start.
Not really. When using queue mode, the `main` or `own` setting only affects manual executions. Every “production” execution (started by a trigger) will always run in the worker’s process (effectively acting like `main`).
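Assuming the setting being discussed here is the `EXECUTIONS_PROCESS` environment variable (an assumption on my part, inferred from the `main`/`own` values), a fragment illustrating the point might look like:

```yaml
# Sketch only: EXECUTIONS_PROCESS is assumed to be the setting meant above.
# In queue mode it only changes how *manual* executions run on the main
# instance; trigger-started executions are always handed to workers.
services:
  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - EXECUTIONS_PROCESS=main   # or "own"; irrelevant for queued jobs
```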
You cannot nest workers. The way you could divide your deployment is by having separate Redis and database instances, effectively running two nearly identical installations side by side.
Currently no, the only way to specify concurrency is via the start command, which can be overridden with Docker’s entrypoint/command or, as you mentioned, by building another image.
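For example, overriding the command in docker-compose could look like this sketch (the `--concurrency` flag belongs to the `n8n worker` CLI command; the service layout and Redis hostname are assumptions carried over from the earlier setup):

```yaml
services:
  n8n-worker:
    image: n8nio/n8n
    # Override the container command to set how many jobs this worker
    # processes in parallel (10 is the default if the flag is omitted).
    command: worker --concurrency=5
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
```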
Webhook processes are responsible for handling incoming HTTP requests that are related to workflow executions. So every request that comes to n8n and should trigger a new workflow execution can be intercepted by those instances. This allows you to scale the traffic n8n can handle by adding multiple webhook processes. You still need workers to process the executions.
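A dedicated webhook processor is started with the `n8n webhook` command; as a rough compose fragment (service name and the load-balancing note are assumptions, not a complete recipe), it could look like:

```yaml
services:
  n8n-webhook:
    image: n8nio/n8n
    command: webhook           # dedicated webhook processor
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    # Run several replicas behind a load balancer that routes incoming
    # webhook traffic to these instances; workers still execute the jobs.
```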