I have some questions about workers and queues

Hi, I read the queue mode documentation, but a few things are still unclear to me:

  1. Can I run a worker in the same Docker container as the main n8n instance?
    When I create a worker instance, are its environment variables the same as the main n8n instance's?

  2. When I run a worker with the command n8n worker, a new worker gets created each time, doesn't it? Does each worker have a different name or ID?

  3. How can I remove a worker from the list?

To your questions:

  1. No, they have to be different instances. So for the absolute minimum configuration, you require three running Docker containers: 1x n8n-main-instance, 1x n8n-worker-instance, 1x Redis-instance
  2. Yes, I guess they all get some unique identifier
  3. You simply stop the worker

n8n uses bull. So if you wonder about the inner workings you can find more information here:
https://github.com/OptimalBits/bull/
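If you want to peek at what Bull stores, you can also inspect the Redis keys directly. A quick sketch, assuming your Redis container is named redis; the bull: prefix is Bull's default, but treat the exact key names as an assumption:

# List the keys Bull has created in Redis (jobs, wait/active lists, etc.)
docker exec -it redis redis-cli KEYS 'bull:*'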


So, I need to set up the 3 Docker containers below:

But I cannot find a way to connect the worker container to the Redis or main container.
Am I missing something here?

  1. Main container with variables:
    N8N_ENCRYPTION_KEY
    QUEUE_BULL_REDIS_HOST
    QUEUE_BULL_REDIS_PORT
    QUEUE_BULL_REDIS_PASSWORD

    DB_MYSQLDB_DATABASE
    DB_MYSQLDB_HOST
    DB_MYSQLDB_PASSWORD
    DB_MYSQLDB_PORT
    DB_MYSQLDB_USER
    DB_TYPE
    N8N_BASIC_AUTH_ACTIVE
    N8N_BASIC_AUTH_PASSWORD
    N8N_BASIC_AUTH_USER
    N8N_HOST
    N8N_PROTOCOL
    VUE_APP_URL_BASE_API
    WEBHOOK_TUNNEL_URL

  2. Redis container, started with:

    docker run --name some-redis -p 6379:6379  -d redis
    
  3. Worker container, started with this command and these variables:

    docker run --name worker01 -p 5679:5678 n8nio/n8n n8n worker
    

    N8N_ENCRYPTION_KEY
    EXECUTIONS_MODE

You can find an example in the original PR:

  1. Start redis:
docker run --name redis -it --rm redis
  2. Start the n8n main process:
docker run -it --rm --name n8n-main -p 5678:5678 -e EXECUTIONS_MODE=queue -e QUEUE_BULL_REDIS_HOST=redis --link redis:redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n start --tunnel
  3. Start as many of the worker processes as you like:
docker run -it --rm -e QUEUE_BULL_REDIS_HOST=redis --link redis:redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n worker

Thanks, that is helpful.

I read that Docker does not recommend using --link anymore.
Could I set it up with --network instead?

Yes, that is also possible. You just have to make sure that they are all in the same network and reachable via the hostname that gets set (like for example “redis” in the example above).
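For reference, here is roughly the same setup as above rewritten with a user-defined network instead of --link. The network name n8n-net and the container names are just placeholders; the environment variables mirror the earlier example:

# Create a user-defined bridge network
docker network create n8n-net

# Redis, reachable from the other containers on the network via the hostname "redis"
docker run -it --rm --name redis --network n8n-net redis

# n8n main instance
docker run -it --rm --name n8n-main --network n8n-net -p 5678:5678 -e EXECUTIONS_MODE=queue -e QUEUE_BULL_REDIS_HOST=redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n start

# Worker (run this command once per worker you want)
docker run -it --rm --network n8n-net -e QUEUE_BULL_REDIS_HOST=redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n worker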

Hi, I have set up the queue and workers successfully.
But I have some more questions. Could you spare a bit more time for me?

1. Can I assign specific workers to specific workflows in the main n8n instance?

Some workflows that I built run many executions at the same time. Could I dedicate some workers exclusively to those resource-heavy workflows?

2. How does a worker process jobs in the queue?

Does it work like this (processing in fixed batches of 3, waiting for each batch to finish)?

Tasks in queue | Working (3 concurrency) | Done
7, 6, 5, 4     | 3, 2, 1                 | -
7, 6, 5, 4     | 3, 2                    | 1
7, 6, 5, 4     | 3                       | 2, 1
7, 6, 5, 4     | ready!                  | 3, 2, 1
7              | 6, 5, 4                 | 3, 2, 1

Or like this (always keeping 3 jobs running, refilling a slot as soon as one finishes)?

Tasks in queue | Working (3 concurrency) | Done
7, 6, 5, 4     | 3, 2, 1                 | -
7, 6, 5        | 4, 3, 2                 | 1
7, 6           | 5, 4, 3                 | 2, 1
7              | 6, 5, 4                 | 3, 2, 1

3. Could I use EXECUTIONS_PROCESS with workers?

This might help me optimize how a worker handles executions, with own or main.

4. Could I nest workers?

For example, nesting from a main n8n instance:
Main → worker 01 (in network A)
worker 01 → sub-worker 02 (in network B)
worker 01 → sub-worker 03 (in network B)

Or using one worker for multiple main n8n instances:
Main-01 → worker 01 (in network A)
Main-02 → worker 01 (in network B)

5. Are there more options for controlling concurrency?

// This runs when the worker container is built
n8n worker --concurrency=5

Could I use an environment variable on the main or worker container to control concurrency?

6. I read about the “webhook processor”, but I do not clearly understand it. Could you give me a usage example or a usage model?


Can anyone please answer this?

Hey @Pooja, this thread is already marked as solved, so it probably won’t get much attention. It’s better to open a new topic if you have additional questions. That said, perhaps @krynble knows the answer to this?

Hey @cmdntd987, sorry for the delay.

About your questions:

  1. Unfortunately no, currently there is no way to specify what workers run what sort of workflows. Once a job enters the queue, any of the available workers picks it up and processes it.

  2. n8n works according to your second example: each job that finishes frees up a slot for another one to start.

  3. Not really. When using queue mode, main or own only affects manual executions. Every “production” execution (started by a trigger) will always run in the same process as the worker (effectively acting like main).

  4. You cannot nest workers. The way you could divide your deployment is by having separate Redis and database instances, effectively running two nearly identical installations side by side.

  5. Currently no, the only way to specify concurrency is via the start command, which can be overridden via the Docker entrypoint or, as you mentioned, by building another image (a sketch follows below).

  6. Webhook processes are responsible for handling incoming HTTP requests that are related to workflow executions. So every request that comes to n8n and should trigger a new workflow execution can be intercepted by those instances. This allows you to scale the traffic n8n can handle by adding multiple webhook processes. You still need workers to process the executions.
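Regarding 5, a minimal sketch of overriding the start command when launching a worker container; the network and host names are reused from the examples earlier in this thread, and the concurrency value 10 is just an example:

docker run -it --rm --network n8n-net -e QUEUE_BULL_REDIS_HOST=redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n worker --concurrency=10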
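And regarding 6, a rough sketch of starting a dedicated webhook process. If I remember correctly the command is n8n webhook; the port mapping is arbitrary, and you would still need something like a reverse proxy to route incoming webhook traffic to these containers:

docker run -it --rm --network n8n-net -p 5680:5678 -e EXECUTIONS_MODE=queue -e QUEUE_BULL_REDIS_HOST=redis -v ~/.n8n:/home/node/.n8n n8nio/n8n n8n webhook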

I hope this clarifies your questions :slight_smile:
