Can we start the n8n main and worker instances on the same machine?

Describe the problem/error/question

Can we start the n8n main and worker instances on the same machine?

We have hosted n8n via Node.js. How do we configure the n8n main and worker instances on the same machine?

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @shakthipsg
Maybe this will help:

Below is the env data we have configured on our system; 'n8n start' and 'n8n worker' run in separate console sessions.
    "NODES_EXCLUDE": "[]",
    "N8N_BLOCK_ENV_ACCESS_IN_NODE": "false",
    "N8N_MIGRATE_FS_STORAGE_PATH": "true",
    "N8N_ENDPOINT": "http://localhost:5678",
    "N8N_ENCRYPTION_KEY": "<key>",
    "N8N_LOG_LEVEL": "debug",
    "N8N_LOG_OUTPUT": "console",

    "EXECUTIONS_MODE": "queue",
    "OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS": "true",

    "QUEUE_BULL_REDIS_HOST": "localhost",
    "QUEUE_BULL_REDIS_PORT": "6379",

    "DB_TYPE": "postgresdb",
    "DB_POSTGRESDB_DATABASE": "n8n_db",
    "DB_POSTGRESDB_HOST": "localhost",
    "DB_POSTGRESDB_PORT": "5432",
    "DB_POSTGRESDB_USER": "n8n_user",
    "DB_POSTGRESDB_PASSWORD": "n8n",
    "DB_POSTGRESDB_SCHEMA": "n8n"
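For reference, with variables like these set, queue mode is usually launched as two separate processes, one per console session. A sketch of the two sessions (the concurrency value is illustrative; both processes must also share the same `N8N_ENCRYPTION_KEY`, Postgres database, and Redis instance):

```shell
# Session 1: the main instance (UI, webhooks, scheduling)
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=localhost
export QUEUE_BULL_REDIS_PORT=6379
n8n start

# Session 2: a worker that picks up jobs from the Redis queue
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=localhost
export QUEUE_BULL_REDIS_PORT=6379
n8n worker --concurrency=10
```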

where 'n8n worker' exits with:
info n8n Task Broker's port 5679 is already in use. Do you have another instance of n8n running already? { "file": "task-broker-server.js" }

Updated the n8n version from 2.11.2 to 2.13.3.
With 2.13.3, we are able to launch 1 main n8n and 1 worker n8n instance.

But we face the same issue when trying to launch a 2nd worker instance:

info n8n Task Broker's port 5679 is already in use. Do you have another instance of n8n running already? { "file": "task-broker-server.js" }

I think @Jekylls would be able to assist you better with this.

Thank you @Anshul_Namdev
Hi @Jekylls , Do you have any inputs on this?


Heya @shakthipsg - try checking your system to see whether an n8n session is already running: ps -ef | grep n8n

For instance, here I can see that the PID (Process ID) of my main n8n instance is 3300:

[screenshot of the ps -ef output]

Also validate whether you're running a shell through a session manager such as screen or tmux:

screen -ls

tmux ls

You can also check whether the port is already open (which gives you a clue, but not which application holds it):

netstat -atlu | grep 5679

or

ss -atlu | grep 5679

Once you know which PID (Process ID) the worker is attached to, you can kill it, but make sure you have saved your work beforehand, as this will likely discard any data being processed.

sudo kill <PID>

Let me know how this goes for you; I'm on standby.


Yes @Jekylls, we have 1 worker running successfully, but we are not able to start more than 1 worker.

@Jekylls Can we start each worker with a different Task Broker port? Below is our main- and worker-specific configuration.

module.exports = {
  apps: [
    {
      name: "n8n-main",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "",
      interpreter: "node",
      instances: 1, // Always keep main as 1
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5679",
        N8N_RUNNERS_AUTH_TOKEN: "test123"
      }
    },
    {
      name: "n8n-worker-01",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5679",
        N8N_RUNNERS_AUTH_TOKEN: "test123"
      }
    },
    {
      name: "n8n-worker-02",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5679",
        N8N_RUNNERS_AUTH_TOKEN: "test123"
      }
    }
  ]
};
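All of these apps run on one machine, so any two entries that end up binding the same Task Broker port will clash. If I read the behaviour correctly, in internal runner mode each process may open its own broker, and an unset N8N_RUNNERS_BROKER_PORT falls back to the default 5679, which would explain the error above. A hypothetical sanity-check sketch (the helper and inline sample are mine, not part of n8n or PM2):

```javascript
// Sketch: detect Task Broker port collisions among PM2 apps that
// would all run on the same machine.
const DEFAULT_BROKER_PORT = "5679"; // n8n's default Task Broker port

function findPortCollisions(apps) {
  const seen = new Map(); // port -> first app that claims it
  const collisions = [];
  for (const app of apps) {
    const env = app.env || {};
    // Only internal-mode processes open their own broker;
    // an unset port falls back to the default.
    if (env.N8N_RUNNERS_MODE !== "internal") continue;
    const port = env.N8N_RUNNERS_BROKER_PORT || DEFAULT_BROKER_PORT;
    if (seen.has(port)) {
      collisions.push(app.name + " and " + seen.get(port) + " would both bind port " + port);
    } else {
      seen.set(port, app.name);
    }
  }
  return collisions;
}

// Inline sample mirroring the layout above: the worker declares no
// broker port of its own, so it would fall back to 5679 and collide.
const sample = [
  { name: "n8n-main", env: { N8N_RUNNERS_MODE: "internal", N8N_RUNNERS_BROKER_PORT: "5679" } },
  { name: "n8n-worker-01", env: { N8N_RUNNERS_MODE: "internal", N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5679" } }
];

console.log(findPortCollisions(sample));
```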

Hello @shakthipsg You only need one Task Broker to coordinate many workers. The Task Broker is different from the workers: the Task Broker manages the work queue, and the workers handle the workloads from the queue. Can you please confirm whether there's something else you're attempting to achieve?

I would provide you with a working version, but I am currently on a bus, apologies!

Thank you @Jekylls
Yes, that is what we are trying to achieve. We are using the Enterprise Edition of n8n in production, where we start 5 workers, each with a concurrency limit of 50.

Common configuration for both Main and Worker instances:
DB_TYPE postgresdb
N8N_BLOCK_ENV_ACCESS_IN_NODE false
Path C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;C:\Users\Administrator\AppData\Roaming\npm
EXECUTIONS_MODE queue
OFFLOAD_MANUAL_EXECUTIONS_T… true
N8N_ENDPOINT http://localhost:5678
N8N_MIGRATE_FS_STORAGE_PATH true
DB_POSTGRESDB_USER n8n_user
QUEUE_BULL_REDIS_HOST localhost
N8N_LOG_LEVEL info
DB_POSTGRESDB_DATABASE n8n_db
N8N_ENCRYPTION_KEY *****
N8N_LOG_OUTPUT console
NODES_EXCLUDE
QUEUE_BULL_REDIS_PORT 6379
DB_POSTGRESDB_HOST localhost
DB_POSTGRESDB_PASSWORD n8n
DB_POSTGRESDB_PORT 5432
DB_POSTGRESDB_SCHEMA n8n

PM2 start configuration for Main and Worker instances:
module.exports = {
  apps: [
    {
      name: "n8n-main",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "",
      interpreter: "node",
      instances: 1, // Always keep main as 1
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5679"
      }
    },
    {
      name: "n8n-worker-01",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5680",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5680"
      }
    },
    {
      name: "n8n-worker-02",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5681",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5681"
      }
    },
    {
      name: "n8n-worker-03",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5682",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5682"
      }
    },
    {
      name: "n8n-worker-04",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5683",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5683"
      }
    },
    {
      name: "n8n-worker-05",
      script: "C:\\Users\\Administrator\\AppData\\Roaming\\npm\\node_modules\\n8n\\bin\\n8n",
      args: "worker --concurrency 50",
      interpreter: "node",
      instances: 1, // Scale workers here
      exec_mode: "fork", // Important for n8n workers
      env: {
        N8N_RUNNERS_MODE: "internal",
        N8N_RUNNERS_ENABLED: "true",
        N8N_RUNNERS_BROKER_PORT: "5684",
        N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5684"
      }
    }
  ]
};

Good. Then you only need to declare the Task Broker port/URI once. One Task Broker serves the entire cluster. Does that make sense?

I will provide a fixed version when I am home, but you need to use the same broker on every worker: create only one broker and reference that one broker in every worker's configuration.

Change all workers to use this:

N8N_RUNNERS_BROKER_PORT: "5679"

N8N_RUNNERS_TASK_BROKER_URI: "127.0.0.1:5679"

This was our initial configuration, but it starts only 1 worker. All remaining workers stop with the message below, which is the actual issue posted:

"""
info n8n Task Broker's port 5679 is already in use. Do you have another instance of n8n running already? { "file": "task-broker-server.js" }
"""

Are you sure you're not running more than one n8n main instance? Please share the output of

ps -ef | grep n8n

10824 node --disallow-code-generation-from-strings --disable-proto=delete C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\@n8n\task-runner\dist\start.js


Run 'pm2 stop all' or 'pm2 delete n8n' and try starting main again. Let me know the outcome.

The 1 main and 1 worker instance that came online on the 1st attempt remain the same. The other worker instances get restarted, turn Online, but then go back to Stopped.