N8N Main Instance/Queue Instance Setup?

Hey all,

I am looking to set up n8n in queue mode, with workers on different servers (each with its own worker/runner instance).

Should look something like below:


                   ┌───────────────────────────────┐
                   │        io.domain.com          │
                   │          (Server 1)           │
                   │                               │
                   │          ┌─────────┐          │
                   │          │  Main   │          │
                   │          └────┬────┘          │
                   │               │               │
                   │               ▼               │
                   │             Redis             │
                   └───────────────┬───────────────┘
                                   │
                                   ▼


┌────────────────────┐   ┌────────────────────┐   ┌────────────────────┐
│   w1.domain.com    │   │   w2.domain.com    │   │   w3.domain.com    │
│     (Server 2)     │   │     (Server 3)     │   │     (Server 4)     │
│                    │   │                    │   │                    │
│    n8n-worker-1    │   │    n8n-worker-2    │   │    n8n-worker-3    │
│         │          │   │         │          │   │         │          │
│         ▼          │   │         ▼          │   │         ▼          │
│    n8n-runner-1    │   │    n8n-runner-2    │   │    n8n-runner-3    │
│  (Code execution)  │   │  (Code execution)  │   │  (Code execution)  │
└────────────────────┘   └────────────────────┘   └────────────────────┘

Just a couple of questions:

  • Should I run one worker on the main server, or keep it clear?
  • What are acceptable minimum specs for something like this? I was hoping for something like 1 vCPU/1 GB RAM on DO.
  • Are there any particularly good docs for a setup like this?
  • Which env vars should be excluded on the workers (obviously webhook URLs, etc.)?

Thanks

Your architecture looks solid, but 1 vCPU/1 GB is going to be rough; even worker-only nodes will want at least 2 GB, since each worker process uses roughly 200-500 MB and you need a task runner sidecar per worker too. For the env vars: workers just need EXECUTIONS_MODE=queue, the Redis/Postgres connection vars, and the same N8N_ENCRYPTION_KEY as the main instance. Skip WEBHOOK_URL and the editor-related settings, since the main instance handles all that. The docs at Configuring queue mode | n8n Docs cover the full setup pretty well, and yeah, I'd keep the main server clear of workers if you've already got three dedicated worker boxes.
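For reference, a minimal worker environment might look something like this. This is just a sketch: the hostnames, database names, and credentials are placeholders for your setup, so double-check the variable names against the queue mode docs for your n8n version:

```shell
# Sketch of a queue-mode worker's environment (hosts/credentials are placeholders).
export EXECUTIONS_MODE=queue                  # participate in queue mode
export QUEUE_BULL_REDIS_HOST=io.domain.com    # Redis on the main server
export QUEUE_BULL_REDIS_PORT=6379
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=io.domain.com       # shared Postgres
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=change-me
export N8N_ENCRYPTION_KEY=same-key-as-main    # must match the main instance
# Deliberately NOT set: WEBHOOK_URL and editor/UI settings - main handles those.
```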

thanks :slight_smile:

I wasn’t sure what a worker with a stripped UI and no webhook handling would require, but it makes sense that 2 GB would do it.

I wonder whether it would be better to reduce the concurrency of each worker and run more low-powered workers, or run fewer but more powerful workers with higher concurrency per worker. Traffic can be sporadic, and it is easier/more cost-effective to spin a $6/month n8n instance up or down (with a bash script).

Do you happen to know if there is any internal reporting of how long jobs take to clear the Redis queue? I guess that would be the trigger for adding/removing workers, right?

@Chris_Bradley You can also run a plain main instance in queue mode, without any webhooks or workers on it, and run multiple workers/runners on other nodes using the official queue mode. Pair that with a Prometheus/metrics setup to monitor job wait times and tune resources, instead of relying on 1 vCPU/1 GB boxes.

Do you mean on the same VPS?

I could definitely do that, but I guess it is not super scalable, right? You are still limited by the hardware of the VPS, whereas with a bash deployment script you could spin an extra worker up or down with ease, without adjusting the main n8n instance (which holds all the flows/creds/etc.)…

hook the new instance up to redis queue and you are up and running.
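A sketch of what that spin-up script could look like, assuming Docker on the new box and the Redis/Postgres details pointing back at the main server. All hostnames, keys, and passwords below are placeholders, and the concurrency value is just an example:

```shell
#!/usr/bin/env bash
# Hypothetical worker bootstrap - run on a freshly provisioned VPS.
set -euo pipefail

docker run -d --name n8n-worker \
  --restart unless-stopped \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=io.domain.com \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=io.domain.com \
  -e DB_POSTGRESDB_PASSWORD=change-me \
  -e N8N_ENCRYPTION_KEY=same-key-as-main \
  -e N8N_CONCURRENCY_PRODUCTION_LIMIT=5 \
  docker.n8n.io/n8nio/n8n \
  worker

# Tearing the node down again later is just:
#   docker rm -f n8n-worker
```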

Yes. If you have good hardware, that idea would really work. Again, it all comes down to the hardware and the tunneling service you use to expose the local instance; otherwise everything should work as expected.

Yeah, the many-cheap-workers approach actually works well with queue mode; that’s kind of the whole point of it. You can set N8N_CONCURRENCY_PRODUCTION_LIMIT to something low like 5 on each worker so they don’t get overwhelmed, and since Redis handles the job distribution, you can spin instances up and down without touching the main server at all. Your instinct about the bash deploy script is right: that’s far more flexible than trying to vertically scale one box.

I get it, I guess it comes down to how much you want to spend on the main instance.

It’s quite likely you can set this up with the main running at, say, 2 GB RAM/2 vCPU, then push work into new instances of the same size. If you have less processing to do, you can turn VPSes off.


I would say it depends on your usage and use case. If your usage is going to be heavier and more intensive, upgrading to at least 8 GB of RAM would be a better take, even split across multiple small instances or projects. A VPS like Hostinger would also work fine with their KVM2 plans.

Thanks, yeah it looks like that is the design of it TBH.

This is what it looks like running right now:

NAME                 CPU %     MEM USAGE
n8n-runners-1-1      0.02%     3.969MiB
n8n-n8n-worker-1-1   0.27%     136.5MiB

I think ubuntu uses somewhere around 500mb.

I think you are right, it probably needs 2 GB at minimum, although I would like to push 1 GB and see what happens. It could also be possible to host 2x workers on a single server (diluting the Ubuntu RAM overhead somewhat).

True,

The issue we face is that demand is not the same each month, and it is growing exponentially year over year, so the n8n needs we had last year don’t match this year’s. If we get an instance now, we will likely outgrow it next year.


@Chris_Bradley Understood, let me give you a clear outline: start with the KVM1 plan and only upgrade when your n8n usage gets high. Depending on the use case you could also pick the KVM2 machine, which I personally use, and you can always downgrade from KVM2 to KVM1 (and likewise with the other options). That would be a better fit for your use case.

thanks @Anshul_Namdev

I know what you are saying around a more powerful single KVM2 and running Queue mode on single instance.

I do wonder if that holds up in practice, because I can imagine that the one time your VPS is under heavy workload, all the workers and the main n8n instance (webhooks etc.) get affected, whereas in a multi-VPS setup a particularly draining workflow will only affect the server it is running on…

I think there are honestly solid use cases for both…


Indeed, that is highly case-dependent. I would suggest going with the Hostinger KVM setup that fits your use case.

Over DigitalOcean?

If you prefer the shared setup, then DigitalOcean is a better take, but I would personally stick with Hostinger on a self-hosted enterprise-level plan, given your usage pattern of sometimes high and sometimes very low. (It’s also cost-effective.)

reading Enable Prometheus metrics | n8n Docs

Is Prometheus built into n8n?

Yes, Prometheus metrics support is built into n8n for self-hosted instances.

Thanks. Re: Hostinger and DigitalOcean, I guess I am more experienced with DigitalOcean, which is why I prefer it, but I will have a look, as the costing looks pretty good.

how does domain mapping work on Hostinger?

It is pretty much fine, though it sometimes has certain disconnections with the domain; overall it’s all easy: