I deploy n8n (main mode) on Kubernetes. For the moment I have just 1 pod, but I want to have several instances/servers.
So can I have 3 servers connected to the same database but in different pods? (main mode)
PS: they will be linked to the same domain (e.g. n8n.mydomain.com).
For queue mode, I will have a main server and several workers, but can I have several main servers and X workers?
My objective is to have 3 main servers and X workers.
Hi @sGendrot, this wouldn’t really work for n8n, I am afraid.
The reason is that changes to the database aren’t reflected by other instances until they are restarted. So if you disable a workflow on instance A, this change takes effect immediately on instance A and is also written to the database.
However, instance B would not notice this change and would still assume the workflow is active.
Could be an interesting feature request though (allow using multiple main instances in queue mode); I’ll convert your question into a feature request so you and other users can vote on it.
Just out of interest, have you hit any limits or problems with using a single main instance?
Ok, thanks for the answer. So I guess we have the same limitation (only one main instance) for instances in main mode.
The problem: we prefer to have at least 2 replicas of every application (if a pod is killed for any reason, the service is still available).
@sGendrot do all replicas need to be active at the same time or could you just run with an active-passive style setup and just fire it up if the other pod goes down?
For disaster recovery, an active-passive setup is acceptable.
But I’m also concerned about the capacity of the main instance to deal with a heavy load. I have deployed workers with Redis, but if the main is overloaded, no new executions will start.
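For context, here is a minimal sketch of how my queue-mode processes are started (environment variable names are from the n8n docs; the Redis hostname is a placeholder for my cluster service):

```shell
# Shared settings for main and workers (queue mode)
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.internal  # placeholder hostname
export QUEUE_BULL_REDis_PORT=6379

# Main instance pod: serves the UI and enqueues executions
n8n start

# Worker pods (scaled via Kubernetes replicas): pick up jobs from Redis
n8n worker
```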
In theory the main node shouldn’t be overloaded, as it would only be used to create new workflows and run test executions. Have you run into any issues where the main does become overloaded?
The good news is that because an active-passive setup is acceptable, you could just kill the main node and fire up a new one.
Thanks for your answer.
For the moment, we don’t have issues with it. We will conduct several load tests to be more confident with it.
I will also deploy webhook processors to scale them.
I will post the results of our tests.
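For the webhook side, this is roughly what I have in mind: dedicated webhook processor pods started with n8n’s `webhook` command (queue mode required; the Redis hostname below is a placeholder):

```shell
# Hypothetical sketch: a dedicated webhook processor pod.
# It receives production webhook calls and enqueues the executions in Redis,
# so the main instance and workers are not hit by incoming webhook traffic.
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.internal  # placeholder hostname
n8n webhook
```

The idea would be to route `/webhook/*` traffic to these pods at the ingress level and scale them independently of the workers.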
I know we have done some internal benchmark testing recently, but I am not sure if we are going to share it or if it is just for internal learning so we can see what we need to improve on.