By default, n8n simply starts a workflow whenever it is asked to, meaning you could have anywhere between 0 and n executions running at any given point. You do not really have any control over that. Depending on the amount of memory your machine has, n8n would then crash at some point once it runs out of memory.
With worker processes, you can gain some control: n8n will only run as many workflows in parallel as there are free slots on workers. But if the workflows require too much memory, the workers would crash as well.
If you want a formula, it would be something like: [memory of n8n main process] + [number of workflows running] x [memory required per workflow execution] < [RAM of the worker]
The main process normally takes up around 100 MB.
How much memory a workflow execution requires depends mainly on the amount of data it processes.
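As a back-of-the-envelope sketch of that formula (the per-execution and worker RAM numbers below are assumptions for illustration; only the ~100 MB baseline comes from the thread, so measure your own workflows):

```shell
# Rough sizing for one worker. All numbers except the 100 MB
# baseline are assumptions -- measure your own workloads.
MAIN_MB=100          # n8n process baseline (~100 MB, as mentioned above)
PER_EXEC_MB=150      # assumed average memory per workflow execution
WORKER_RAM_MB=1024   # RAM available to the worker

# Maximum parallel executions that still fit in RAM
MAX_PARALLEL=$(( (WORKER_RAM_MB - MAIN_MB) / PER_EXEC_MB ))
echo "A 1 GB worker fits at most $MAX_PARALLEL such executions in parallel"
```

With these example numbers, (1024 - 100) / 150 rounds down to 6 parallel executions before the worker risks running out of memory.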
Another consideration is how protected you want to be against crashes. If you have 5 workers with 10 executions each, then any of those 10 workflows, or any combination of them, could crash a worker. If you run only one execution per worker, then each execution can only take down itself, but no other execution.
You also have to be aware that the executions on a worker share one CPU. So if you have very CPU-intensive workflows, running 50 in parallel is probably not a good idea, as they would take forever to finish. Additionally, you would also have to provide a lot of RAM to that worker.
So there is sadly no clear, simple answer; it really depends on your specific needs. But generally, the fewer executions you run in parallel on a worker, the lower the chance that the worker runs out of memory and crashes. At the same time, that obviously also increases the chance that workflow executions get delayed.
thanks for the explanation, I’m now better informed than before
So, knowingly ignoring the variability in memory requirements between different workflows, or better yet, assuming the exact same load scenario:
I understand that the "5 workers x 10 executions" approach is better than the "1 worker x 50 executions" approach simply because there is less risk of bringing everything down if one workflow (or any combination) causes a crash.
I was asking to see whether or not you'd tell me "launching 5 worker processes has huge overhead" or something like that.
I assume the real benefit of worker processes lies in running them on multiple hosts, but I'm not there yet.
thanks for your time and great product
Regarding overhead: yes, there definitely is one. How much it matters depends on the RAM you give each worker; the more RAM, the less it matters.
Not sure how much you wanted to give each worker, but considering that n8n needs at least 100 MB without running anything, a worker with, say, 300 MB of RAM would be proportionally much more wasteful than one with 1 GB, since the baseline alone eats a third of it.
You are welcome. It's always great to hear that n8n is helpful for people! Have fun!
Depends. If you set everything up in a way that the default parameters work, then yes. So if Redis is reachable via the default port (6379) on localhost and no password is set, then you would not have to configure anything else on the worker.
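For reference, a non-default Redis setup is pointed at via environment variables before starting the worker. A minimal sketch using n8n's queue-mode variables (the host, port, and password values here are placeholders, not your actual settings):

```shell
# Queue mode: the main process and workers find Redis via these variables.
# Values below are placeholders -- replace them with your own Redis details.
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=localhost
export QUEUE_BULL_REDIS_PORT=6379
export QUEUE_BULL_REDIS_PASSWORD=your-redis-password

# Start a worker; --concurrency caps how many executions run on it in parallel.
n8n worker --concurrency=10
```

The same variables need to be exported in the environment of every worker and of the main process, since they all connect to the same Redis instance.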
You can find more information about the configuration on the page you linked originally:
I did everything by the book, but Redis required a password. I made sure to export that parameter, and I assume the exported parameters are read by the worker as well, so there shouldn't be an issue there.
I just believe that it's working OK, but how do I monitor whether the worker process is actually being used? Is there a quick way of checking that, without setting up a huge load and monitoring CPU usage?
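One low-effort check, without generating any real load: trigger a single workflow and watch the Bull queue counters in Redis. This sketch assumes n8n's default Bull queue key prefix (`bull:jobs:...`); the key names may differ depending on your n8n version, so adjust as needed:

```shell
# Assumes n8n's default Bull queue name ("jobs") -- key names may vary by version.
# -a passes the Redis password; trigger one workflow, then watch these counters.
redis-cli -a your-redis-password llen bull:jobs:wait     # executions waiting for a worker
redis-cli -a your-redis-password llen bull:jobs:active   # executions a worker picked up
```

Alternatively, just watch the worker's console output: it should log each job it picks up, which confirms that executions are actually being handed off to the worker rather than run by the main process.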