When I use Apache ab to benchmark a workflow (the workflow just makes an HTTP request to another website), server CPU usage gets very high and many processes (/usr/local/lib/node_modules/n8n/dist/src/WorkflowRunnerProcess.js) get created.
WorkflowRunner uses 'fork' to create a subprocess, so when the webhook constantly receives requests it creates many subprocesses and CPU usage goes very high.
Yes, that is right. It got implemented like that on purpose. By starting multiple processes it makes sure that if multiple workflows run at the same time they use all available CPUs and do not block each other (which would be a big problem for CPU-intensive tasks). Additionally, it also makes sure that if one execution crashes, all other executions, and especially the main process, keep running uninterrupted.
Honestly, I am also not the biggest fan of it. Less because of the CPU usage, however, and more because of the startup time. Because of the separate processes it now takes around 1 second for a workflow to start, compared to starting instantly when everything ran in the same process.
So any advice or pull request on how that can be improved is more than welcome.
If the usual CPU usage is low and you want to ensure that workflows can still fully use the CPU, you could start N Node.js main processes (where N is the number of CPU cores) instead of letting every workflow run as a child process.
One Node.js process can occupy at most one core. In the scenario described in my question, the CPU usage of each workflow subprocess is in fact quite low.
Ah yes, that is right. But if I have 4 CPUs and 4 Node processes running, and one Node process has two CPU-intensive tasks while the other 3 are doing more or less nothing, then only one CPU gets used and both CPU-intensive tasks block each other. If each runs in a separate process instead, each can use a different CPU. And, as also described above, it is important that if one workflow crashes it does not cause problems for the main process or other running workflows.
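The blocking effect is easy to demonstrate: all JavaScript in one Node.js process runs on a single thread, so two CPU-bound "workflows" cannot overlap (the labels and durations below are made up for illustration):

```javascript
// Two CPU-intensive tasks scheduled "at the same time" in one process:
// the second cannot start until the first has finished burning the CPU.
function busy(ms, label) {
  const start = Date.now();
  while (Date.now() - start < ms) { /* burn CPU */ }
  const elapsed = Date.now() - t0;
  console.log(`${label} done after ${elapsed}ms`);
  return elapsed;
}

const t0 = Date.now();
let endA, endB;
setTimeout(() => { endA = busy(200, 'workflow A'); }, 0);
setTimeout(() => { endB = busy(200, 'workflow B'); }, 0);
// workflow B finishes around 400ms even though both were scheduled at once.
```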
I never used "Worker Threads" but I think they could be a possible solution. If I understand correctly, they are much more lightweight, run in the same process, memory can be shared, and a crash should not take down the whole process.
I understand the scenario you are considering. Our team is now using n8n as a backend service-aggregation layer, like an API gateway or 'service mesh', so the performance bottleneck is network I/O.
Currently n8n starts each workflow in a subprocess, so when requests come in frequently the CPU usage is very high.
Ah yes, I can understand that. That will then be problematic.
Do you have experience in that field? Could "Worker Threads" solve that problem while still offering the advantages of separate processes described above?
I don’t have "Worker Threads" experience.
I think 'Worker Threads' are no different from Java's multithreading. I like Node.js because of its single-threaded model. Once multithreading is introduced, you run into the same problems other programming languages have with multithreading, such as locks and synchronization.
In fact, when the scenario is CPU-intensive, multithreading and multiprocessing end up facing the same problem: CPU contention under concurrency. The advantage of threads shows only at startup, where a thread consumes fewer resources than a process.
@shuimugan I just committed something which allows executing workflows in the main process. It can be activated like that:
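The exact snippet is missing from the thread; assuming the switch that appears in n8n's released configuration, activation would look like this (an assumption based on n8n's documentation, not quoted from the commit):

```shell
# Assumed setting: run workflows in the main process instead of
# forking one subprocess per execution.
export EXECUTIONS_PROCESS=main
n8n start
```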
It has not been released yet, but you can already check out the code and test it that way. I hope it solves your problem. The memory consumption is now obviously much lower and workflows start immediately. It comes, however, with the mentioned trade-offs: only a single CPU gets used, workflows and the main process can block each other, …
This is now released with [email protected]
You can use Docker or pm2 to run more instances, and put nginx in front of n8n as a load balancer.
Sadly that is not possible, as it would cause multiple problems. For example, all trigger nodes would execute in every process, and if an active workflow gets changed it would only update in one process, …