Server sizing requirements

Hi,

I have hosted this on a Linux VM for now for testing and everything is working fine. I want to know the server sizing requirements for production to support 1000+ flows running at the same time.
Any suggestions would be really helpful.

It is hard to answer. It really depends a lot on what the workflows are doing, how long they run, how fast you want them to complete their job, … But no matter what, I think having 1000+ run at the same time is not something that is possible right now. If you use the default configuration and we say that you want to have 128 MB for each process, then the machine should probably have something like 128 GB of RAM. If you set EXECUTIONS_PROCESS=main so that they all run in the same process, then much, much less (it then depends a lot on how much data you process). But then the CPU becomes the limiting factor, as everything is single-threaded and the 1000 workflows have to share one CPU.
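For reference, a minimal sketch of how you could switch between the two modes on a plain install started from a shell (only an illustration of the setting mentioned above, not a tuned production setup):

```
# Default: each execution runs in its own child process (roughly
# 1000 concurrent executions x 128 MB ≈ 128 GB RAM in the example above).
# That default corresponds to EXECUTIONS_PROCESS=own.

# Run everything in the main process instead: far less RAM, but
# single-threaded, so all workflows share one CPU.
export EXECUTIONS_PROCESS=main
n8n start
```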

So I would say that for that kind of workload you either have to wait until we add multi-tenancy, or until we add support for running workflows serverless on something like Lambda.

Thanks for your reply, @jan

Any timeline for multi-tenancy or serverless support?

@mahesh
If you are running in a virtual environment, you could always set up a Kubernetes configuration that spawns new instances of the n8n server on demand as the load requires. You could then design one instance of n8n to act as a load balancer and redirect n8n workloads to each of the spawned n8n “workers”. As resource demands change, you could then create or destroy workers as required.
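To make the autoscaling half of that idea a bit more concrete, here is a very rough kubectl sketch; the deployment name, image, and thresholds are placeholders, not a tested configuration:

```
# Hypothetical sketch: run n8n as a Deployment and let Kubernetes add or
# remove worker pods as CPU load changes (names/thresholds are placeholders).
kubectl create deployment n8n-worker --image=n8nio/n8n
kubectl expose deployment n8n-worker --port=5678
kubectl autoscale deployment n8n-worker --cpu-percent=80 --min=1 --max=10
```

The Service that `kubectl expose` creates already spreads traffic across whatever worker pods exist, so it can stand in for the load-balancer role for incoming HTTP traffic.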

I’m actually building an RPi cluster as a proof of concept right now to do something much like this.


@mahesh sadly it is currently not clear when it will be ready. In the next few weeks, however, the webhooks will be updated to make them stateless. That means it will then be possible to at least scale n8n horizontally for webhooks. You can then simply start multiple instances and have something like Nginx in front which distributes incoming requests across them.
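For illustration only, here is a rough sketch of what that could look like on a single Linux host with Docker; the container names, ports, and config path are placeholders, not an official setup:

```
# Hypothetical sketch (Linux host, placeholder names/ports): two identical
# n8n instances plus Nginx spreading incoming webhook requests across them.
docker run -d --name n8n-a -p 5671:5678 n8nio/n8n
docker run -d --name n8n-b -p 5672:5678 n8nio/n8n

# Minimal Nginx config that forwards requests to both instances.
cat > /tmp/n8n-proxy.conf <<'EOF'
upstream n8n_webhooks {
    server 127.0.0.1:5671;
    server 127.0.0.1:5672;
}
server {
    listen 80;
    location / {
        proxy_pass http://n8n_webhooks;
        proxy_set_header Host $host;
    }
}
EOF

# Host networking so Nginx can reach the published ports on 127.0.0.1.
docker run -d --name n8n-proxy --network host \
  -v /tmp/n8n-proxy.conf:/etc/nginx/conf.d/default.conf:ro nginx
```

Nginx round-robins across the listed instances by default, which should be enough once the webhook handling is stateless.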