Known limitations in scaling the number of workflows per instance?

I am currently thinking about migrating all of my existing workflows from Node-RED (including lots of custom nodes) to n8n. One of the major drawbacks with Node-RED for me was its extensive memory leaking: it was impossible to run it without PM2, as it consumes tons of RAM.

My question is: What kind of scalability issues will I face when using n8n in production? Are there any best practice guides on the number of workflows you can manage in a specific environment?

My use case would be to have about 10-15 workflows running and processing about 50K-100K messages per workflow, basically using n8n as a heavy API orchestration layer.

Hi @azngeek, welcome to the community!

There are no hard limits currently. Resource consumption depends very much on the data you process. We are planning to provide some benchmarks in the future, but for now the only way to get an idea of how many resources n8n consumes is to test it yourself.

> My use case would be to have about 10-15 workflows running and processing about 50K-100K messages per workflow, basically using n8n as a heavy API orchestration layer.

Would all these messages be processed in a single workflow execution, or would this be multiple executions (as in one execution per API request)? If it’s the former, you’d need a lot of memory during the execution. If it’s the latter, you probably don’t need a lot of memory, but you might still want to check out the respective documentation on how to scale n8n: Scaling n8n | n8n Docs
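For the one-execution-per-request case, the scaling docs describe queue mode, where the main instance pushes executions to a Redis-backed queue and separate worker processes pick them up. A minimal sketch, assuming you have a reachable Redis instance (the hostname here is a placeholder for your own setup):

```bash
# Main instance: enqueue executions instead of running them in-process
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis   # placeholder, point at your Redis host
export QUEUE_BULL_REDIS_PORT=6379
n8n start

# In a separate process/machine (same env vars set): run a worker
# that picks executions off the queue
n8n worker
```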

Either way, you probably don’t want to store your full execution data (to avoid having your database grow very fast), so make sure to check out the docs on execution data as well.
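For example, the execution data settings let you skip persisting successful runs and prune old records automatically. A minimal sketch using the documented environment variables (the values are just examples, tune them to your retention needs):

```bash
# Keep execution data only for failed runs
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
export EXECUTIONS_DATA_SAVE_ON_ERROR=all
# Skip saving intermediate progress to cut database writes mid-execution
export EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
# Prune stored executions automatically after 7 days (value is in hours)
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=168
```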

Thanks for the fast response. I did not expect that 🙂

All messages would have a single entry point and would then be divided via queues etc. to other workflows. Having a best practice guide for this would definitely be helpful.
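For a single-entry-point fan-out like that, queue mode maps naturally onto separate processes: the main instance (or a dedicated webhook processor) accepts the incoming requests, and workers drain the queue in parallel. A rough sketch, assuming the Redis settings from the earlier snippet are exported in every shell:

```bash
# Shell 1 - main instance (UI, scheduling, enqueueing)
EXECUTIONS_MODE=queue n8n start

# Shell 2 - optional dedicated webhook processor for the entry point
EXECUTIONS_MODE=queue n8n webhook

# Shells 3+ - workers running executions from the queue;
# --concurrency caps simultaneous executions per worker
EXECUTIONS_MODE=queue n8n worker --concurrency=10
```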