I’m running a few stress / load tests to check whether we can put n8n into production in one of our projects. While doing this, I’m stumbling on a few details.
I’m creating this post to note and share the things I think are worth mentioning. I will post more details as I go and try to wrap it up once my tests are done, so this is a WIP. I may open some bug reports or feature requests afterwards for some of the issues.
Slow execution list
When you have a lot of executions in history (more than 3 million), the executions list takes ages to load (around 10 min). I’m using Postgres (locally), because it has better latency for executions than SQLite.
UPDATE: with auto-refresh off it’s faster, around 50 secs ~ 1 min 30 secs to load…
Workflow ID as int
Having the id of the workflow as an int causes a lot of trouble. Every time I export and import a workflow that has “Execute Workflow” nodes, I have to find out the new id of each sub-workflow and fix the references by hand. It’s a nightmare. When you export all workflows, each “Execute Workflow” node points to the id of the sub-workflow it needs. When you import them into another server, each workflow seems to take the next available id. So even if the target server is empty, if the original set skipped a number (because a workflow was deleted), the new set will reuse that number and the ids drift apart.
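To avoid fixing the references by hand, the remapping can be scripted. A minimal sketch, assuming the exported workflow JSON stores the sub-workflow reference under `node.parameters.workflowId` for nodes of type `n8n-nodes-base.executeWorkflow` (check this against your n8n version, as the node’s parameter layout may differ):

```javascript
// Rewrite "Execute Workflow" node references using an old-id -> new-id map
// built by hand (or by matching workflow names between the two servers).
function remapWorkflowIds(workflow, idMap) {
  for (const node of workflow.nodes || []) {
    // Node type name is an assumption; inspect your exported JSON to confirm.
    if (node.type === 'n8n-nodes-base.executeWorkflow') {
      const oldId = String(node.parameters.workflowId);
      if (idMap[oldId] !== undefined) {
        node.parameters.workflowId = idMap[oldId];
      }
    }
  }
  return workflow;
}

// Example: sub-workflow 7 on the old server became 3 on the new one.
const fixed = remapWorkflowIds(
  { nodes: [{ type: 'n8n-nodes-base.executeWorkflow', parameters: { workflowId: 7 } }] },
  { '7': 3 }
);
console.log(fixed.nodes[0].parameters.workflowId); // 3
```

You would run this over each exported JSON file before importing it into the new server.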
Possible memory leak
There may be a memory leak in n8n. Using workflows based on real-life solutions, with more than 60 nodes and a few sub-workflows, memory keeps increasing until Node crashes with ‘out of memory’.
Does anyone have tips on how to spot the memory leak? Like tools that may help? My first bet would be on some node like Redis or MongoDB leaving connections open.
UPDATE: when using queue mode, that is, one main process (with
N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true), one process for webhooks, and a few worker processes, only the workers have this problem. It’s not much, but it narrows the search surface a bit. By the way, all the workers crashed within a short time during the test (6 crashes in a 30-minute run).
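For reference, the queue-mode layout described above looks roughly like this. The env var names are the ones documented for n8n queue mode at the time of writing; verify them against your version before copying:

```shell
# Queue mode: executions go through a Redis-backed queue (Bull).
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=localhost

# Main process: UI/API only, no production executions.
N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true n8n start &

# Dedicated webhook process.
n8n webhook &

# A few workers, each pulling executions from the Redis queue.
for i in 1 2 3; do n8n worker & done
```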