Over the last few weeks we’ve noticed a worrying pattern with our self-hosted n8n instance (running in Docker). When the container starts it’s snappy, but after a day or two every workflow takes longer to execute and the Editor UI becomes sluggish. Eventually the whole instance is so slow that we’re forced to restart the container. A restart fixes things for a short while, then the slowdown creeps back.
What we’ve observed:
- **Gradual performance drop** – executions that normally finish in seconds start taking minutes.
- **All flows affected** – even very small sample workflows slow down.
- **n8n version** – currently on 1.91.2.
Our suspicion is a memory leak somewhere, but before digging deeper I wanted to ask the community:
- Has anyone else experienced a similar “slow death” of their n8n instance?
- If so, did you track it down to a specific node, workflow pattern, or n8n version?
- Are there recommended tools or settings for profiling memory usage in n8n?
Any hints, war stories or ideas on how to isolate the culprit would be hugely appreciated. If log snippets or more environment details would help, just let me know and I’ll share them.
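In case it helps, this is the kind of sampling loop we can run on the host to confirm the growth over time (a minimal sketch; `n8n` as the container name is an assumption, adjust to yours):

```bash
# Sample the n8n container's memory and CPU once a minute, with a timestamp,
# so the growth curve between restarts is visible in a single log file.
while true; do
  echo "$(date -Is) $(docker stats --no-stream \
    --format '{{.Name}} {{.MemUsage}} {{.CPUPerc}}' n8n)" >> n8n-mem.log
  sleep 60
done
```

If the memory column climbs steadily while the workload stays flat, that would point at a leak (or at data piling up somewhere) rather than load.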
I’ve read a similar complaint somewhere. My instance probably isn’t busy enough to notice this yet, but if you have any large-data workflows, all of the results of those executions are saved by default. You might consider changing some of the execution-data settings, per workflow or globally, to reduce how much is saved.
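If it’s the same issue, the knobs are the `EXECUTIONS_DATA_*` environment variables. A sketch of what I’d try first (shown as `docker run` flags; the same keys go under `environment:` in docker-compose, and the exact values are just examples):

```bash
# Stop storing data for successful runs, keep failed runs for debugging,
# and prune old execution data so the database stops growing unbounded.
docker run -d --name n8n -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \
  -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \
  -e EXECUTIONS_DATA_SAVE_ON_PROGRESS=false \
  -e EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -e EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000 \
  docker.n8n.io/n8nio/n8n
```

`EXECUTIONS_DATA_MAX_AGE` is in hours, so 168 keeps roughly a week of history. The same save options also exist per workflow under Workflow Settings, which is handy if you only want to trim the large-data ones.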
As you’re using Docker, you might be hitting resource limits for your containers. Check Settings → Resources in Docker Desktop to make sure there’s enough disk and memory available to your instance. I did have this problem and just increased the disk and memory available to Docker.
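Outside Docker Desktop, the equivalent is to set the limits explicitly on the container, and to raise the Node.js heap ceiling so n8n can actually use the memory you grant it (the values below are assumptions, size them to your host):

```bash
# Explicit per-container limits plus a larger V8 heap for n8n's Node process.
# Keep --max-old-space-size comfortably below the container memory cap.
docker run -d --name n8n -p 5678:5678 \
  --memory=4g --cpus=2 \
  -e NODE_OPTIONS=--max-old-space-size=3072 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Without `NODE_OPTIONS`, Node’s default old-space limit can sit well below what the container allows, which can show up as exactly this kind of creeping GC pressure before the instance crawls.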
Around 4.5k executions per day, and the executions are quite big; we’re using n8n at enterprise scale. For example: scraping an entire CRM and syncing it into another system, which means touching several thousand customer records per run.
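At that volume I suspect stored execution data is what’s ballooning. A quick check we can run (assuming the default SQLite backend and data path; with Postgres you’d look at table sizes instead):

```bash
# How big has the n8n data directory grown inside the container?
docker exec n8n du -sh /home/node/.n8n
docker exec n8n ls -lh /home/node/.n8n/database.sqlite
```

If `database.sqlite` turns out to be multi-gigabyte, the pruning settings suggested above are probably the right fix.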