Issue:
My n8n Cloud instance was working perfectly yesterday. Today, the same workflows that ran fine are now failing with “out of memory” errors.
What changed: Nothing on my end. Same workflows + data.
What I’ve tried:
- Upgraded from 640MB to 1280MB RAM - didn’t help
- Disabled all workflows and restarted
- Enabled the same workflows that worked yesterday - immediate OOM errors
Questions:
- Was there an n8n Cloud update that increased memory usage?
- Is there a known issue with Code nodes and memory?
- How can I get my instance working again with the same workflows that ran yesterday?
Any insight would be greatly appreciated! 
Hi @flow-rida
1- Nothing in the provided sources confirms a specific recent n8n Cloud update that would suddenly increase memory usage for unchanged workflows, so I can’t say for sure that a platform change is the cause. What we do know from the docs and forum posts:
- Cloud plans have fixed per‑instance RAM limits [Cloud plans]
- The error “Execution stopped at this node. n8n may have run out of memory” on Cloud almost always means your execution hit that plan limit, especially with heavy nodes or multiple workflows running in parallel. [Cloud plans; Plan limits]
2- Yes, Code/Function nodes are explicitly called out as memory‑heavy and a common contributor to OOM problems.[Memory errors; Avoid OOM]
Recommended mitigations:
- Replace Code nodes where possible with built-in nodes (Filter, Aggregate, HTTP Request, etc.). [Avoid OOM; Heap error tips]
- Split large datasets using Split In Batches and/or Execute Workflow so each execution only holds a small batch in memory. [Cloud data; Avoid OOM]
3- Starter/Pro plans have relatively small per‑instance RAM; if your workflows are inherently heavy (large datasets, many nodes, Code nodes), upgrading to a higher‑memory plan is one of the recommended options.[Cloud data; Plan limits]