Memory Pressure

For my workflow processing 20–50MB JSON payloads, is increasing container memory sufficient, or does Node.js heap fragmentation require architectural adjustments? I'm confused about which way to go.

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Increasing memory helps, but it does not fully solve fragmentation; both the container limit and V8's old-space limit (NODE_OPTIONS=--max-old-space-size) matter.
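If you do raise the limit, it's worth confirming the setting actually reached the Node process. A quick sketch (the file name is mine):

```javascript
// check-heap-limit.js: run inside the n8n container with
// `node check-heap-limit.js` to print V8's configured heap ceiling.
const v8 = require('v8');

const limitMB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`V8 heap size limit: ${limitMB.toFixed(0)} MB`);
```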

Node.js:

  • Stores objects in heap memory
  • Performs garbage collection non-deterministically
  • Struggles with repeated transformations of large objects (illustrated in the sketch below)
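To make that last point concrete, here is a standalone Node.js sketch (the payload shape is invented) that repeatedly deep-clones a large object, the same pattern a transformation-heavy Code node produces, and logs heap usage along the way:

```javascript
// heap-churn.js: each iteration allocates a full multi-MB copy that
// immediately becomes garbage, forcing frequent large GC cycles.
const payload = {
  rows: Array.from({ length: 200_000 }, (_, i) => ({ id: i, data: 'x'.repeat(100) })),
};

const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);

for (let i = 0; i < 10; i++) {
  const clone = JSON.parse(JSON.stringify(payload)); // deep clone of the whole payload
  clone.rows.forEach((row) => { row.touched = true; });

  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  console.log(`iter ${i}: heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB rss=${mb(rss)}MB`);
}
```

Exact numbers vary by machine, but heapTotal and rss typically sit well above heapUsed, which is the over-reservation/fragmentation effect in practice.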

Better approaches (a sketch follows the list):

  • Avoid unnecessary deep cloning in Code nodes
  • Use streaming APIs where possible
  • Break workflows into smaller stages
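For the streaming point, here is a minimal sketch using the stream-json package (my choice of library; any streaming JSON parser works) to walk a large top-level JSON array one element at a time instead of materializing the whole payload. The file name payload.json is hypothetical:

```javascript
// stream-large-json.js, a sketch assuming `npm install stream-json`
// (stream-chain is installed with it) and a file holding one JSON array.
const fs = require('fs');
const { chain } = require('stream-chain');
const { parser } = require('stream-json');
const { streamArray } = require('stream-json/streamers/StreamArray');

const pipeline = chain([
  fs.createReadStream('payload.json'),
  parser(),
  streamArray(), // emits { key, value } for each array element
]);

let count = 0;
pipeline.on('data', ({ value }) => {
  // Handle one element at a time; only this element lives on the heap.
  count += 1;
});
pipeline.on('end', () => console.log(`processed ${count} records`));
```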

Heap crashes are usually driven by transformation-heavy workloads, not just by raw payload size.


If I am understanding you correctly, I should follow some of this.

Good point on fragmentation. For large payloads (20–50MB), streaming helps, but also consider breaking them into smaller chunks (5–10MB) and processing them in parallel batches; Node.js struggles with a single massive object even when given more memory. Also watch your Code nodes: if they're doing multiple transformations, offload that work to database queries or external APIs instead. That keeps garbage collection manageable. A chunking sketch follows.
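For the chunking idea, a hedged sketch of an n8n Code node (mode "Run Once for All Items"; the `records` field name is an assumption) that splits one big array into smaller items, so a downstream Loop Over Items / Split In Batches node can work through them in manageable pieces:

```javascript
// n8n Code node sketch: re-shape one large payload into many small items.
const CHUNK_SIZE = 500; // tune so each chunk lands in the 5–10MB range

// Gather the big array from the incoming items ($input.all() is the
// standard Code node accessor; `records` is an assumed field name).
const records = $input.all().flatMap((item) => item.json.records ?? []);

const chunks = [];
for (let i = 0; i < records.length; i += CHUNK_SIZE) {
  // Each n8n item must be shaped as { json: ... }.
  chunks.push({ json: { records: records.slice(i, i + CHUNK_SIZE) } });
}

return chunks;
```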

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.