I’m running a self‑hosted n8n Community Edition and running into problems whenever a workflow processes multi‑megabyte payloads (e.g., large JSON or CSV files). Executions often time out, fail outright, or leave the instance unresponsive.
Setup:
n8n CE (self‑hosted)
Looking for advice on:
Best practices for handling larger payloads in n8n CE
Whether streaming/chunking is recommended over passing full data through nodes
Any configuration tweaks (memory, timeout, etc.) that can help
Add more resources (RAM, disk), or optimize your workflow:
Avoid loop nodes, reduce the number of Code nodes, and split the workflow into smaller ones. Use pagination whenever possible.
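The chunking/pagination idea above can be sketched as plain JavaScript. This is only an illustration of the pattern, not n8n API code: inside a real Code node you would receive the items via `$input.all()`, and the built‑in Split In Batches ("Loop Over Items") node already does this natively; the `items` array below is faked for demonstration.

```javascript
// Process a large item array in fixed-size batches instead of all at once,
// so each downstream step only ever holds a small slice in memory.

function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Fake 10 items in n8n's { json: ... } item shape; in a Code node these
// would come from $input.all() instead.
const items = Array.from({ length: 10 }, (_, i) => ({ json: { id: i } }));

const batches = chunk(items, 3); // -> 4 batches: 3 + 3 + 3 + 1
```

Each batch can then be handed to a sub‑workflow or paginated API call, so the parent execution never accumulates the full dataset.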
Keep in mind that n8n holds all execution data in memory while a workflow is running and only releases it when the execution finishes. The same applies to sub‑workflows, but each sub‑workflow runs as a separate execution, so its memory is freed as soon as it completes.
It’s better to handle all the “heavy” processing in sub‑workflows so that the execution data of the master workflow stays small.
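On the configuration side, these are the environment variables most commonly tuned for large payloads. The values below are illustrative starting points, not recommendations; verify names and defaults against the docs for your n8n version:

```shell
# Raise the maximum accepted request payload (value in MiB)
export N8N_PAYLOAD_SIZE_MAX=64

# Keep binary data on disk instead of in the execution's memory
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem

# Execution timeouts in seconds (-1 disables the soft default)
export EXECUTIONS_TIMEOUT=3600
export EXECUTIONS_TIMEOUT_MAX=7200

# Give the Node.js process a larger heap (MiB)
export NODE_OPTIONS="--max-old-space-size=4096"

# Prune stored execution data so the database doesn't balloon (age in hours)
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=168
```

Raising limits only buys headroom; the chunking and sub‑workflow advice above is what actually keeps peak memory bounded.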