Hi @ericsonmartin, I am very sorry to hear you’re having trouble here.
I had a quick look at the logs for your n8n cloud instance, and it seems your workflow executions require more memory than the instance has available. This ultimately leads to a `FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory` error and a subsequent instance restart, during which you would see the Connection lost message you shared.
Now, I can’t look at the workflows themselves, but on a very general level, the factors that increase memory usage for a workflow execution include:
- Amount of JSON data
- Size of binary data
- Number of nodes in a workflow
- Type of nodes in a workflow (the Function node specifically drives up memory consumption significantly)
- Whether the workflow is started by a trigger or manually (manual executions increase memory consumption, since an additional copy of the data is kept in memory for display in the UI)
- Number of workflows executed in parallel
There are a number of options to address this problem; which one is most suitable will depend on your exact workflows:
- Increase the amount of RAM available to the n8n instance (this mostly applies to self-hosted n8n, where you can raise the Node.js heap limit by setting e.g. `NODE_OPTIONS="--max-old-space-size=4096"` before starting n8n; on n8n cloud this would require upgrading to a larger plan)
- Split the data processed into smaller chunks (e.g. instead of fetching 10,000 rows in one execution, process only 200 rows per execution; see the first sketch after this list)
- Split the workflow up into sub-workflows and ensure each sub-workflow only returns a limited amount of data to its parent workflow (see the second sketch after this list)
- Avoid using the Function node
- Avoid executing the workflow manually
- Avoid scheduling multiple workflows to execute at the same time
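To make the chunking option a bit more concrete, here is a minimal sketch of paginated fetching in plain Node.js JavaScript (using the global `fetch` available in Node.js 18+). The endpoint, query parameters, and row shape are hypothetical placeholders; the point is that only one small page is held in memory at a time:

```js
// Sketch: fetch and process rows in pages of 200 instead of 10,000 at once.
// The URL, query parameters, and row fields below are hypothetical.
const PAGE_SIZE = 200;

async function processChunk(rows) {
  // Do the real per-row work here; once this returns, the chunk
  // can be garbage-collected before the next page is fetched.
  console.log(`processed ${rows.length} rows`);
}

async function run() {
  let offset = 0;
  let page;
  do {
    const res = await fetch(
      `https://api.example.com/rows?limit=${PAGE_SIZE}&offset=${offset}`
    );
    page = await res.json();
    await processChunk(page); // only one page is in memory at a time
    offset += PAGE_SIZE;
  } while (page.length === PAGE_SIZE);
}

run().catch(console.error);
```

Inside n8n itself, the Split In Batches node gives you the same pattern without writing any code.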
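For the sub-workflow option: whatever the sub-workflow’s last node outputs is what flows back to the parent. A small final Function node along these lines returns a compact summary instead of the full item set (`items` is provided by n8n inside a Function node, and the `id` field is a made-up example; a Set node that keeps only the fields you need works too and avoids the Function node entirely):

```js
// Last node of the sub-workflow: return a small summary to the parent
// instead of all processed items. `items` is supplied by n8n; the
// `id` field is a hypothetical example of "only what the parent needs".
return [
  {
    json: {
      processedCount: items.length,
      lastId: items.length > 0 ? items[items.length - 1].json.id : null,
    },
  },
];
```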
This probably wasn’t the answer you were hoping for, but I hope it provides some guidance for addressing the problem.