Code node operates as expected on 1.6.8 - breaks on 1.7.03 / 1.7.1 (memory issue?)

Describe the problem/error/question

When executing a sub workflow (subflow), I seem to be hitting a memory limit using the Code node on N8N version 1.7.03 or 1.7.1. This N8N instance is on a Starter plan trial (it’s a client’s account).

However, when I test the workflow on my own N8N Starter Plan (version 1.6.8) everything runs smoothly.

So this leaves me with two thoughts:

  1. Does the free trial have less memory than the 320MB listed?
  2. Could an update somewhere between 1.6.8 and the latest versions cause this issue?

I’m hesitant to upgrade my N8N instance, as this could break the workflows that are currently working as expected.

What is the error message (if any)?

  1. Either no error message (and the code node simply never ends)
  2. N8N’s warning on memory usage.

Please share your workflow

I have shared the sub workflow below.

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.7.03
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): NA
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: macOS Sequoia

@Jon any idea on this? Sorry for the tag - I’d like to get an answer to the client as soon as I can.

hello @Alex_Stewart

What are your memory limits for n8n? It barely works if the memory limit is less than 512 MB.

Hey! I’m using N8N’s Starter Cloud plan, so I think they allocate 324MB RAM? I could be wrong there, but the fact that it’s working on my Starter plan account but not theirs, and that my workspace is on version 1.6.8 while theirs is on 1.7.03, has me thinking it’s a bug in the Code node.

Can’t say about cloud instances, but your workflow is very memory-greedy. Consider switching to sub workflows and processing each batch in a sub workflow. This releases the memory once the batch is processed.
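To illustrate the per-batch idea, here is a minimal, n8n-agnostic JavaScript sketch (the helper name is made up, not an n8n API): split the item list into fixed-size batches so each batch can be handed to a separate sub workflow execution, rather than pushing the whole result set through at once.

```javascript
// Hypothetical helper: split a large array of items into fixed-size
// batches. Each batch can then be sent to its own sub workflow
// execution, so only one batch's worth of data is processed at a time.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// e.g. toBatches(allItems, 100) → an array of arrays of up to 100 items
```

In n8n itself the built-in Loop Over Items (Split in Batches) node plays a similar role without custom code.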

Plus, the HTTP Request node is able to handle pagination itself.

Thanks for the tip on the HTTP Request node - I had no idea. I’ll test with that. As for the workflow, this is already a sub workflow; would you recommend splitting it even further? That would make management/debugging rather cumbersome.

@Alex_Stewart could you share me the account name on n8n cloud so we can investigate? You can DM me if you don’t want to share it publicly.

Sent you a DM with the info!

@Alex_Stewart I had a look at your instance, and some of the crashes could be related to the new task runner feature we started introducing in 1.71.0. While it does improve overall performance, it can increase the maximum memory a workflow needs during execution in certain cases. While it’s unfortunate that it can cause certain workflows to fail, it’s a deliberate tradeoff we had to make. We are still actively looking for ways to optimize memory consumption so that this type of issue can be avoided.

To solve the issue, I would recommend trying out the suggestions @barn4k mentioned. You can also find more details about how to avoid memory-related issues in our docs.

Thanks for the update, so would you recommend:

  • using the HTTP Request node’s pagination (surely this would have the same problem, or would n8n handle the memory better?)
  • adding another sub workflow (this would be 3 layers deep, main → sub → sub)
  • upgrading to a higher plan if none of the above works?

So using the HTTP Request node’s pagination feature still causes the workflow to fail - in fact, it crashed my n8n workspace (this was on 1.6.8 and 1.7.1). I feel that unless we upgrade the plan, the issue will persist: even if we return 100 pages at a time, all of that data is still passed up to the main workflow, which then needs to push it through the rest of the nodes. Does that make sense?

The HTTP Request node will consume less memory than the Code node, but if you have many pages, the core issue remains the same - the workflow/workspace will eventually fail with an OOM error.

Sub workflow is better, even if you have a long chain of subs, because they are being executed as a separate execution, so you can manage memory better. You may still experience a OOM issue when the subs finish their work, because the parent WF still waiting for the results from all subs. That may be eliminated if you will save each batch in a persistent storage, like S3 bucket and then works with S3 further. It depends on your use case.