Strange, excessive memory usage

Describe the problem/error/question

I have a setup where the task has been divided into 6 nested levels:

  • One Split in Batches node in each workflow, passing data on to the next nested level via an Execute Workflow node
  • Each nested workflow returns only a JSON object with a single boolean key

The task being done via this chain is:

  • Extract HTML from a website > get a list of webpages
  • From each of those pages, get the download link for the required zip file
  • Download the zip files
  • Decompress them
  • Split each extracted CSV file (size ~40MB) into smaller ones (size ~2MB) using the Code node (roughly as sketched after this list)
  • For each of the ‘small’ CSV files, pass the data on to a local Postgres database using the Postgres node
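
For context, the Step 5 splitting logic is roughly along these lines. This is an illustrative sketch, not the exact workflow code: the paths, chunk naming, and the assumption that the built-in fs/path modules are allowed in the Code node (via NODE_FUNCTION_ALLOW_BUILTIN) are all stand-ins.

```js
// Rough sketch of the Step 5 Code node (illustrative, not the exact code).
// Assumes the built-in fs/path modules are allowed via NODE_FUNCTION_ALLOW_BUILTIN.
const fs = require('fs');
const path = require('path');

const sourceFile = $input.first().json.file;   // path to the ~40MB extracted CSV
const outDir = '/data/chunks';                 // hypothetical output directory
const maxBytes = 2 * 1024 * 1024;              // target ~2MB per chunk

// Note: readFileSync pulls the whole 40MB file into memory to do the split.
const lines = fs.readFileSync(sourceFile, 'utf8').split('\n');
const header = lines.shift();

const files = [];
let chunk = [];
let size = 0;

const flush = () => {
  if (chunk.length === 0) return;
  const file = path.join(outDir, `chunk_${files.length}.csv`);
  fs.writeFileSync(file, [header, ...chunk].join('\n'));
  files.push(file);
  chunk = [];
  size = 0;
};

for (const line of lines) {
  chunk.push(line);
  size += Buffer.byteLength(line, 'utf8') + 1;
  if (size >= maxBytes) flush();
}
flush();

// Return only the chunk file paths, not their contents, to keep item payloads small.
return files.map((f) => ({ json: { file: f } }));
```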

As shown in the image below, the first workflow is triggered manually, which then causes a chain reaction:

  • Most of the work is being done in ‘Step 5’ & ‘Step 6’

The issue is that memory usage keeps accumulating over the execution of this chain of workflows (shown in the docker stats chart below).
From the charts: I expect the CPU usage to alternate between high and low.


But why would the memory usage get compounded?

The whole ‘nested subworkflows’ and ‘filesystem-based’ arrangement was intended to avoid exactly this, right?

How should I go about diagnosing this issue?
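
One idea for narrowing it down: temporarily end each sub-workflow with a throwaway Code node that reports process memory, so the growth can be tied to a specific step. This assumes the Code node sandbox exposes Node’s `process` global; if it doesn’t, sampling docker stats per container (as above) is the fallback.

```js
// Hypothetical diagnostic node - not part of the original workflows.
// Assumes the Code node sandbox exposes Node's process global.
const mem = process.memoryUsage();
return [{
  json: {
    rssMB: Math.round(mem.rss / 1024 / 1024),
    heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
    at: new Date().toISOString(),
  },
}];
```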


Information on your n8n setup

  • n8n version: 1.9.3
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Windows

Hi @shrey-42 :wave: This would depend on what your workflows are returning - n8n will clear the memory after a flow finishes, but this kind of behaviour might happen if you aren’t handling the return :see_no_evil:

Hey @EmeraldHerald,
I’m returning only this at the end of each flow:
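
Roughly, the end node of each workflow is a Code node containing just the following (the actual key name isn’t shown here, so ‘success’ is a stand-in):

```js
// Last node of each nested workflow - returns a JSON object with a single boolean key.
// ('success' is an illustrative name for the actual key.)
return [{ json: { success: true } }];
```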

Hi @shrey-42 - thanks for such a quick response! Can you try changing that to a Set node instead, with the “Execute Once” option enabled, and test whether that makes a difference?

@EmeraldHerald I can try that as well, although from the execution logs I can see that each Execute Workflow returns only the above-mentioned JSON object, so I doubt that would make much difference?

Wild thought: could it be something related to the Postgres node?
Could the Postgres node be accumulating memory across all of the independent executions?

Because: as part of failed-execution retries, only the last 2 workflows are currently being executed, and the memory usage has quickly shot up again. The last workflow is basically the final CSV-extracted data being passed on to the Postgres node.
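
For context, Step 6 is roughly: a Code node reads each chunk file and emits one item per row, which the Postgres node then inserts. Again an illustrative sketch under the same fs-access assumption as above, with naive comma splitting standing in for real CSV parsing:

```js
// Rough sketch of the Step 6 read-and-emit Code node (illustrative).
// Assumes fs access is allowed, as in the Step 5 sketch above.
const fs = require('fs');

const file = $input.first().json.file;  // chunk path from the previous step
const [header, ...rows] = fs.readFileSync(file, 'utf8').trim().split('\n');
const columns = header.split(',');      // naive split; real CSVs may need a parser

// One item per CSV row; the Postgres node maps these fields to table columns.
return rows.map((row) => {
  const values = row.split(',');
  const json = {};
  columns.forEach((col, i) => { json[col] = values[i]; });
  return { json };
});
```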

Hi @shrey-42 - it might make more of a difference than you think, as I believe @MutedJam found an issue once where the Set node needed to be used to properly clear the memory versus a Code node. If that’s still happening, I’d need to let our engineering team know - and if it isn’t happening, it rules out an older issue resurfacing :sweat_smile:

I’m not too sure about your update re: the Postgres node, but it might also be worth digging into, especially if the Set node workaround fails.

Edit: Found the thread!

I would also like to see all of the workflows and some sample data to test with, as I am not able to reproduce this on my local setup. I would expect to see the memory usage grow for a while, though, and then drop back down again when garbage collection kicks in.

I did do some testing on this fairly recently and it seemed to work OK, but I didn’t try multiple nested workflows, so there could be something there.

Hey @EmeraldHerald @Jon ,

I now tried executing the workflow chain with only the Postgres node (in Step 6) disabled.

These are the stats (for the same data):


  1. With the Set node as the end node in each workflow:

For reference: the execution started at 22:22:30


  2. With the Code node as the end node in each workflow:

For reference: the execution started at 22:45:20


Finally, I tried with the Postgres node enabled and a Set node as the end node for each workflow.

This is the result:


I guess that settles it then: the Code node is indeed the source of the leak, and not the other nodes?

@EmeraldHerald @Jon

Btw, should I file a bug report regarding the memory leak from the Code node, or is this already an open ticket?

Hey @shrey-42,

It is something we have been aware of for a while with the Code node but have not fixed yet. It could be down to the sandboxing; at some point we will likely look into it in more detail to see what we can do.

Hi @shrey-42 - thanks for going into such detail with your investigation! Just to add on to @Jon’s point, I’ve also updated our internal ticket to mention this is happening to you on 1.9.3, and so that they can see your testing :bowing_man:
