There might not be enough memory to finish the execution

Describe the problem/error/question

I’m currently running a workflow that has multiple subworkflows. After a few iterations, the error mentioned above is returned. I’m ensuring that no information is leaked from any subworkflow, so the information should be cleared after each workflow to free up memory. However, after analyzing the memory, I noticed that it keeps growing. My question is, why isn’t n8n clearing all the information from a subworkflow after it finishes (except, of course, the information that is returned to the main workflow, which in my case is nothing)?

Information on your n8n setup

  • **n8n version:** 0.222.2
  • **Database (default: SQLite):**
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** own
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
  • **Operating system:** Linux

Hey @jmta,

I have just done a bit of testing on this one. I set up a parent workflow that generates some items, then for each item executes a sub-workflow that generates 10,000 items and ends with a Set node that runs once to return a boolean.

So for the first test I used 10 items, each generating 10,000 items in a sub-workflow, and my memory usage looked like this…

The thing to note there is that once the memory gets close to 700 MB it drops down a bit, then climbs again, and once the entire workflow ends it drops fully back to the baseline.

Next up I ran 50 items generating 10,000 each, and I ended up with this…

Now we have a better picture of what is happening: the memory grows to a certain point, then clears as it needs to. There is a slow growth in the baseline each time it drops, so to probe that further I ran one more test…

This one is 100 items, generating the 10,000 items in each sub-workflow call…

It is a similar pattern, with memory usage growing and dropping as needed. At the moment, based on these tests, it does look like everything is clearing up as it should.

Are you able to share the number of items your workflows are dealing with, along with the memory usage graphs for your instance?

If it helps, I used the Debug Helper community node for my tests. This first workflow is the one that is called to generate the items…

And this is the workflow that calls it.

Could you maybe try the same tests as above in your environment to see what your results are? One thing I would do, looking at the information on your setup, is move away from SQLite and onto Postgres, which will likely help with part of the performance.
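For reference, switching n8n's database is done through environment variables. A minimal sketch for Docker, assuming n8n's standard Postgres settings and a reachable Postgres instance (the host name and credentials below are placeholders, not values from this thread):

```shell
# Hedged sketch: run n8n against Postgres instead of the default SQLite.
# Replace host, database, user, and password with your own values.
docker run -d --name n8n \
  -p 5678:5678 \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres-host \
  -e DB_POSTGRESDB_PORT=5432 \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=n8n \
  -e DB_POSTGRESDB_PASSWORD=change-me \
  n8nio/n8n
```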


@Jon I was using a Code node (as the last element) with the following output:

    {
      "json": {
        "Done": true
      }
    }

This was not properly clearing the memory, but when I used the Set node with the settings you provided earlier, the memory was freed correctly.

I thought it would behave the same way because the output is identical, but the two nodes do not work the same way in terms of memory optimization.
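One plausible way to picture the difference (a sketch, not confirmed n8n internals): a Code node has the whole input array in scope, so what you build as its return value determines how much data travels back to the parent workflow. A minimal Node.js illustration of the two return shapes, with the `items` array that n8n would normally provide stubbed out:

```javascript
// Stub of the `items` array n8n hands to a Code node; here the
// sub-workflow produced 10,000 items before reaching the final node.
const items = Array.from({ length: 10000 }, (_, i) => ({ json: { id: i } }));

// Pass-through return: the full 10,000-item array would be handed
// back to the parent workflow.
const passThrough = items;

// Minimal return: a single small summary item, equivalent to a Set
// node configured to execute once.
const minimal = [{ json: { Done: true } }];

console.log(passThrough.length, minimal.length); // prints: 10000 1
```

Either way, keeping the final node's output down to one tiny item is what matters for the payload returned to the parent.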

Solution: use the Set node with the configuration above.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.