Most of the work is being done in "Step 5" & "Step 6".
The issue is that memory usage accumulates over the execution of this chain of workflows (shown in the docker stats chart below).
From the charts: I would expect the CPU usage to alternate between high and low.
Hi @shrey-42 - this would depend on what your workflows are returning. n8n will clear the memory after a flow finishes, but this kind of behaviour might happen if you aren't handling the return.
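To illustrate the point about handling the return, here is a minimal sketch (not n8n internals - `$input` is stubbed below, inside n8n it is provided by the runtime): a sub-workflow whose final Code node returns a small summary object hands far less data back to the parent than one that forwards every item.

```javascript
// Sketch of the idea, not n8n's internal code: the sub-workflow's last
// Code node returns a tiny summary object instead of forwarding every
// item, so the parent workflow does not retain the full payload.
// `$input` is stubbed here; inside n8n the runtime provides it.
const $input = {
  all: () =>
    Array.from({ length: 10000 }, (_, i) => ({
      json: { row: i, payload: "x".repeat(100) },
    })),
};

const items = $input.all();

// Return only a small summary instead of the 10k items themselves.
const result = [{ json: { processed: items.length, status: "ok" } }];

console.log(JSON.stringify(result[0].json)); // {"processed":10000,"status":"ok"}
```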
Hi @shrey-42 - thanks for such a quick response! Can you try changing that to a Set node instead, with the "Execute Once" option enabled, and test if that makes a difference?
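For context, a rough sketch of what "Execute Once" means here (a simplified assumption, not the actual node implementation): the node runs only on the first incoming item, so downstream nodes see a single small item instead of the whole result set.

```javascript
// Simplified illustration of "Execute Once" semantics (assumption, not
// n8n source): run the node's logic once, on the first input item only.
const incoming = [
  { json: { id: 1, blob: "large payload A" } },
  { json: { id: 2, blob: "large payload B" } },
  { json: { id: 3, blob: "large payload C" } },
];

// With "Execute Once" enabled, only the first item is processed and passed on.
const executeOnce = (items) => [{ json: { id: items[0].json.id, done: true } }];

const out = executeOnce(incoming);
console.log(out.length, out[0].json.id); // 1 1
```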
@EmeraldHerald I can try that as well, although from the execution logs I can see that each Execute Workflow returns only the above-mentioned JSON object, so I doubt that would make much difference?
Wild thought: could it be something related to the Postgres node?
Could the Postgres node be accumulating the memory usage across all the independent executions?
Because currently, as part of the failed-execution retry, only the last 2 workflows are being executed, and the memory usage has quickly shot up again. The last workflow is basically the final CSV-extracted data being passed on to the Postgres node.
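If the Postgres step does turn out to be the hot spot, one generic mitigation (a hypothetical sketch, unrelated to the Postgres node's actual internals) is to hand the CSV rows over in fixed-size batches, so only one slice is held at a time rather than one giant parameter list:

```javascript
// Hypothetical illustration, not n8n's Postgres node internals: yield the
// CSV rows in fixed-size batches so the working set stays bounded.
function* batches(rows, size) {
  for (let i = 0; i < rows.length; i += size) {
    yield rows.slice(i, i + size);
  }
}

// Stand-in for parsed CSV rows.
const rows = Array.from({ length: 2500 }, (_, i) => ({ id: i }));

// Each batch would be inserted and then released before the next one.
const batchSizes = [...batches(rows, 1000)].map((b) => b.length);
console.log(batchSizes); // [ 1000, 1000, 500 ]
```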
Hi @shrey-42 - it might make more of a difference than you think, as I believe @MutedJam found an issue once where the Set node needed to be used to properly clear the memory versus a Code node. If that's still happening, I'd need to let our engineering team know - and if it isn't happening, it rules out an older issue resurfacing.
I'm not too sure on your update re: the Postgres node, but it might also be worth digging into, especially if the Set node workaround fails.
I would also like to see all of the workflows, and some sample data to test with, as I am not able to reproduce this on my local setup. I would expect the memory usage to grow for a while, though, and drop back down again once garbage collection kicks in.
I did do some testing on this fairly recently and it seemed to work OK, but I didn't try multiple nested workflows, so there could be something there.
It is something we have been aware of for a while with the Code node, but we have not fixed it yet. It could be down to the sandboxing; at some point we will likely look into it in more detail to see what we can do.
Hi @shrey-42 - thanks for going into such detail with your investigation! Just to add on to @Jon's point, I've also updated our internal ticket to mention this is happening to you on 1.9.3, and so that they can see your testing.