Most of the work is being done in ‘Step 5’ and ‘Step 6’.
The issue is that the memory usage is accumulating over the execution of the chain of these workflows (shown in the docker stats chart below).
From the charts, I expect the CPU usage to alternate between high and low.
Wild thought: could it be something related to the Postgres node?
Could the Postgres node be accumulating the memory usage across all the independent executions?
Because, as part of the failed-execution retry, only the last 2 workflows are currently being executed, and the memory usage has quickly shot up again. The last workflow is basically just the final CSV-extracted data being passed on to the Postgres node.
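One way to check whether the growth really tracks individual executions is to log the Node.js heap size from inside the workflows themselves, so it can be correlated with the docker stats chart. This is a hypothetical diagnostic snippet (the `reportMemory` helper name is mine, not from n8n); `process.memoryUsage()` is a standard Node.js API and can be called from a Code node:

```javascript
// Hypothetical diagnostic helper for an n8n Code node: report current
// heap and resident-set size so memory growth can be correlated with
// each sub-workflow execution. process.memoryUsage() is standard Node.js.
function reportMemory(label) {
  const { heapUsed, rss } = process.memoryUsage();
  console.log(
    `${label}: heapUsed=${(heapUsed / 1048576).toFixed(1)} MB, ` +
    `rss=${(rss / 1048576).toFixed(1)} MB`
  );
  return { heapUsed, rss };
}

// In a Code node you might call this just before returning:
//   reportMemory('after CSV extract');
//   return items;
```

If the logged heap keeps climbing across independent executions instead of dropping back after each one, that would support the theory that something (the Postgres node or otherwise) is retaining references between runs.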
Hi @shrey-42 - it might make more of a difference than you think, as I believe @MutedJam once found an issue where the Set node needed to be used instead of a Code node to properly clear the memory. If that’s still happening, I’d need to let our engineering team know - and if it isn’t happening, it rules out an older issue resurfacing.
I’m not too sure on your update re: the Postgres node, but it might also be worth digging into, especially if the Set node workaround trick fails.
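For reference, the Set node workaround amounts to keeping only the fields the next node actually needs and discarding everything else. A rough Code-node equivalent might look like this (a sketch, not n8n's implementation; the `keepOnlyFields` helper and field names are hypothetical):

```javascript
// Hypothetical sketch of what the Set-node workaround does: build fresh
// items containing only the named fields, so large intermediate payloads
// (including any binary data) hold no lingering references and can be
// garbage collected.
function keepOnlyFields(items, fields) {
  return items.map((item) => {
    const json = {};
    for (const f of fields) {
      if (f in item.json) json[f] = item.json[f];
    }
    return { json }; // note: no `binary` property is carried over
  });
}

// In a Code node, for example:
//   return keepOnlyFields(items, ['id', 'name']);
```

The reported issue is precisely that a Code node doing this did not release memory the way the Set node did, so the snippet is useful mainly for comparing the two behaviours.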
I would also like to see all of the workflows and some sample data to test with, as I am not able to reproduce this on my local setup. I would expect to see the memory usage grow for a while, though; when garbage collection kicks in, it should drop back down again.
I did do some testing on this fairly recently and it seemed to work OK, but I didn’t try multiple nested workflows, so there could be something there.
It is something we have been aware of for a while with the Code node but have not fixed yet. It could be down to the sandboxing; at some point we will likely look into it in more detail to see what we can do.
Hi @shrey-42 - thanks for going into such detail with your investigation! Just to add on to @Jon’s point, I’ve also updated our internal ticket to mention that this is happening to you on 1.9.3, and so that they can see your testing.