Seems you have a large amount of data there. You will need to offload the memory-intensive part of the workflow into a sub-workflow. I suppose it is this one:
In order to avoid memory overuse, you'll need to do the looping in the main workflow (the one you are currently using), so that the sub-workflow only receives parameters from the main workflow telling it which "next" results to fetch. That way the memory can be released once each "page" is finished, as in the sketch below.
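In plain code, the pattern looks roughly like this. This is a minimal TypeScript sketch, not n8n's actual API: `fetchPage`, `processPage`, and `PAGE_SIZE` are hypothetical stand-ins for the sub-workflow call and its data source.

```typescript
type Item = { json: Record<string, unknown> };

const PAGE_SIZE = 100; // hypothetical page size

// Stand-in for the data source queried by the sub-workflow.
// In n8n this would live inside the sub-workflow, which receives
// `page` as an input parameter from the main workflow.
async function fetchPage(page: number): Promise<Item[]> {
  const total = 1000; // pretend the source holds 1000 items
  const start = page * PAGE_SIZE;
  const count = Math.max(0, Math.min(PAGE_SIZE, total - start));
  return Array.from({ length: count }, (_, i) => ({
    json: { id: start + i },
  }));
}

// Stand-in for the sub-workflow body: process one page and return
// only a small summary, so the full page can be garbage-collected.
async function processPage(items: Item[]): Promise<number> {
  // ...transform items, write them to a database, etc.
  return items.length;
}

// Main workflow: loop page by page. Each page goes out of scope
// after processPage() returns, so memory is released per "page"
// instead of accumulating across the whole run.
async function main(): Promise<void> {
  let page = 0;
  let processedTotal = 0;
  while (true) {
    const items = await fetchPage(page);
    if (items.length === 0) break; // no "next" results left
    processedTotal += await processPage(items);
    page += 1;
  }
  console.log(`Processed ${processedTotal} items in ${page} pages`);
}

main().catch(console.error);
```

The key point is that the main workflow only ever holds the loop counter and a small summary, never the full result set at once.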
In the screenshot above (the console `top` output) it did not look like RAM was the problem. However, I have now split the work into 3 sub-workflows, so each of them handles only one item. This works, but I wonder why, and how this can happen. I think something must not be working properly within n8n. I don't have heavy data, at least not from my perspective, though maybe it depends on what counts as heavy data. Those nodes just output the JSON from a single item.
I can't believe I'm the only one seeing this behavior. There must be more people looping over thousands of items, which is sometimes necessary…
What was also interesting to me was that on the first loop the CPU got to less than 10%, and with every loop it climbed higher and higher until it reached 100%; around loop 20 it stayed at 100% and the whole scenario hung.
To me it feels like there is room for improvement in how n8n handles this.