Hello! Please help me set up my workflow so that it doesn’t crash due to memory issues. I’ve already read quite a bit on the subject and have tried splitting the workflow so that memory can be freed again. Unfortunately, this doesn’t seem to be working, and I don’t understand why.
My workflow looks like this:
The main work is done in the “InnerLoop” subworkflow. It returns only a string (a file path) or a near-empty object (`{empty: true}`). It seems unlikely to me that these return values could fill up the memory.
My expectation was that after each batch of “InnerLoop” executions completed, the memory could be freed again. However, of the 17,000 items that need to be processed, only about 7,500 complete. In Docker you can watch memory consumption climb steadily until it ends in a crash with the error `PayloadTooLargeError`. Why is that?
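For reference, the final node of “InnerLoop” returns items shaped roughly like this (a minimal sketch of the return values described above; the function name and `path` field are illustrative, not my actual node code):

```javascript
// Sketch of what "InnerLoop" hands back to the parent workflow:
// either a short file-path string or a near-empty marker object,
// wrapped in n8n's item shape ([{ json: ... }]).
function innerLoopResult(filePath) {
  if (filePath) {
    // Success case: only the path string is passed back.
    return [{ json: { path: filePath } }];
  }
  // Nothing to report: an almost empty object.
  return [{ json: { empty: true } }];
}
```

So each subworkflow execution should contribute at most a few dozen bytes to the parent workflow's data.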
Information on n8n setup:
**n8n version:** 1.123.5
**Running n8n via:** Docker