The browser is fairly limited in how much memory it can consume for one tab. Is this something you are seeing on the latest release, or are you on an older version?
The best way to work around this is to split the data handling across multiple workflows. That can be tricky while you are building the workflow, so during development I would use a smaller dataset and limit it to a few runs. That should get you through it, unless the browser can be configured to use more memory.
I'm using the latest version and it's all in one flow. I can't restrict the data; I need all the results to search. So I'm wondering: instead of holding it all in memory in the flow, is there maybe a way to dump the results into a file and then pull them back in again?
It will be in n8n's memory, but the problem is that your browser tab cannot handle it for some reason. It is the browser crashing, not n8n, so even saving the data to a file won't help, as the issue looks to be client side, not server side.
Checking the n8n logs would help confirm this theory: if you don't see an out-of-memory error there, we know where the issue is. The question then is how to configure the browser to allow more memory to be used.
That is normally the easiest way to reduce memory consumption in n8n (obviously just a workaround; we have plans to handle this better in general in the future). Because everything runs in a totally separate workflow, n8n will:
- Never send the data to the browser
- Garbage-collect the data after the sub-workflow run finishes
You really just have to make sure that the sub-workflow returns no data, or only very little. If the last node returns a lot of data, it ends up in the main workflow again and so eats up the memory again.
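A minimal sketch of what the last Code node of such a sub-workflow could return (the `summarize` name and the item shape are just illustrative, not anything n8n prescribes):

```javascript
// Last-node sketch: discard the bulky fetched items and hand back only
// a tiny summary item, so the parent workflow never holds the large data.
function summarize(items) {
  return [{ json: { processed: items.length } }];
}

// In an n8n Code node, the node body would end with something like:
//   return summarize(items);
```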
Apart from that, if you work with a lot of binary data, it is also worth activating filesystem mode (N8N_DEFAULT_BINARY_DATA_MODE=filesystem). That saves binary data to disk instead of keeping it in the execution data. So instead of constantly holding all the binary files in memory, n8n keeps only a few bytes per file (a reference to it on disk) while the file is not needed, and loads the file only when it is required.
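For reference, filesystem mode is just an environment variable on the n8n process; shown here for a plain install (with Docker you would pass it via `-e` or your compose file instead):

```shell
# Store binary data on disk instead of inside the execution data
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem
n8n start
```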
I’m pulling in massive API data with loads of results: like I said, 1000 results at a time in one array, about 8 MB, and I need to call the API over and over to get to the end of the results. The API replies with HTTP status 206 to indicate there are more results.
If you have a sub-workflow that expects as input some kind of cursor or start index, performs the request, and whose last node returns only that cursor/counter, then the outer workflow can use an IF node to either feed that cursor back into the sub-workflow again or simply stop.
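The loop described above can be sketched in plain Node.js (not n8n's actual API; `fetchPage` is a hypothetical function standing in for the HTTP Request node, and in the real API a 206 status would be what signals "more results"):

```javascript
// One "sub-workflow run": fetch a page, process it, and hand back
// ONLY the cursor -- never the ~8 MB of results.
async function runSubWorkflow(fetchPage, cursor) {
  const { results, nextCursor } = await fetchPage(cursor);
  // ...process/store the ~1000 `results` here (e.g. write them to a DB)...
  return { nextCursor }; // tiny payload back to the parent workflow
}

// The outer workflow: the IF-node equivalent is the loop condition,
// which keeps looping only while there is a next page.
async function outerWorkflow(fetchPage) {
  let cursor = null;
  do {
    ({ nextCursor: cursor } = await runSubWorkflow(fetchPage, cursor));
  } while (cursor !== null);
}
```

Because each iteration is a separate execution, the large result pages become collectable garbage as soon as that sub-workflow run finishes.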
I guess 1000 results at a time is also fine; you said it crashes after 10-15 runs. With the setup described above you could probably have a million loops of 1000 results and it would still not crash n8n or the browser (as the only data that gets sent to the browser is that cursor data).
I got the same problem on my HTTP Request nodes. It seems like the garbage collector is currently not working correctly in n8n when you have many HTTP Request executions in your workflow at the same time. I have also tried several sub-workflow executions, as far as possible, but without any luck.
The issue is not that the garbage collector is not working correctly; the issue is that there is no garbage to collect. That is a simple design decision: any node can, at any point in the execution, access all the data of all previous nodes, and so everything currently stays in memory.
The only time there is anything to clean up is after an execution has finished. That is the reason for the advice above to run parts in a sub-workflow: after every sub-workflow execution (which is a totally separate execution), there is something to clean up.
We have plans to potentially change how this is handled, but it would for sure be a much larger rewrite and would also come with some disadvantages, for example much more load on the database and/or filesystem, and slower execution times.
Yes, I understand; keeping all data in memory so all future nodes can access it is of course the way to go. But what I noticed is that even when the instance's resources are sufficient on disk and memory, the containers crash at about 2 GB of usage, even if they have ten times that memory available.
I am not really sure it is the garbage collector; that is what one of the n8n team members mentioned, with whom I spoke and ran several tests on a memory-debug image. As mentioned on GitHub, I got rid of 99% of the response data of my HTTP Request nodes but still get the same errors. I hope it gets resolved soon.