Out of Memory

Anyone know of a solution to this issue?

I have an HTTP API call to an API that pulls back around 1000 results, with the JSON size being about 8MB. This then runs in a loop 10-15 times to get all the results I need to query.

However, the browser crashes with this error while I am creating the flow. Any ideas on how to stop that happening?

Hey @RedPacketSec,

The browser is fairly limited in how much memory it can consume for one tab. Is this something you are seeing on the latest release, or are you on an older version?

The best way to get around this would be to use multiple workflows to handle the data, which can be tricky while you are building it. When creating the flow I would just use a smaller dataset and limit it to a few runs; that should get you through it, unless the browser can be configured to use more memory.

Using the latest version, and it's in one flow. I can't restrict the data, I need all the results to search, so I'm wondering if, instead of holding it all in memory in the flow, there is a way to dump the results into a file and then pull them back in again?

It will still be in n8n's memory, but the problem is that your browser tab is not able to handle it, so it is the browser crashing, not n8n. Even saving the results to a file won't help, as this looks to be client side rather than server side.

Checking the n8n logs would help confirm this theory as well: if you don't see the out-of-memory issue there, we know where the problem is. The question then is how the browser can be configured to allow more memory to be used.

So if it's the browser, I only need it to cope while I'm building the flow; the rest of the time n8n can handle it.

So I potentially need to work out how to let Chrome use as much memory per tab as it needs for a bit.

You got it. I think Chrome might have a limit of around 2GB per tab, but it has been a few years since I last looked into that.

You could put the content of the loop in a sub-workflow which does not return any data. That should greatly reduce the required memory for both the browser and the n8n backend.

Ooh, interesting, I've not tried doing that before. Will have a play.

That is normally the easiest way to reduce memory consumption in n8n (obviously just a workaround; we plan to handle this better generally in the future). As things are running in a totally separate workflow, it will:

  1. Never send the data to the browser
  2. Garbage-collect the data after the run of the sub-workflow

You really just have to make sure that the workflow returns no data, or only very little. If the last node returns a lot of data, it will end up in the main workflow again and eat up the memory there all the same.
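For example, the last node of the sub-workflow could be a Code node that throws the heavy results away and only hands back a tiny summary. A minimal sketch (the field names are made up, adjust them to whatever the parent actually needs):

```typescript
// Last node of the sub-workflow: drop the heavy payload and return only a tiny item,
// so the parent workflow (and therefore the browser) never receives the multi-MB data.
const incoming = $input.all();     // everything the previous node produced

return [
  {
    json: {
      processed: incoming.length,  // e.g. how many results this run handled
      done: true,                  // or whatever small flag the parent needs
    },
  },
];
```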

Apart from that, if you work with a lot of binary data, it is also worth activating filesystem mode (N8N_DEFAULT_BINARY_DATA_MODE=filesystem). That saves binary data to disk instead of keeping it in the execution data. Instead of constantly holding all the binary files in memory, each one only takes a few bytes (for the reference to the file) while it is not needed, and the file is loaded only when it is actually required.

I'm pulling in massive API data with loads of results (like I said, 1000 results at a time in one array, about 8MB) and I need to call it over and over to get to the end of the results. The API uses a 206 reply to indicate that there are more results.

The problem, though, is that I need to build the flow so that I can reference it from another flow, and it dies during the loop.
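Roughly, the loop I'm trying to build looks like this (just a sketch of the logic, the endpoint and field names are made up):

```typescript
// Sketch of the paging loop (placeholder endpoint/field names).
// The real API signals "more results" with a 206 reply.
const allResults: unknown[] = [];
let offset = 0;
let hasMore = true;

while (hasMore) {
  const response = await fetch(`https://api.example.com/items?offset=${offset}&limit=1000`);
  const page: { results: unknown[] } = await response.json(); // ~1000 results, ~8MB of JSON per call

  allResults.push(...page.results);   // everything accumulates in memory here
  offset += page.results.length;
  hasMore = response.status === 206;  // 206 = partial content, more pages to come
}

// After 10-15 iterations that is ~80-120MB of JSON sitting in one execution,
// which is what the browser chokes on while I'm building the flow.
```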

I do, however, see this in my log console.

If you have a sub-workflow that expects some kind of cursor or start index as input, does the request, and whose last node returns only that cursor/counter, then the outer workflow can use an IF node to decide, based on that data, whether to loop back into the sub-workflow again or simply do nothing.

If you then also configure that sub-workflow to request just 100 items at a time (or whatever) instead of 1000, you should have very low memory usage.
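As a rough sketch of that pattern (the endpoint and field names are made up; in n8n the request itself would normally be an HTTP Request node, this just shows the shape of the data flow):

```typescript
// One sub-workflow run: take a cursor in, fetch ONE page, return only the cursor
// and a "more?" flag. The page data itself never leaves this execution.
async function runOnePage(cursor: number): Promise<{ cursor: number; hasMore: boolean }> {
  const response = await fetch(`https://api.example.com/items?offset=${cursor}&limit=100`);
  const page: { results: unknown[] } = await response.json();

  // ... process/store page.results here (write to a DB, call another API, etc.) ...

  return {
    cursor: cursor + page.results.length,  // where the next run should start
    hasMore: response.status === 206,      // 206 reply = more results available
  };
}

// The parent workflow is effectively just this loop; the IF node plays the role of
// the `while` condition and every runOnePage call is a separate execution.
async function main() {
  let state = { cursor: 0, hasMore: true };
  while (state.hasMore) {
    state = await runOnePage(state.cursor);
  }
}

main();
```

That way all the main workflow ever holds is the little cursor object, no matter how many pages you pull.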

It's one item with 1000 results in the one array each time.

It does reboot n8n after this.

I guess 1000 results are also fine; you said it crashes after 10-15 iterations. With the setup described above you could probably have a million loops of 1000 results and it would still not crash n8n or the browser (as the only data that gets sent to the browser is that cursor data).

I've got the same problem with my HTTP Request nodes. It seems like the garbage collector is not working correctly in n8n when you have many HTTP Request executions in your workflow at the same time. I have also tried several sub-workflow executions, as far as possible, but without any luck.

The issue is not that the garbage collector is not working correctly; the issue is that there is no garbage to collect. That is a simple design decision: any node can, at any point in the execution, access all the data of all previous nodes, and so everything currently stays in memory.
The only time there is anything to clean up is after the execution has finished. Hence the above advice to run parts in a sub-workflow: after every sub-workflow execution (which is a totally separate execution), there is something to clean up.

We have plans to potentially change how that is handled, but it would be a much larger rewrite and would also come with some disadvantages, for example much more load on the database and/or filesystem and slower execution times.

Yes, I understand; keeping all data in memory so that all future nodes can access it is of course the way to go. But what I have noticed is that even if the instance's resources are sufficient on disk and memory, the containers crash at about 2GB usage, even when they have 10 times that memory available to use.

I am not really sure it is the garbage collector; that is what one of the n8n team members I spoke with mentioned, and we ran several tests on a memory-debug image. As mentioned on GitHub, I got rid of 99% of the response data from my HTTP Request nodes but still get the same errors. Hope it gets resolved soon.
