I have a workflow that should loop about 170 times over the following nodes:
Get HTTP Request Results > Convert Results into desired format > Insert Results into MSSQL Server
It loops 170 times because the API only returns 100 records at a time and the data contains about 17,000 records.
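For context, the loop offset is handled by a small Function node, roughly like this (a simplified sketch; the real field and parameter names in my workflow differ):

```javascript
// Function node: compute the offset for the next HTTP request.
// On the first iteration there is no "skip" yet, so it starts at 0.
// ("skip" and "pageSize" are illustrative names.)
const pageSize = 100;
const previousSkip = items[0].json.skip;
const skip = previousSkip === undefined ? 0 : previousSkip + pageSize;

return [{ json: { skip, pageSize } }];
```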
The script works fine up until repetition number 80-85. Then the workflow keeps loading forever.
When I go into my VM to check the status of my Docker container, I see fatal errors telling me the JavaScript heap is out of memory.
So I ran the workflow again while monitoring the memory usage with `docker stats`.
This is the status at loop number 85, when the workflow stops working: memory usage is very high, 2+ GB for `n8n_n8n_1`, which normally uses around 140 MB in idle state.
The database table takes about 5 MB of space when loaded into SQL Server, so it is not a huge table with lots of values and columns. What could I do to solve this problem?
As n8n converts everything into JSON (and so into key → value pairs), the memory required will be much higher than in SQLite.
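To make that concrete: every record ends up in memory as a full JavaScript object, so the key names and object overhead are repeated per record instead of being stored once per column as in a database table (the field names below are made up):

```javascript
// What the ~17,000 records look like once n8n holds them as items:
// every record repeats all the key names and carries per-object overhead
// (the field names here are made up for illustration).
const itemsInMemory = [
  { json: { customerId: 1, name: "Alice", total: 12.5 } },
  { json: { customerId: 2, name: "Bob", total: 99.0 } },
  // …and so on for every one of the ~17,000 records
];
```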
You can do one of two things:

1. Increase RAM. Because the fatal error is the Node.js heap running out, that can also mean raising the heap limit itself (for example via `NODE_OPTIONS=--max-old-space-size=...` on the container), not just giving the VM more memory.
2. Depending on your workflows and what you are doing, you could split the workflow into two different ones to decrease the memory footprint: one outer workflow that does not hold much data and, for example, only loops and passes a start index to a sub-workflow, which then does the rest of the work (a sketch follows below this list).
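A minimal sketch of what that outer workflow's Function node could emit, assuming the page size of 100 and ~17,000 records mentioned above; each small item would then be handed to the sub-workflow via an Execute Workflow node:

```javascript
// Outer workflow: emit one tiny item per page instead of holding records.
// The sub-workflow receives { startIndex, pageSize }, fetches that page,
// converts it, and inserts it into MSSQL.
// (pageSize/totalRecords are assumptions based on the numbers above.)
const pageSize = 100;
const totalRecords = 17000;

const pages = [];
for (let start = 0; start < totalRecords; start += pageSize) {
  pages.push({ json: { startIndex: start, pageSize } });
}
return pages;
```

The important part is that the sub-workflow returns as little data as possible to the parent; if it returns all the fetched records, they accumulate in the parent's memory again and you gain nothing.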
Btw., if you have a lot of Function nodes, reducing their number by combining them into a single one can also be helpful, as Function nodes always have to copy the whole data and so increase the memory footprint a lot.
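For example, instead of chaining one Function node that filters the records and a second one that reshapes them, both steps can run in a single node so the items are only copied once (the field names below are made up):

```javascript
// One combined Function node: filter and reshape in a single pass
// instead of chaining two Function nodes (illustrative field names).
return items
  .filter(item => item.json.value != null)
  .map(item => ({
    json: {
      recordId: item.json.id,
      amount: item.json.value,
    },
  }));
```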
Also, executing a workflow manually (so with the UI open) will increase the RAM required.