Memory Issues

Describe the problem/error/question.

I am using the “Split in Batches” node to break up the number of items going through the workflow. The workflow performs fine whether I put 50 items or 100 items into Split in Batches, and with any batch size.
The issue is that when the workflow reaches the last iteration and returns to Split in Batches, instead of pushing to the “done” output, it gives me an error saying it has run out of memory. This only happens once Split in Batches detects it has no more batches to run.

I have tried everything from returning 100 records from Airtable with a batch size of 50 down to currently returning only 12 records with a batch size of 2.

Question 1. Is this happening because I am in the testing phase?
Question 2. How do I stop this from happening?

What is the error message (if any)?

(screenshot of the out-of-memory error message)

  • Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
  • Operating system: Windows

Generally, when I have to use Split in Batches, I’ve found it’s better to run everything in the loop in a sub-flow to avoid memory issues.

I recommend doing this via a webhook, and passing any variables/data needed in the body of the webhook call. This keeps everything completely isolated.
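As a minimal sketch (accountUrl is just a placeholder here for whatever data your loop items actually carry), the HTTP Request node inside the loop could POST a body like:

{ "accountUrl": "{{ $json.accountUrl }}" }

The sub-flow’s Webhook node then exposes it as {{ $json.body.accountUrl }}, since the Webhook node nests the incoming request body under body.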

Using a sub-workflow should also work fine with the Execute Workflow node. Just remember to clear the data at the end so it doesn’t get passed back to the main workflow.

Thank you for your replies! @Luke_Austin and @BramKn
How do I do something like that?

So I would send off a webhook to the sub-workflow after the Set node that follows the items list? Then, once that sub-workflow is complete, trigger back to this one? And then repeat until complete?

To add more context: I am scraping TikTok account feeds in this workflow, but I have the same issue when scraping users’ followers.

So after a bit of experimenting, I was able to split the workflow up, but it is not passing the data; I just keep getting the error “‘Set’ node does not exist.”

Main Workflow (screenshot).

Sub Workflow (screenshot).

In that scenario, your data reference will be at:

{{$('Execute Workflow Trigger').first().json}}

(if you want the first item and are sending one at a time; otherwise you’ll need a more creative solution)
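For example, to pull a single field out inside the sub-flow (accountUrl is a made-up field name for illustration):

{{$('Execute Workflow Trigger').first().json.accountUrl}}

And if you do send several items per call, {{$('Execute Workflow Trigger').all()}} returns the full array of items rather than just the first.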

I would suggest you keep “Split in Batches” in the main flow and run the contents of the loop in the sub-flow, which will run once per item and then remove itself from memory. As @BramKn said, make sure you clear the data at the end of each sub-flow so it doesn’t get passed back to the main flow. A Set node at the end of the sub-flow will then pass just a single item:

"reset": true

back to your main flow, which should now never run out of memory!
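A minimal sketch of that final Set node: tick “Keep Only Set” and define a single boolean field, so the sub-flow’s entire output is the one item

{ "reset": true }

regardless of how much data the loop body handled.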

Voila


@Luke_Austin Thank you very much! I will implement this right now! One final question.
You mentioned “you’ll need a more creative solution”.

What would a more creative solution look like if I were to send more than one at a time? Where would be the best place to look for more information on how to do this?

I suspect you’ll have to be more selective in your queries, if the APIs have an option to return less data (I don’t work with TikTok, but I imagine there’s a lot of metadata in that feed, of which you actually need very little).

You can use

{{$json}}

after the “Execute Workflow Trigger” node to receive all the items passed by the “Execute Workflow” node in the main flow, but if there is a lot there, you’ll run into the same memory problem. Remember, JSON is not very efficient in terms of memory usage; what seems like a small amount of data, once converted to JSON, quickly uses up your/the cloud system’s resources. You’d be better off using a Set node before “Split in Batches” with “Keep Only Set” ticked, stripping the data down to only what you need, which drastically reduces the amount of memory required.
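As a sketch of that trimming step (the field names are hypothetical; keep whichever ones your flow actually needs), a Code node before “Split in Batches” can do the same job as the Set node:

// n8n Code node, "Run Once for All Items" mode.
// Keep only the fields the loop needs, and drop everything else
// before it ever reaches "Split in Batches".
return $input.all().map(item => ({
  json: {
    id: item.json.id,
    username: item.json.username,
    videoUrl: item.json.videoUrl,
  },
}));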

Alright, perfect! Thank you very much for your help. After some creative use of the “Set” node, and also my first time using the Code node, it now works flawlessly!

