The cloud packages all have different memory limits, so if you were on a starter plan you would need to use a sub-workflow and manage the data in small chunks as you get it.
This would be something like having a Loop Items node that runs a sub workflow and passes in the pagination limits.
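Roughly this idea, as a sketch (plain TypeScript, not actual n8n node code — the total count, page size and descriptor fields are placeholders): build a list of small page descriptors up front, then loop over them and hand each one to the sub-workflow, which only ever fetches and processes that slice.

```typescript
// Sketch: generate one descriptor per page so the main workflow only
// loops over tiny items, and the sub-workflow does the heavy lifting
// for a single slice at a time.
type Page = { offset: number; limit: number };

function buildPages(totalItems: number, pageSize: number): Page[] {
  const pages: Page[] = [];
  for (let offset = 0; offset < totalItems; offset += pageSize) {
    pages.push({ offset, limit: pageSize });
  }
  return pages;
}

// e.g. 10,000 items in slices of 250 -> 40 small sub-workflow runs
console.log(buildPages(10_000, 250).length); // 40
```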
Ok, but I also tried to batch the dataset after I received it, and that also crashes. How can I make a sub-workflow that splits the dataset into batches without it crashing?
@LinkedUp_Online Is there a way you can split the items you receive from the API? If possible, can you batch them before sending them to n8n?
Alternatively, get the data as a CSV and read it as binary to see if that avoids the crash.
Does the API you are using have an option to only get a few items at a time? Normally it would be something like: fetch 100 items > process them, then fetch the next 100 and repeat.
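Something like this rough TypeScript sketch of the fetch-then-process loop — the URL, the limit/offset parameter names and the response shape are assumptions about your API, not anything n8n-specific, so adjust to whatever your endpoint actually supports:

```typescript
// Hypothetical paginated fetch: pull one page, process it, discard it,
// then move on to the next page so the full dataset is never in memory.
async function processAll(baseUrl: string, pageSize = 100): Promise<void> {
  let offset = 0;
  while (true) {
    const res = await fetch(`${baseUrl}?limit=${pageSize}&offset=${offset}`);
    const batch: unknown[] = await res.json();
    if (batch.length === 0) break; // no more pages

    await processBatch(batch);     // handle just this slice
    offset += pageSize;            // then ask for the next one
  }
}

async function processBatch(items: unknown[]): Promise<void> {
  // placeholder for whatever per-item work you need
  console.log(`processed ${items.length} items`);
}
```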