Describe the problem/error/question
I am receiving a dataset of 6911 records from an API call, but I get a 413 error and the workflow crashes:
I tried to split the data into batches after the request, but it also crashes.
Is there any way I can receive a dataset like this through an API without n8n crashing?
Would it help to run it on a self-hosted n8n instance with more memory?
Thanks in advance!
It does work in my self-hosted version:
Is there a way to also let it work in the cloud version?
What is the error message (if any)?
Request failed with status code 413
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: n8n@ai-beta
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
- Operating system: Windows
The cloud plans all have different memory limits, so if you are on a Start plan you would need to use a sub-workflow and manage the data in small chunks as you receive it.
This would be something like having a Loop Items node that runs a sub workflow and passes in the pagination limits.
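To make the Loop Items idea concrete, a Code node before the loop could emit one item per page, each carrying the pagination limits the sub-workflow would use. This is a minimal sketch: the total of 6911 comes from the original post, while the batch size of 500 and the offset/limit parameter names are assumptions you would adapt to the actual API.

```javascript
// Generate one item per page so a Loop Items node can run the
// sub-workflow once per chunk instead of loading everything at once.
const TOTAL = 6911; // total records (from the original post)
const BATCH = 500;  // hypothetical chunk size small enough for cloud memory

const pages = [];
for (let offset = 0; offset < TOTAL; offset += BATCH) {
  pages.push({ offset, limit: Math.min(BATCH, TOTAL - offset) });
}

// In an n8n Code node you would end with:
//   return pages.map(p => ({ json: p }));
console.log(pages.length);            // 14 sub-workflow runs
console.log(pages[pages.length - 1]); // last chunk covers the remainder
```

The sub-workflow then makes one HTTP request per run using `offset` and `limit`, so no single execution ever holds the full dataset in memory.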
Ok, but I also tried to batch the dataset after receiving it, and it still crashed. How can I make a sub-workflow that splits the dataset into batches without it crashing?
@LinkedUp_Online Is there a way you can split the items you receive from the API? If possible, can you batch it before sending it to n8n?
Alternatively, get the data as a CSV and read it as binary, to see if that avoids the crash.
Does the API you are using have an option to only get a few items at a time? Normally it would be something like fetch 100 items > process them, Then fetch the next 100 and repeat.
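The fetch-then-process pattern described above can be sketched as a loop. This is a generic illustration, not the actual API: `fetchPage` is a hypothetical stand-in for the real HTTP call, and the `offset`/`limit` parameters are assumptions to check against the API's own pagination docs.

```javascript
// Stand-in dataset so the sketch runs without a network call.
const DATA = Array.from({ length: 250 }, (_, i) => ({ id: i }));

// Placeholder for a real request such as GET /records?offset=..&limit=..
function fetchPage(offset, limit) {
  return DATA.slice(offset, offset + limit);
}

const PAGE_SIZE = 100;
let offset = 0;
let processed = 0;

// Fetch a page, process it, discard it, then fetch the next page.
// Memory use stays bounded by PAGE_SIZE instead of the full dataset.
while (true) {
  const page = fetchPage(offset, PAGE_SIZE);
  if (page.length === 0) break;
  processed += page.length; // real processing would happen here
  offset += PAGE_SIZE;
}

console.log(processed); // all 250 records handled, 100 at a time
```

The key point is that each iteration only ever holds one page in memory, which is what keeps a memory-limited cloud instance from crashing.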
Thanks for the suggestions! I'll have a look at the Apify API to see what's possible.