Managing a query that's too large and generates a status 413 error

Describe the issue/error/question

Hi! I am currently working on a large dataset that I need to analyze. I use an API call whose output is approx. 12.5k records; the resulting file is approx. 20 MB. As a result, the next node in my workflow (or the one after it) generates a 413 error code.
From my understanding, this could be solved by adjusting the configuration of our local n8n installation to increase the maximum accepted payload size or the available memory.
I'm looking into this option, but I'd also like to explore other solutions. Similar topics on the subject seem to have been solved by increasing the payload limit (see 1 or 2) rather than by splitting the output.
I've tried a few things, such as two Item Lists nodes, one keeping the first X items and the other the last 12.5k-X items, but the workflow still crashes. Like scottjscott in 1, I'm wondering whether there is such a method in n8n itself or whether I should explore other options (assuming that changing our local config does not work).
One thing I could see would be to use an HTTP Request node to get the records and use the offset value provided by the API to iterate over them… except that the request has a hard limit of 500 records per call, so the workflow would need to account for 14 iterations.
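
Something along these lines is what I have in mind, as a rough standalone TypeScript sketch (the `offset`/`limit` parameter names and the `records` response field are just placeholders for whatever the API actually uses):

```typescript
// Offset-based pagination: fetch 500 records per call until the API runs out.
// BASE_URL, the offset/limit query parameters and the `records` field are placeholders.
const BASE_URL = "https://api.example.com/records"; // hypothetical endpoint
const PAGE_SIZE = 500;                              // hard limit per call

async function fetchAllRecords(): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;

  while (true) {
    const res = await fetch(`${BASE_URL}?offset=${offset}&limit=${PAGE_SIZE}`);
    if (!res.ok) throw new Error(`Request failed with status ${res.status}`);

    const page = (await res.json()) as { records: unknown[] };
    all.push(...page.records);

    // A short page means we have reached the end of the dataset.
    if (page.records.length < PAGE_SIZE) break;
    offset += PAGE_SIZE;
  }
  return all;
}
```

That said, a loop like this still ends up holding all the records at once, which brings me back to the same problem.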

What is the error message (if any)?

Request failed with status code 413

Information on your n8n setup

  • **n8n version:** 0.160.0, self-hosted (don’t have the details for now)

Hi @AlexPerrin, processing 500 records at a time as you suggested seems like a good approach to me. Are you encountering any specific issues with implementing or executing such a scenario?

Hi @MutedJam,
Indeed, I was wondering whether an implementation like this would simply move the problem to the end of my workflow, since I would end up using the same amount of memory or once again generating a file that's too big to handle.

I think I should be able to avoid that if I write the results of each call and transformation to my end destination (currently a Google Sheet) 500 at a time. I guess my question is whether doing it like this wouldn't end up putting a similar load on n8n as my current approach after, say, 8 loops.
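
Roughly, what I'm picturing is something like this (TypeScript sketch; `fetchPage` and `appendRowsToSheet` are hypothetical helpers standing in for the API call and the Google Sheets write):

```typescript
// Process and flush records 500 at a time so no more than one chunk is held in memory.
const PAGE_SIZE = 500;

async function run(
  fetchPage: (offset: number, limit: number) => Promise<unknown[]>,
  appendRowsToSheet: (rows: unknown[]) => Promise<void>,
): Promise<void> {
  let offset = 0;
  while (true) {
    const page = await fetchPage(offset, PAGE_SIZE);
    if (page.length === 0) break;

    await appendRowsToSheet(page); // write this chunk before fetching the next one
    if (page.length < PAGE_SIZE) break;
    offset += PAGE_SIZE;
  }
}
```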

I’ll try it out this afternoon and come back to you if I still get a 413 error. Thanks!

So if the error occurs immediately after reading from the Google Sheet, this will be a bit tricky (you’d essentially need to read 500 rows at a time).

To avoid sending a lot of data to your UI at once (which is what causes the 413), you could create a parent workflow that holds very little data itself (maybe just the ranges of each block of 500 items in your Google Sheet). You could then use the Split In Batches node to split this data into batches of 1 and call a sub-workflow through the Execute Workflow node.
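
For example, the parent workflow could start with a Function node that emits one tiny item per block of 500 rows, something like this sketch (the 12,500 total and the startRow/endRow field names are just placeholders; the code goes inside the node, which is why it ends with a bare return):

```typescript
// Parent workflow Function node: one small item per block of 500 sheet rows.
// TOTAL_ROWS and the field names are placeholders — adjust to your data.
const TOTAL_ROWS = 12500;
const BLOCK_SIZE = 500;

const out = [];
for (let start = 1; start <= TOTAL_ROWS; start += BLOCK_SIZE) {
  out.push({
    json: {
      startRow: start,
      endRow: Math.min(start + BLOCK_SIZE - 1, TOTAL_ROWS),
    },
  });
}
return out; // feed this into Split In Batches (batch size 1) → Execute Workflow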

In your sub-workflow, process the 500 items and then make sure to clear out any data (e.g. through a Set node executing only once) at the end.
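
The last node of the sub-workflow could then return a single small item instead of the 500 processed ones, for example a Function node running once for all items (the field names are just an example; `items` is the variable n8n provides inside the node):

```typescript
// Last node of the sub-workflow: drop the 500 processed items and return one
// tiny summary item, so almost no data flows back to the parent workflow.
return [
  {
    json: {
      processed: items.length, // how many items this execution handled
      done: true,
    },
  },
];
```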

This way n8n would never process more than 500 items at the same time.

Parent Example

Sub Example

In your parent workflow you’d only see a single result for each batch which should be a manageable amount of data:

My example processes only 10 items per batch, but that’s because my example sheet is too small for 500 items per batch :wink:. I hope this nevertheless helps.

Thanks a lot for the detailed explanation, I’ll put it in practice :slight_smile: