Data processing limits - status code 413

Hi,

I have a SQL Server node that runs a stored procedure returning 198k rows. I need to process this output through a workflow, but most of the nodes I've tried to run after the SQL Server node fail with a 413 status code error.

I'm guessing this is n8n complaining that the data payload is too big. I've tried a variety of ways to break down the 198k-row payload, but no matter what I try I keep getting the 413 error. Before I go any further: are there any tips for handling a large payload like this, or am I beyond the limits of what I should be attempting with n8n? If there is a method I will persevere; otherwise I'll explore other options.

Thanks
Scott

Hi @scottjscott, chances are this is simply too much data to keep in memory. But you might want to check where the error originates as per this post:

If your n8n instance is still healthy, you might want to adjust your reverse proxy configuration.
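To give an idea of the kind of adjustment meant here, below is a minimal sketch assuming the proxy in front of n8n is Traefik v2 configured through the file provider. The middleware name `large-body` and the exact byte limit are placeholders, not taken from anyone's actual setup, and the middleware still needs to be attached to the router that fronts n8n:

```yaml
# Illustrative Traefik v2 dynamic configuration (file provider).
# "large-body" is a hypothetical middleware name; reference it from the
# `middlewares` list of the router that serves n8n.
http:
  middlewares:
    large-body:
      buffering:
        maxRequestBodyBytes: 128000000  # allow request bodies up to ~128 MB; larger requests get a 413
```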

Thanks - I'll give that a go, although I've just realised I should have said I was doing this in n8n desktop (just testing), so I imagine I've hit a limit sooner than I would ordinarily.

Hi @MutedJam, I've made a tweak to the configuration of my Docker container, adjusting the docker-compose.yml to include the following setting under the labels section of the n8n service:

  • traefik.backend.buffering.maxRequestBodyBytes=128000000

I don't understand the detailed ins and outs, but this enables Traefik's "buffering middleware", and in my case I set the limit to 128,000,000 bytes (128 MB). You can read more about it here in the Traefik docs.
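For anyone who wants to see it in context, here is roughly what that part of my docker-compose.yml looks like. Only the buffering label is the actual change; the image name and the `traefik.enable` label are illustrative stand-ins for whatever your service already has:

```yaml
# Sketch of the n8n service in docker-compose.yml, using the Traefik v1
# label syntax from this thread. Everything except the buffering label
# is a placeholder.
services:
  n8n:
    image: n8nio/n8n          # assumed image name
    labels:
      - traefik.enable=true   # assumed pre-existing label
      - traefik.backend.buffering.maxRequestBodyBytes=128000000  # buffer request bodies up to ~128 MB
```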

I've got some further optimisation work to do to speed things up, and if I run into anything material I will post it here. For now, though, I've got a working solution, so thanks for the tips.

Regards
Scott
