I'm trying to run some ETL processes in a workflow. My first run with 1000 rows completes smoothly.
After that I changed the LIMIT in the SQL statement and ran the same workflow with 46000 rows, and the workflow generated the following error:
Problem running workflow
There was a problem running the workflow: Request failed with status code 413
My workflow is only a SQL statement and a Function Item node with one JavaScript regex function.
Status code 413 means "Payload Too Large", so 46k rows are apparently simply too much data. In this case you would have to split the job into multiple runs with fewer rows per execution, for example along these lines:
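A minimal sketch of that, assuming your source is a plain SQL table (the table and column names below are just placeholders for whatever your statement actually selects):

```sql
-- Page through the table in chunks instead of pulling all 46k rows at once;
-- bump OFFSET by the page size on each run (0, 1000, 2000, ...).
SELECT id, payload
FROM source_table      -- placeholder name
ORDER BY id            -- a stable ORDER BY keeps the pages from overlapping
LIMIT 1000 OFFSET 0;
```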
Sorry, I actually don't know where that error comes from, whether from n8n (so Node.js) or from the reverse proxy. But considering that n8n apparently crashes even with 10k rows, as it probably runs out of memory, increasing the request limit wouldn't really help anyway.
Hey, I'm already down to just 3 items per run. The thing is that some of them contain binary data (a PDF attachment) that may be a couple of KB big. But I'm executing a node that is not reading those fields; do they still count? Do I need to remove those fields in the pipeline?
Hi @danielo515, I am sorry you're having trouble. This is a rather old topic that's already marked as solved, so going forward you might want to open a new thread in such cases, as it's easy to miss follow-up comments here (and also because n8n changes a lot over the course of multiple years).
That said, the answer to your question is (most likely, seeing as I don't know your workflow) yes. Even when not reading (or otherwise processing) a specific field, n8n keeps this data and sends it to the UI when you execute your workflow manually. If you are currently seeing 413 responses, you would want to adjust the N8N_PAYLOAD_SIZE_MAX environment variable in n8n accordingly and double-check that your reverse proxy (if you are using one) accepts the respective payload sizes as well, like Jan suggested above.
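As a rough sketch, and assuming a Function node placed right after the node that produces the attachments, you could pass the JSON data through but drop the binary part of each item so it no longer travels with the execution:

```javascript
// n8n Function node: keep each item's JSON fields but drop the
// binary payload (e.g. PDF attachments) so it no longer counts
// toward the execution data sent to the UI.
return items.map(item => ({ json: item.json }));
```

If you still hit the limit after that, note that N8N_PAYLOAD_SIZE_MAX takes its value in MiB (the default is 16), and a reverse proxy enforces its own cap on top of that; nginx's client_max_body_size, for instance, defaults to just 1 MB.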