Hi all, I have a 55k-row dataset that I need to export from MongoDB to an Excel spreadsheet, but I receive a 413 error when sending the data to a Spreadsheet File node (the workflow runs smoothly with less data). I have tried the Split In Batches node before sending to the Spreadsheet File node but could not make it work. Does anyone have any idea how to solve this? What would be the ideal solution for exporting larger datasets to spreadsheets? Splitting into multiple spreadsheets? How would this be achieved?
A 413, I think, is Payload Too Large. How do you have n8n installed/configured, and what version are you running?
Yes, 99% sure it is Payload Too Large, since the workflow runs smoothly with less data. Currently running the self-hosted version 0.136 on a Google Cloud VM e2-medium (2 vCPUs, 4 GB memory).
It won’t be resource related; it will be with the web service. I just need the info to work out if it is internal or something sat in front of n8n.
Hey @Sergio_Spieler,
It does seem to be an issue with the server you’re making a request to. Did you try adding a waiting time between the batches? Maybe that helps
I am querying data straight from MongoDB Atlas through the MongoDB node, and the data flows normally to this node. After this, I use the Set node to map the fields (32 fields) and then run through the Spreadsheet File node, where the error occurs. Is there any specific row count or data size limitation with this Spreadsheet File node? Would you recommend any workaround?
Do you have any reverse proxies involved like traefik, nginx or Apache? What version of n8n are you on as well?
Yes, we are using nginx with reverse proxy - n8n version 0.136
location / {
    proxy_pass http://localhost:5678;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    chunked_transfer_encoding off;
    proxy_buffering off;
    proxy_cache off;
}
n8n v0.136.0
Can you try increasing the client_max_body_size option in nginx? I think the default is 1MB, and I suspect this is where your 413 error is coming from. You can configure it in your virtual host; maybe try setting it to 200M just to see what happens. Don’t forget to run nginx -t before restarting the service to make sure the config is ok.
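As a concrete sketch of where that directive could go in the config you posted (the 200M value and server-block layout are assumptions, adjust to your setup):

```nginx
server {
    # allow request bodies up to 200M (the default is 1M)
    client_max_body_size 200M;

    location / {
        proxy_pass http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        chunked_transfer_encoding off;
        proxy_buffering off;
        proxy_cache off;
    }
}
```

Then nginx -t to validate, and reload the service.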
Will try that and let you know. On some random occasions I receive a browser error (below) and the workflow breaks.
I suspect that could be down to the nginx config as well, but one step at a time, I think.
I have tried increasing client_max_body_size to 100MB, but the browser keeps crashing as per the image above ("something went wrong while displaying this webpage - error code 5").
A quick search online suggests it is something odd with Chrome; do you have another browser you can use as a test?