On your self-hosted install can you try setting `N8N_PAYLOAD_SIZE_MAX` to something like 32 and see if that helps? On cloud you may have to work with smaller amounts of data, as the default there is 16.
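For a Docker-based install, the variable can be passed as an environment variable. A minimal docker-compose sketch (the service name, image tag, and port mapping here are illustrative, not from this thread):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      # Allow payloads up to 32 MB (the n8n Cloud default is 16)
      - N8N_PAYLOAD_SIZE_MAX=32
    ports:
      - "5678:5678"
```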
Hi @Jon, thanks! I made the changes and it works on my Docker instance now.
Is there a way to batch or filter at the beginning of the workflow to prevent this problem?
Or is it about the total payload in the workflow?
Last question: why is the workflow able to execute in one go, but crashes with a "too large payload" error when executing a single node? That just doesn't make sense to me.
You could try using a loop to batch the data into smaller chunks. Don't forget that 700 items that each contain just one character are very different from 700 items with multiple columns and many characters in each column, so basing the batching on the number of items can be tricky; it is the data size of those items that causes the issues.
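To illustrate why item count alone is a poor proxy for payload size, here is a rough Python sketch that groups items into batches by their serialized JSON size instead. The 16 KB threshold and the sample items are illustrative assumptions, not n8n internals:

```python
import json

def batch_by_size(items, max_bytes):
    """Group items into batches whose total serialized JSON size stays under max_bytes."""
    batch, size = [], 0
    for item in items:
        item_size = len(json.dumps(item).encode("utf-8"))
        if batch and size + item_size > max_bytes:
            yield batch
            batch, size = [], 0
        batch.append(item)
        size += item_size
    if batch:
        yield batch

# 700 one-character items fit in a single batch...
tiny = [{"v": "x"}] * 700
# ...while 700 rows with 10 wide columns need many batches.
wide = [{"col%d" % i: "y" * 100 for i in range(10)}] * 700

print(len(list(batch_by_size(tiny, 16_000))))  # → 1
print(len(list(batch_by_size(wide, 16_000))))  # → 50
```

The same item count produces very different batch counts, which is why a fixed "items per batch" setting can still overflow the payload limit.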
When you run a single node we sometimes post all of the data along with it, which we don't do when you run the entire workflow, as they are different operations. I think this may change in the future though.