Do you know how to troubleshoot the problem? I have tried to set up the Error Workflow, but it does not trigger, and the workflow does process a huge amount of JSON data (per batch it generates about a 10MB JSON file to put into MySQL):
I guess the huge amount of data is the problem. At some point it probably uses so much memory that the process (which runs the workflow) crashes, which is why you get the “unknown reason” error.
To avoid that, it is best if you split it up into different workflows and call them via “Execute Workflow”. In your case I guess it would be the nodes between “Run in Batches” and “Continue Batches”.
If you use Webhooks, it starts the workflow in an additional process. So n8n will use more memory than with “Execute Workflow”, which runs the workflow in the same process.
No, there is no timeout at all. When I google this error, I find the following:
So it looks like the problem may be that MySQL closes the connection. No idea why it would do that. Maybe it has to do with this bug:
Maybe at some point it hits the connection limit and that then causes the problem (just a guess). This bug has already been fixed and will be released with the new version tomorrow.
I played around with both “Webhook” and “Execute Workflow”. The unexpected error still sometimes appeared when I used “Execute Workflow” - maybe due to the same memory limit - so I changed it to Webhook. It has now been running well (using Webhook), synchronizing data between the API and MySQL for reporting purposes.
The EPIPE problem seems to go away as I increase max connections from 150 to 1000 and the max idle connection time to 30s - looking forward to the commit in the next version for the MySQL node (to close connections after activity).
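For anyone wondering what “closing connections after activity” looks like on the client side, here is a minimal sketch using the mysql2 package - that the node uses mysql2 is an assumption on my part, and the connectionLimit/maxIdle/idleTimeout values are illustrative:

```js
// Minimal sketch of a pooled MySQL client that caps concurrent
// connections and closes idle ones (mysql2 pool options).
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'n8n',
  password: 'secret',
  database: 'reports',
  connectionLimit: 10, // never open more than 10 connections
  maxIdle: 2,          // keep at most 2 idle connections around
  idleTimeout: 30000,  // close idle connections after 30 seconds
});

// Each query borrows a connection from the pool and returns it when
// done, so connections are not left dangling until the server's
// connection limit is hit.
async function insertBatch(rows) {
  // rows: an array of arrays, one inner array per row
  await pool.query('INSERT INTO report (payload) VALUES ?', [rows]);
}
```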
Loving this product, as it makes aggregating data between systems, and acting as a middleware/translator for reporting, much easier than my previous approach using Excel and VBA.
Ah, very strange that it still crashes with “Execute Workflow”. Are you making sure that no huge amounts of data get returned by the last node of the called workflow? Because unless the last node returns only very little data (for example by overwriting it with a Set-Node), all the data ends up in the Main-Workflow again and we are almost back where we started.
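As a minimal sketch of that trick (a Function node as the last node of the called workflow would also work; the returned fields here are just examples):

```js
// Last node of the sub-workflow: drop the heavy payload and return
// only a tiny summary, so the parent workflow that called us via
// "Execute Workflow" does not get the full data set back.
return [{ json: { success: true, itemCount: items.length } }];
```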
So it then really seems to be the connection issue. I will release the new version later today, which should fix that problem properly.
Great to hear that you enjoy using n8n. Everything is still early, so hopefully we can make many things easier and more stable in the future!
I have a similar issue with a webhook array payload coming in with about 35MB of JSON data, which I then just try to split into items and process. Using Docker I can see it is not out of memory or CPU, but the flow dies. Any ideas on how I can process this? I am unable to reduce the size of the incoming JSON webhook payload.
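For illustration, a split like the one described usually looks something like this in a Function node - the `body` property path and the `data` fallback are assumptions, adjust them to the real payload shape:

```js
// Function node: fan the single big webhook item out into one n8n
// item per record. The Webhook node puts the request body under
// `json.body`; this assumes the body itself is the array of records.
const body = items[0].json.body;
const records = Array.isArray(body) ? body : body.data;
return records.map(record => ({ json: record }));
```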