I was on v0.107.0 (from n8n cloud) and just upgraded to the latest stable version (0.123.1); restarting my instance removed all the “in progress” executions.
I think I’ve corrected what made it happen, so I guess this won’t come up again.
But thanks a lot for the tip!
Hi, since yesterday I’ve been having the same issue with hundreds of executions, but I can’t find a way to get rid of them all at once.
I’m sorry you’re facing this issue. Can you share some more details? Are you on the latest version? Which nodes are you using in your workflows? Did you make changes before you started getting the issue?
Hi, when I clicked the Stop button, nothing happened (just like in the screenshot: the disabled stop square with the loading icon).
My workflow used a webhook as a trigger, and I had initially set the Response Mode to “On Received” instead of “Last Node”. Once I changed that, the problem went away.
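For anyone hitting the same thing, this is roughly where that setting lives in the exported workflow JSON (a sketch from memory with placeholder values; exact field names and values may differ between n8n versions):

```json
{
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "parameters": {
    "path": "my-endpoint",
    "responseMode": "lastNode"
  }
}
```

With “Last Node” the webhook only responds once the workflow finishes, while “On Received” responds immediately.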
I was able to reproduce the problem:
While building my workflow, I clicked the run button on a “SplitInBatches” node. The problem showed up, and I now have a new “endless execution” in my executions list.
Can you please answer the following questions? This will help us better understand the problem.

- What is the size of the incoming data?
- Does the workflow run into an infinite loop?
- Can you please share your workflow? Please make sure you are not sharing any sensitive data.
- Did you make changes before you started getting the issue?
No, this seems to happen after a daily cron job executes. The cron job fetches about 20k rows at most and makes one HTTP request for every row. After that, the workers’ memory shoots up and stays up indefinitely.
From then on we see ECONNRESET errors from the HTTP Request node to random endpoints, even 12 hours after the initial cron job.
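For reference, the workflow is shaped roughly like this (a simplified, hand-written sketch, not the real export; node names are placeholders):

```json
{
  "nodes": [
    { "name": "Cron", "type": "n8n-nodes-base.cron" },
    { "name": "Fetch Rows", "type": "n8n-nodes-base.httpRequest" },
    { "name": "Per-Row Request", "type": "n8n-nodes-base.httpRequest" }
  ],
  "connections": {
    "Cron": { "main": [[{ "node": "Fetch Rows", "type": "main", "index": 0 }]] },
    "Fetch Rows": { "main": [[{ "node": "Per-Row Request", "type": "main", "index": 0 }]] }
  }
}
```

So a single run pushes ~20k items through the “Per-Row Request” node, i.e. ~20k HTTP calls.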
Yes, I have two workers online (though Kubernetes spins up more if CPU goes over the limit). They are working fine as far as I can tell and are processing jobs. Even after deleting the workers and spinning them up again, the issue persists.
Based on the information provided, I don’t think it’s a problem in the scaling setup itself.
It looks like the data volume is too large and n8n might be having issues dealing with it.
Can you perhaps divide this cron job into 2 separate workflows, each working with 10k rows at a time, just so we can see if n8n works fine for a smaller dataset?
The main problem I can see is that, while working through all the data, n8n accumulates information in memory, so you may be running out of RAM, or n8n may be struggling to continue due to memory limitations.
Do you have any memory monitoring in place to see how it is behaving?
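If it helps while testing, a variant of that split is to keep one workflow but loop over the rows in chunks with a SplitInBatches node, so only one chunk of items flows through the HTTP request at a time. A sketch (parameter names are from a typical export and may differ in your version):

```json
{
  "name": "SplitInBatches",
  "type": "n8n-nodes-base.splitInBatches",
  "parameters": {
    "batchSize": 1000
  }
}
```

The last node of the loop body then gets wired back into the SplitInBatches input, so the next chunk only starts after the previous one has finished.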
Update: the HTTP nodes that end up failing fail with this error:
NodeApiError: UNKNOWN ERROR - check the detailed error for more information
at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest.node.js:832:27)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/src/WorkflowExecute.js:424:47
We found and fixed a few issues around this. It turns out that those jobs are not actually running anymore; they just did not get removed correctly from an internal “running” list and so were wrongly displayed as running. That is also the reason they could not be stopped (they had already finished). Hopefully we found all the places where this happened.
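To illustrate the failure mode (a hypothetical sketch, not n8n’s actual code): think of an in-memory map of active executions where some code path forgets the cleanup step.

```typescript
// Hypothetical illustration only - not n8n's actual implementation.
const activeExecutions = new Map<string, { startedAt: Date }>();

async function runExecution(id: string, work: () => Promise<void>): Promise<void> {
  activeExecutions.set(id, { startedAt: new Date() });
  try {
    await work();
  } finally {
    // If a code path skips this cleanup (e.g. on certain errors), the entry
    // stays in the map forever: the UI keeps listing the execution as
    // "running", and stopping it is a no-op because nothing is left to stop.
    activeExecutions.delete(id);
  }
}
```

In our case the jobs had finished, but their entries stayed in that list.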
I will update here once the new version with those fixes has been released. That will likely be Saturday or Sunday.