I set up a workflow timeout, expecting that executions would no longer last longer than the timeout, but I still have examples of executions lasting longer than the timeout.
As you can see in this screenshot, the timeout setting for the whole workflow is 30 seconds:
Hi @William_Guerzeder, afaik the workflow timeout is only checked between nodes (similar to the behaviour when you manually stop an execution).
Assuming the operation is slow because you are deleting a large number of files, you could consider breaking down your data into smaller batches using the Split In Batches node. That way, n8n would check whether the workflow timeout has been reached after each batch rather than after processing all items (see the sketch after the examples below).
Consider these two examples:
This workflow will only be stopped once all items have been processed.
Batches of 10
This workflow can be stopped after every 10 items have been processed, like so:
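In case it helps, here is a rough TypeScript sketch of the idea. This is not n8n's actual implementation, just an illustration of why a timeout that is only checked *between* operations can't interrupt one long-running node, but can interrupt a batched loop:

```typescript
// Stand-in for a slow "delete file" operation: a single long-running call
// that cannot be interrupted from the outside once started.
async function deleteFiles(keys: string[]): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, keys.length * 50));
}

async function runWithTimeout(
  allKeys: string[],
  batchSize: number,
  timeoutMs: number,
): Promise<void> {
  const start = Date.now();
  for (let i = 0; i < allKeys.length; i += batchSize) {
    // The timeout is only checked here, between batches. With
    // batchSize === allKeys.length (i.e. no batching), this check runs
    // only once, after everything has already been processed.
    if (Date.now() - start > timeoutMs) {
      throw new Error(`Workflow timed out after ${Date.now() - start} ms`);
    }
    await deleteFiles(allKeys.slice(i, i + batchSize));
  }
}

// Batches of 10: the timeout can now fire after each group of 10 items.
runWithTimeout(Array.from({ length: 100 }, (_, i) => `file-${i}`), 10, 30_000)
  .catch((err) => console.error(err.message));
```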
But sometimes, you know, one node lasts forever because something went wrong, and that's what happened with my "delete file" node.
It's disappointing that I can't easily implement the "fail fast and retry" principle by setting up timeouts.
Thanks a lot for your time. I'll take a look at how to work around the S3 default timeout using the HTTP Request node, as you advised in my other ticket.
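For what it's worth, this is the kind of explicit per-request timeout I have in mind. Just a sketch using `fetch` with an `AbortController`; the endpoint URL is a placeholder, not my real bucket:

```typescript
// Abort the request ourselves after timeoutMs instead of waiting on a
// hung connection (the "fail fast" half; retrying can then be layered on top).
async function deleteWithTimeout(url: string, timeoutMs: number): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { method: "DELETE", signal: controller.signal });
    if (!res.ok) throw new Error(`Delete failed: HTTP ${res.status}`);
  } finally {
    clearTimeout(timer); // Always clear the timer, on success or failure.
  }
}

// Fail fast: give up after 5 seconds rather than blocking the whole workflow.
deleteWithTimeout("https://example-bucket.s3.amazonaws.com/some-key", 5_000)
  .catch((err) => console.error(err.name === "AbortError" ? "Timed out" : err.message));
```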
No worries @William_Guerzeder, I can see how frustrating this is. Perhaps you might want to raise separate feature requests here on the forum that would address your pain points (something like "have timeout interrupt running node execution" and "add timeout setting to all nodes")? This would allow other users to have their say on these ideas and let the team consider implementing them going forward.