Hi,
I’m trying to build reliable logic to prevent simultaneous workflow executions and avoid duplicated runs. However, when I implemented the solutions suggested in the community for this, I ran into some trouble.
- Using /rest/executions-current Endpoint:
It is a good workaround for preventing simultaneous executions. However, there are two bottlenecks with this solution (see the sketch after this list):
- It puts too much stress on the n8n main process, causing “socket hang up” errors. @krynble conveyed his thoughts on this matter well.
- It does not work as expected if the workflow execution is very fast and the trigger interval is frequent, as @Denis_VICTORIA mentioned in a related topic.
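For reference, here is a minimal sketch of that check, assuming it runs in a Code node with `fetch` available; the base URL, auth header, and the exact response shape of the internal `/rest/executions-current` endpoint are assumptions, not guaranteed API:

```typescript
// Sketch only: query the internal /rest/executions-current endpoint and
// decide whether another execution of the same workflow is already active.
// N8N_BASE_URL, the auth header, and the response shape are assumptions.
const N8N_BASE_URL = 'http://localhost:5678';

async function hasConcurrentExecution(workflowId: string): Promise<boolean> {
  const res = await fetch(`${N8N_BASE_URL}/rest/executions-current`, {
    headers: { Accept: 'application/json' /* plus session/API auth as needed */ },
  });
  if (!res.ok) throw new Error(`executions-current returned ${res.status}`);
  const body = (await res.json()) as { data?: Array<{ id: string; workflowId: string }> };
  // More than one entry for this workflow means another run is already in flight
  // (the current execution itself is usually listed as well).
  const running = (body.data ?? []).filter((e) => e.workflowId === workflowId);
  return running.length > 1;
}

// Example usage inside a Code node ($workflow.id would supply the id in n8n):
// if (await hasConcurrentExecution($workflow.id)) { return []; }
```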
- Using Redis or Similar Layers
This is also a good workaround for this case. Just get and set the workflow’s status as ‘running’ or ‘idle’ to identify whether the workflow is already running (see the sketch after this list). This works as expected in my case, but unfortunately it is not as simple as it seems, because:
- If the output of a node is empty, the execution stops without setting the key back to ‘idle’ in Redis. To overcome this, we must enable ‘Always Output Data’ on almost every node, then add an ‘IF’ node that checks whether the output is empty and connect it to both the current flow and the Redis Set node.
- If the workflow has an error workflow, the error workflow must also be configured to set the key’s status back to ‘idle’.
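For illustration, a minimal sketch of the Redis lock idea using ioredis, with an atomic `SET ... NX EX` instead of separate get/set calls; the key name and TTL are my own assumptions. The TTL acts as a safety net, so the “empty output” and error-workflow cases above cannot leave the key stuck on ‘running’ forever:

```typescript
import Redis from 'ioredis';

const redis = new Redis(); // connection options are environment-specific

// Sketch only: acquire a per-workflow lock atomically. The key name and TTL
// are illustrative; the TTL guarantees the lock expires even if the workflow
// ends without ever reaching the node that releases it.
async function acquireLock(workflowId: string, ttlSeconds = 300): Promise<boolean> {
  const result = await redis.set(`workflow-lock:${workflowId}`, 'running', 'EX', ttlSeconds, 'NX');
  return result === 'OK'; // null means another execution already holds the lock
}

async function releaseLock(workflowId: string): Promise<void> {
  await redis.del(`workflow-lock:${workflowId}`);
}

// Example usage at the start of a run:
// if (!(await acquireLock($workflow.id))) { return []; } // skip duplicate run
// ...run the workflow, then call releaseLock($workflow.id) at the end
// (and from the error workflow as well).
```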
Here is a sample that I am trying to build:
As a result, these solutions become challenging to maintain. I think it would be great if each workflow had an option to prevent simultaneous executions.
Please share your thoughts or vote in the poll below to help the n8n team decide whether to prioritize this.
- I prefer trying to push limits
- I prefer this to be built-in