In workflows with Postgres nodes (the same problem likely applies to other database nodes), we update / insert / delete data during the execution, for data manipulation, preparation, or other purposes.
If, for any reason, the workflow execution fails on one node (let's say a node near the end of the workflow), all the data inserted / updated / deleted in the database before the failed node remains inserted / updated / deleted.
But the workflow is not finished. We cannot simply re-execute the same workflow, because all the previous data has already been computed, and we cannot easily resume the workflow from the point where the execution failed.
One solution is to make the workflows much more complex and try to handle all the cases where data is already present, etc… Even then, it's not always possible.
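To illustrate the "handle data already present" approach, here is a minimal sketch of an idempotent write, using SQLite in place of Postgres (the table and column names are made up for the example). Postgres supports the same `INSERT … ON CONFLICT` syntax, which lets a re-run of the workflow update an existing row instead of failing on a duplicate key:

```python
import sqlite3

# In-memory SQLite database standing in for Postgres (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def upsert_order(order_id, status):
    # ON CONFLICT makes the write idempotent: re-running the same step
    # after a partial failure updates the row instead of raising an error.
    conn.execute(
        "INSERT INTO orders (id, status) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET status = excluded.status",
        (order_id, status),
    )

upsert_order(1, "pending")
upsert_order(1, "pending")  # safe to re-run: still exactly one row
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

This only covers inserts/updates, though — deletes and multi-step computations are much harder to make idempotent, which is the point made above.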
Another solution could be to make the full workflow execution one big transaction.
Ex: an SQL node running 'BEGIN' (start transaction) at the beginning of the workflow and a node running 'COMMIT' at the end. If the workflow fails, everything automatically rolls back.
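The BEGIN / COMMIT / ROLLBACK pattern described above can be sketched like this (again with SQLite standing in for Postgres; `run_workflow` and the table are hypothetical names for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage BEGIN/COMMIT/ROLLBACK explicitly
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

def run_workflow(fail):
    # BEGIN at the start of the workflow, COMMIT at the end,
    # ROLLBACK if any intermediate node raises an error.
    conn.execute("BEGIN")
    try:
        conn.execute("INSERT INTO items (name) VALUES ('step 1')")
        conn.execute("INSERT INTO items (name) VALUES ('step 2')")
        if fail:
            raise RuntimeError("node near the end of the workflow failed")
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")  # all earlier inserts are undone
        raise

try:
    run_workflow(fail=True)
except RuntimeError:
    pass
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 0 — nothing persisted
```

One caveat: a transaction is scoped to a single database connection / session, so this only works if every node in the workflow reuses the same connection rather than opening its own.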