Is it possible to have a workflow restart itself at certain "breakpoints"

How to “restart” a flow after a particular branch runs?

Hi everyone,

I think this is pretty independent of any particular workflow.

I have a workflow that updates Google Contacts with data from HaloPSA. I have a path for new users, removed users, and updates. I’m still working on the logic for updates, but the path is there.

What I think I want to do is run the workflow, and if the workflow makes changes to Google, it will essentially start over, pulling the data from both platforms again. It should have the changed data now.

If it adds records, it should see those and not try to add them again. It should then go down the next path with changes. You can see the three paths in the workflow. The create contacts and delete contacts paths seem to be working as designed. (Are updates really this difficult? I am looking at a switch with 6 to 10 outputs, building the JSON to update the Google record.)

I hope that’s more clear than mud. Here’s my workflow:

Information on your n8n setup

  • n8n version: 1.50.1
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker on DO
  • Operating system: Ubuntu 22

Hi @russellkg,

To re-run the workflow after an update you could add an execute workflow node to the end of the update branch and have it execute this same workflow.
You’ll have to adjust the logic to make sure you don’t end up with an infinite loop or simply end up having a lot more executions than planned. I would use the IF node for this, to check for some criteria that confirms a successful update.
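As a sketch of that loop guard, here is some plain JavaScript you could drop into a Code node before the Execute Workflow node. The `updated` flag and the run counter are assumptions for illustration, not fields n8n sets for you; you would set them yourself in the update branch:

```javascript
// Hypothetical loop guard: only re-trigger the workflow while something
// actually changed, and cap the number of re-runs as a safety net.
function shouldRerun(items, runCount, maxRuns) {
  // `updated` is an assumed flag your update step sets on each item
  const changed = items.some((item) => item.updated === true);
  // the hard cap prevents an infinite loop even if the flag misbehaves
  return changed && runCount < maxRuns;
}
```

An IF node checking the same two conditions (any changes made, run count below a cap) would do the same job without code.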

Something like this:

The way you have your update branch set up, with the merge for In HaloPSA and Google, will also mean that results which were already updated get passed through again. I would add some logic, like a Filter node or a branch, to remove or edit items from Halo so that only not-yet-updated results get updated again.
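That filter logic could look something like this in a Code node. The field names (`name`, `phone`) are placeholders for whatever your real contact schema compares on:

```javascript
// Hypothetical filter: keep only Halo contacts whose Google copy still
// differs, so records that are already in sync are not updated again.
function pendingUpdates(haloContacts, googleContacts) {
  // index Google contacts by the matching key (name, in this thread)
  const byName = new Map(googleContacts.map((c) => [c.name, c]));
  return haloContacts.filter((h) => {
    const g = byName.get(h.name);
    // keep the item only if it exists in both AND a field still differs
    return g !== undefined && g.phone !== h.phone;
  });
}
```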

Let us know how you get on! :raised_hands:


hello @russellkg

Better to move the add/update contact functionality into sub-workflows, so the main flow won’t be so big and difficult (plus it will remove some weird behaviour with the Loop nodes, which may break the linking). And you can link the end of particular branches back to the start.


As for flow simplicity, it’s possible to use the Code node and some JS to remove all the Merge nodes and pack all the data there.
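A minimal sketch of that Code-node approach, matching the two contact lists on name and splitting them into the three buckets the thread describes (the field name and bucket names are assumptions, not anything n8n prescribes):

```javascript
// Hypothetical replacement for the Merge nodes: take both contact lists
// and partition them into the three buckets in one pass.
function partitionContacts(halo, google) {
  const googleNames = new Set(google.map((c) => c.name));
  const haloNames = new Set(halo.map((c) => c.name));
  return {
    toCreate: halo.filter((c) => !googleNames.has(c.name)),  // in HaloPSA only
    toDelete: google.filter((c) => !haloNames.has(c.name)),  // in Google only
    toUpdate: halo.filter((c) => googleNames.has(c.name)),   // in both
  };
}
```

Each bucket can then feed its own branch directly, with no Merge nodes in between.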

As for the Switch node, it’s easier just to update all fields even if they are empty. You can set a field to some placeholder, like “n/a” or “-”, if the property is missing:

Or you may build the JSON directly and use it in the HTTP Request node.
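A sketch of that payload builder, assuming a flat contact object (the field list here is invented for illustration; substitute your real Google Contacts fields):

```javascript
// Hypothetical update-payload builder: always set every field, using a
// placeholder for missing properties, so one request replaces a
// 6-to-10-output Switch.
function buildUpdatePayload(contact, placeholder = 'n/a') {
  const fields = ['name', 'email', 'phone', 'company', 'title'];
  const payload = {};
  for (const f of fields) {
    // nullish coalescing keeps empty strings but fills null/undefined
    payload[f] = contact[f] ?? placeholder;
  }
  return payload;
}
```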

Hi @mariana-na,

I’ve used sub-workflows many times, and I was hoping for (and received!) some brainstorming around what others would do. In previous lives, I’ve worked adjacent to actual programmers, but this is the first role where I’m designing processes. For years I’ve been the guy who keeps the dev machines running so the developers have systems to work on. :slight_smile:

Let me see if I understand, with a little more context.

The flow will start by getting all the contact data from both platforms.

It then compares the contact data, matching on contact name, creating three buckets of contacts.

  • In HaloPSA only
  • In Google Contacts only
  • In both

Here is the decision maker:

My flow then goes down a branch to add, remove, or update Google Contacts. When this process completes, the data it’s holding in memory is no longer current. So, start over, pulling current data.

If the three compares return no data, then the flow exits.

Does that make sense?

-Russ

Hi Mikhail,

When I’m just starting to develop a flow, it starts out monolithic, and then I split it up. I have another process spread over 12 flows, but some of those I’m treating like a subroutine. For example, the API token for one service expires after 24 hours. One flow checks the age of the token it has; if it’s older than 23 hours, it calls the website to generate a new one.

In theory, I could have many processes accessing the service this is working with. So I put that in its own flow to make it more portable.
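The token-age check described above boils down to a one-liner. As a sketch (the 23-hour threshold comes from the post; the function and parameter names are mine):

```javascript
// Hypothetical check for the token-refresh sub-flow: refresh when the
// stored token is older than 23 hours (the service expires it at 24).
function tokenNeedsRefresh(issuedAtMs, nowMs, maxAgeHours = 23) {
  const ageHours = (nowMs - issuedAtMs) / (1000 * 60 * 60);
  return ageHours > maxAgeHours;
}
```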

But I digress.


I believe I’m asking about the same thing you are suggesting, and I’ve split the process into three workflows. Only Deleting will remain with the main flow as I can’t justify moving one node to a subflow. :open_mouth:

It occurred to me that after the flow creates the contacts in the top line, the data it’s going to use for the middle line (if it goes there) is now out of date. So I want to get the data again.

I gave a more complete description replying to this thread yesterday. If you look at that, please let me know if I’m confused. :slight_smile:

-Russ

Thank you both for the ideas. It looks like it’s not currently possible to do what I want.

When we use the API with Google, the changes are not written to the tenant in real time. It appears there is a several minute delay after the request is sent. I found this out when I accidentally created an infinite loop. There’s also a race condition because Google delays the update.

So, because of this, I don’t think it’s possible to restart and get the updated data without a several minute (10? 20?) delay.

-Russ

You can place a Wait node and wait for the desired amount of time. Or you can run the workflow every X minutes (or every 1, 2, or 3 hours) to sync the changes.

Oh yeah, I do that with other flows. I was just trying to avoid a race condition, and found I created one.

Thanks again.
