We’re very excited about the time saved insights on Enterprise! However, we’ve run into some situations that make the metric tricky to calculate accurately.
Workflow challenge 1:
- Workflow contains a process that iterates over records, not just one record.
- If the workflow runs and saves 5 minutes when processing 1 record in the flow, the time savings is 5 minutes.
- If it runs and processes 10 records in the flow, the time savings should actually be 50 minutes. (n * time saved)
- In other words, time saved should be multiplied by the number of records in the execution, not counted once per execution. Alternatively, it could be applied at the node-run level, based on how many times a given node was called.
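To make the expected calculation concrete, here's a minimal sketch (names and the 5-minute figure are from our example above, not anything in the product):

```python
# Assumption: a flat per-record estimate, as in our example above.
TIME_SAVED_PER_RECORD_MIN = 5

def time_saved_per_execution(records_processed: int) -> int:
    """Per-record model: n records * per-record estimate,
    instead of a flat amount counted once per execution."""
    return records_processed * TIME_SAVED_PER_RECORD_MIN

# 1 record processed  -> 5 minutes (matches current per-execution behavior)
# 10 records processed -> 50 minutes (what we'd expect to see tracked)
print(time_saved_per_execution(1))
print(time_saved_per_execution(10))
```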
Workflow challenge 2:
- Given the above limitation, we started exploring modularizing the part of the workflow that processes a given record.
- We split it out into a separate workflow, with the intention of calling it from a parent workflow so it executes (and is tracked) once per record.
- The challenge is that time saved does NOT apply to sub-workflow executions, so the calculation is never tracked.
Workflow challenge 3:
- In order to get a parent workflow to trigger a sub-workflow execution that actually tracks time, we decided to call it via a webhook.
- This adds overhead, complexity, and confusion just to work around the time-tracking issue.
- Beyond that, when we push changes to GitHub and pull them into our prod instance, the base path of the webhook URL in the parent workflow's call does not change. It keeps the dev environment URL instead of updating to the prod environment's webhook URL.
- Our prod environment is read-only, so users can't simply update the URL once the workflow reaches prod.
- The one workaround we found is to use variables as the base path for webhook URLs, but as you can imagine, this gets messy and is confusing for our users.
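For anyone curious, the variable workaround looks roughly like this: define an instance-level variable per environment (the name `WEBHOOK_BASE_URL` and the path here are just placeholders we made up), then reference it in the parent workflow's HTTP Request node URL via an expression:

```
{{ $vars.WEBHOOK_BASE_URL }}/webhook/process-record
```

In dev the variable holds the dev base URL, in prod the prod one, so the pulled-in workflow resolves correctly in each environment. It works, but every user has to know the convention exists.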
Solutions I think could work:
- Allow us to choose whether sub-workflow executions count toward time saved, instead of defaulting to NOT tracking them
- Allow us to mark a node with a “multiplier” so time saved is multiplied by the number of times that node runs
Has anyone faced similar? Any other workarounds in the meantime?