I created a flow that starts with a webhook. The webhook receives data and updates a Google Sheet; this triggers calculations in the sheet, and at the end of the n8n workflow the results are sent back once the calculation is done.
The problem is that the Google Sheet calculation takes some time to compute. So when I send multiple calls to the webhook, the sheet doesn't have time to finish computing before the results are sent back.
Is it possible to wait for the n8n flow to finish before processing the next webhook call?
(I tried with Split In Batches, but it doesn't work: each webhook call is only one request, so there is nothing to split.)
It might be best to have one workflow that simply writes the data directly somewhere like a Sheet or a DB, and then a second workflow that runs on a cron schedule every few minutes and processes all of that data at once.
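To illustrate the idea, here is a sketch of what that cron workflow would do: drain everything that accumulated since the last run and process it in one sequential pass. The row shape and the doubling "calculation" are placeholders, not the real sheet logic:

```javascript
// Hypothetical store of rows written by the webhook workflow.
const pendingRows = [
  { id: 1, value: 10 },
  { id: 2, value: 20 },
  { id: 3, value: 30 },
];

// Called by the cron trigger every few minutes.
function processPending(rows) {
  const drained = rows.splice(0, rows.length); // take everything at once
  const results = [];
  for (const row of drained) {
    // Placeholder for "run the Google Sheet calculation for this row".
    results.push({ id: row.id, result: row.value * 2 });
  }
  return results;
}

const results = processPending(pendingRows);
console.log(results.length);     // 3
console.log(pendingRows.length); // 0, the backlog is empty after the run
```

Because only one cron run processes the backlog at a time, the sheet calculations happen sequentially even when the webhook received many calls at once.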
Thanks for the answer Harshil, but it won't work.
Since the Wait node delay is the same for every call, if I get 4 calls at the same time they will all wait 5 seconds (for example) and trigger all together. This leads to the same issue.
I would like to be able to store them in a bucket and trigger them one by one, for example.
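The "bucket" idea can be sketched as a simple serial queue. This is an illustrative in-memory version only; in n8n itself you would need a shared store, since each webhook call runs in its own execution:

```javascript
// A queue that processes items strictly one at a time, in arrival order.
class SerialQueue {
  constructor() {
    this.items = [];      // waiting calls
    this.busy = false;    // guard: is a drain already in progress?
    this.processed = [];  // just for demonstration
  }
  push(item) {
    this.items.push(item);
    this.drain();
  }
  drain() {
    if (this.busy) return; // someone is already working through the queue
    this.busy = true;
    while (this.items.length > 0) {
      const item = this.items.shift();
      // Placeholder for the real per-call work
      // (write to the sheet, wait for the calculation, read the result).
      this.processed.push(item);
    }
    this.busy = false;
  }
}

const q = new SerialQueue();
q.push('call-1');
q.push('call-2');
q.push('call-3');
console.log(q.processed.join(',')); // call-1,call-2,call-3
```

The key property is that items are only ever taken from the head of the queue, so calls never overlap no matter how many arrive at once.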
Btw., if you fear that multiple of those cron workflows could run at the same time, you can use the following workflow to first check whether one is already running, and delay the next run until the existing execution has finished:
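A minimal sketch of that check, assuming the endpoint returns JSON of the shape `{ data: [{ id, workflowId, ... }] }`. The exact response shape can differ between n8n versions and hosting setups, so adjust the field names to what your instance actually returns:

```javascript
// Decide whether this execution may proceed, given the response from
// GET /rest/executions-current. We only continue when no *other*
// execution of the same workflow is currently listed.
function shouldRun(response, myWorkflowId, myExecutionId) {
  const others = (response.data || []).filter(
    (e) => e.workflowId === myWorkflowId && e.id !== myExecutionId
  );
  return others.length === 0; // true: safe to continue, false: wait and retry
}

// Example payload with two concurrent executions of workflow "42":
const sample = {
  data: [
    { id: '1001', workflowId: '42' },
    { id: '1002', workflowId: '42' },
  ],
};

console.log(shouldRun(sample, '42', '1001')); // false: 1002 is also running
console.log(shouldRun({ data: [{ id: '1001', workflowId: '42' }] }, '42', '1001')); // true
```

In the workflow this logic would sit in a Function node right after the HTTP Request node, with an IF node routing to a Wait-and-retry loop when it returns false.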
I tried implementing it on my n8n cloud instance.
I then changed `"url": "http://localhost:5678/rest/executions-current"` to my URL, `"https://[hidden].app.n8n.cloud/rest/executions-current"`, but the flow is not valid (see below).
The response from the HTTP node doesn't have the same info as with the local URL; it's just a list of text. How can I get the right info?
It doesn't work in my case. The workflow executes so fast that all instances see multiple executions in progress. My goal was to run the first iteration and ignore the following ones, but with this system all iterations are ignored. It would be so much easier if parallel execution could be prevented at the n8n level.
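The underlying problem is that "check, then run" is two separate steps, so two executions can both pass the check before either of them shows up as running. A real fix needs an atomic test-and-set lock in a shared store (for example Redis `SET key value NX`, or a unique-constraint insert in a DB). A sketch of the idea, with an in-memory flag standing in for that shared store:

```javascript
// In-memory stand-in for a shared lock (Redis, DB row, etc.).
const lock = { held: false };

function tryAcquire(l) {
  if (l.held) return false; // someone else got there first
  l.held = true;            // test and set happen as one uninterrupted step
  return true;              // (single-threaded JS makes this atomic here)
}

function release(l) {
  l.held = false;
}

// Four "simultaneous" triggers: only the first one wins, the rest are skipped.
const outcomes = ['call-1', 'call-2', 'call-3', 'call-4'].map((name) =>
  tryAcquire(lock) ? `${name}: run` : `${name}: skipped`
);
release(lock);
console.log(outcomes.join(', '));
// call-1: run, call-2: skipped, call-3: skipped, call-4: skipped
```

Because acquiring the lock is a single atomic operation, it cannot be passed by two executions at once, which is exactly what the read-the-executions-list approach fails to guarantee.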