Rapid webhooks querying remote SQL too fast

Hi

I have a workflow set up that queries a remote MySQL database for the last available value in a certain column.
My problem is that I have around 200+ webhooks coming in almost simultaneously, and the result is that nearly all of the executions return the same ‘last value’.

So what I thought of doing was generating a random number and passing that value to a Wait node from a Function node like this:
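
(A minimal sketch of that Function node; the original snippet was shared as an image, so the 1–10 second range below is an assumption.)

```js
// n8n Function node: pick a random delay for the following Wait node.
// The 1–10 second range is an assumption; the original values were not shown.
const waitSeconds = Math.floor(Math.random() * 10) + 1;

return [
  {
    json: {
      ...items[0].json,
      // Reference this in the Wait node as {{ $json["waitSeconds"] }}
      waitSeconds,
    },
  },
];
```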

My problem is that I am still getting a few instances where the “last value” is the same. So I added a number of these two nodes at different points in my workflow, and I am still getting one or two identical last values.

Is there a better way of doing this?

Hey @Surge.Media,

I think what I would do is look at making a queue system for the jobs. When the webhook is called, check a database to see if there is anything in the queue: if it is empty, run a sub-workflow that does the work; if there are items waiting, add $resumeWebhookUrl to the database and wait for a webhook call.

At the end of the workflow that does the work, you can check the same database, pull the webhook URL from it, and trigger it with the HTTP Request node. I have not tried to set up something like this, but in theory it should work with a bit of tweaking.
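
As a rough, untested sketch of that queue check (the job_queue table and column names are invented for illustration), the Function node logic could look like this:

```js
// Sketch of the queue check described above (table/column names invented).
// Assume a MySQL node just before this one ran:
//   SELECT COUNT(*) AS pending FROM job_queue;
const pending = items[0].json.pending;

if (pending === 0) {
  // Nothing queued: this execution can run the sub-workflow immediately.
  return [{ json: { action: 'runNow' } }];
}

// Jobs are already waiting: hand back the resume URL so it can be written to
// the database, then pause on a Wait node set to "On webhook call".
// resumeUrl is assumed to have been set by an earlier Set node using the
// $resumeWebhookUrl expression mentioned above.
return [{ json: { action: 'enqueue', resumeUrl: items[0].json.resumeUrl } }];
```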

I could be missing an easier solution, so it will be interesting to see what the community can come up with :smile:

I built a system where I create a record in my webhook table for every request that comes in and return the id. The (FileMaker) database then processes the requests one by one and writes a record into a results table with the webhook id and the results. Just my 2 cents …
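
For illustration, the webhook side of this pattern might look something like the following sketch (the webhook_requests table and column names are invented):

```js
// Sketch of the webhook side of this pattern (table/column names invented).
// Assume a MySQL node just before this one ran:
//   INSERT INTO webhook_requests (payload) VALUES ('{{ $json["body"] }}');
//   SELECT LAST_INSERT_ID() AS id;
const id = items[0].json.id;

// Return the id to the caller; the database processes requests one by one
// and writes into a results table keyed on this webhook id.
return [{ json: { webhookId: id } }];
```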

Thanks to both @Jon and @dickhoning for your answers. I am fairly new to n8n and do not fully understand your implementation suggestions.

Is there any chance you could elaborate further on the implementation here?

Thanks in advance

Hi @Surge.Media, the main concept of my approach is to make the process asynchronous. In other words: have one part (/database) handle all the requests one by one, store the results (in another database/table), and finally have another process check/fetch the results. But perhaps I’m giving the wrong advice, as I may not fully understand your requirements yet. Perhaps you can share your workflow?
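
Continuing the sketch above, the final check/fetch step could be a small poll against a results table (again with invented names):

```js
// Sketch of the "fetch the results" step (names continue the example above).
// Assume a MySQL node just before this one ran:
//   SELECT result FROM webhook_results WHERE webhook_id = {{ $json["webhookId"] }};
if (items.length === 0 || items[0].json.result === undefined) {
  // Not processed yet: route back through a Wait node and poll again.
  return [{ json: { status: 'pending' } }];
}

return [{ json: { status: 'done', result: items[0].json.result } }];
```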

What I was thinking is mostly the same as what @dickhoning is going for. If I get a chance I can work on putting an example together, although I am not sure when that would be.

Hi

I am pasting in the first part of the workflow, before it gets to the SQL node that is doing the queries. As you can see, I am trying my best to randomize things, but even so I am still getting multiple duplicates from my SQL query.

Unfortunately, I am still not following how to implement the proposed solution.

I am bumping this to the top in the hope of finding some kind of solution here, as I am feeling a lot of pain with this :frowning:

Hey @Surge.Media,

Sadly I don’t have an example of building out a queuing system, so it is something that would have to be made.

I am not sure if anyone else has something they can share to do the job. Out of interest, what is triggering the webhook calls?

Thanks @Jon. I understand.

Webhooks are being generated from our Mautic instance inside a campaign.

Hi All,

I am boosting this post as I have yet to solve this issue, and am hoping someone can show me the workflow or how to create a queue of sorts.

I have played with the different solutions, but cannot get my head around exactly how to build them.

So here’s hoping…

Hey @Surge.Media,

What about having the Webhook write to Redis or similar to save the data, then having another workflow on a schedule that checks for any data and processes it? That may get you around the issue.
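
As a standalone sketch of that idea outside n8n (the key name and connection details are made up), the two halves could look like this:

```js
import { createClient } from 'redis';

// Standalone sketch of the Redis queue idea (key name and URL are made up).
// The webhook side pushes each payload onto a list; a scheduled worker pops
// items one at a time, so only one job reads the "last value" at any moment.
const client = createClient({ url: 'redis://localhost:6379' }); // assumed URL

// Webhook side: enqueue the incoming payload.
async function enqueue(payload) {
  await client.lPush('webhook:queue', JSON.stringify(payload));
}

// Scheduled side: drain the queue serially.
async function processQueue(handleJob) {
  let raw;
  while ((raw = await client.rPop('webhook:queue')) !== null) {
    await handleJob(JSON.parse(raw)); // e.g. query MySQL and act on the row
  }
}

async function main() {
  await client.connect();
  await enqueue({ example: true });
  await processQueue(async (job) => console.log('processing', job));
  await client.quit();
}

main().catch(console.error);
```

Because the scheduled side pops one item at a time, only one job would ever be querying the MySQL “last value” at once.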
