Best practices for concurrent queries

I’m noticing some failures in my executions, and I believe I’ve traced them back to a tricky issue.

My flow is triggered by a webhook for a new contact in HubSpot. The purpose is to update some properties on that contact with n8n. I have no problem with n8n and HubSpot themselves. The issue is that one of the properties is a unique code stored in our database. So in my flow, after the webhook, I run a Postgres query to get a code that hasn’t been issued yet (based on its date_used column being empty).

But what I think is happening is that when multiple webhooks come in at the same time, this DB query runs in two (or more) executions at once and returns the same unique code, which then gets saved to more than one contact. We don’t want that.

Is there a way to space out the executions so that one has to complete before the next one in the queue starts? Right now my query just grabs the unused code with the highest ID. I could make the selection random instead, but there could still be collisions.
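For what it’s worth, this kind of race can also be closed at the database level rather than by serializing executions: claim the code atomically in a single statement so two concurrent executions can never read the same unclaimed row. A minimal sketch, assuming a hypothetical `codes` table with `id`, `code`, and `date_used` columns (names are illustrative, not the poster’s actual schema):

```sql
-- Atomically select and mark the highest-ID unused code in one
-- statement. FOR UPDATE SKIP LOCKED makes a concurrent execution
-- skip any row another transaction has already locked, so no two
-- executions can claim the same code.
UPDATE codes
SET date_used = NOW()
WHERE id = (
    SELECT id
    FROM codes
    WHERE date_used IS NULL
    ORDER BY id DESC
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING code;
```

With this shape, the Postgres node both fetches and reserves the code in one round trip, so the separate “get code” and “mark used” steps can’t interleave across executions.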

Any thoughts on how to approach this?

I would send all requests straight to a queue like RabbitMQ and then process them one by one from that queue. The RabbitMQ trigger has a parameter for choosing the number of parallel executions, so for your use case, set it to 1.


Thanks! I think that’s the approach I need to go with.


I ended up using Hookdeck for this and have been really happy with their service thus far. It can queue the webhooks it receives and then slowly deliver them with whatever time spacing you need. I started with 15–20 seconds, and that ensures n8n never drops anything.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.