Possible race condition with Supabase webhook

Describe the problem/error/question

Hi, I’m fairly certain I’m experiencing a race condition in a workflow triggered by a Supabase webhook. The webhook watches for inserts into my messages table and fires the n8n workflow, which:

  • gets a Supabase row from the table where status = ‘pending’
  • sends the data to the SendGrid node, which sends the email
  • updates the Supabase row with status and time sent
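
In rough pseudo-code, those three nodes boil down to something like this sketch (the table and column names are just from my setup, and sendEmail() is a made-up stand-in for the SendGrid node):

```ts
// Rough sketch of what the workflow does per webhook call, assuming a
// supabase-js client; sendEmail() is a hypothetical stand-in for SendGrid.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

declare function sendEmail(message: Record<string, unknown>): Promise<void>;

export async function handleWebhook(): Promise<void> {
  // 1. Get a row where status = 'pending'
  const { data: rows, error } = await supabase
    .from('messages')
    .select('*')
    .eq('status', 'pending')
    .limit(1);
  if (error || !rows || rows.length === 0) return;
  const message = rows[0];

  // 2. Send the email
  await sendEmail(message);

  // 3. Mark the row as sent
  await supabase
    .from('messages')
    .update({ status: 'sent', sent_at: new Date().toISOString() })
    .eq('id', message.id);

  // Nothing between steps 1 and 3 stops a second, parallel execution from
  // reading the same 'pending' row before it gets marked as sent.
}
```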

This works great with a single insert!

In another post I saw the suggestion to use something like RabbitMQ, but I’m confused about why that would solve the issue. Wouldn’t RabbitMQ receive multiple instances of the same data too?

If anyone has thoughts on how to handle this without adding another piece to my stack, that would be great! I feel like I should be able to tweak my workflow in some way, and I’d appreciate it if you could share your experience or ideas.

What is the error message (if any)?

No error message, but I get multiple copies of each email. I insert a handful of rows at a time to test.

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.56.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker hosted on elestio
  • Operating system: Ubuntu

Well… I spent waaaay too much time on this, so I just put the workflow on a schedule. I tried RabbitMQ, but it didn’t solve my issue.

In a nutshell, if I insert 4 rows into a table that a Supabase webhook is watching for inserts, Supabase will send 4 requests, one per row.
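
If I remember the docs right, each of those requests carries the inserted row in its body, roughly shaped like this (values made up):

```ts
// Approximate shape of one Supabase database-webhook request body for an
// INSERT on the messages table (one request per inserted row).
const examplePayload = {
  type: 'INSERT',
  table: 'messages',
  schema: 'public',
  record: { id: 42, status: 'pending' /* ...other columns... */ },
  old_record: null,
};
```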

The workflow grabs a record with a status of ‘pending’, so if Supabase fires the workflow 4 times in parallel, the same record can be picked up and processed more than once.

It would have been nice to figure out how to stop duplicate IDs from being processed, but I just couldn’t get it done.
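
One way to do that kind of filtering (which I never got working myself) is an atomic “claim”: each execution tries to flip its row from ‘pending’ to ‘sending’ and only carries on if the update actually matched a row. A rough, untested sketch of the idea, again using my table and column names:

```ts
// Rough, untested sketch: atomically "claim" a row before sending, so only
// one of several parallel executions proceeds. The id would come from the
// webhook body (record.id), not from a "get any pending row" query.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

async function claimMessage(id: number): Promise<boolean> {
  // Roughly: UPDATE messages SET status = 'sending'
  //          WHERE id = $1 AND status = 'pending' RETURNING *;
  // Postgres applies the UPDATE atomically, so if four executions race on
  // the same row, only one update matches; the rest get zero rows back.
  const { data, error } = await supabase
    .from('messages')
    .update({ status: 'sending' })
    .eq('id', id)
    .eq('status', 'pending')
    .select();

  if (error) throw error;
  return (data ?? []).length > 0; // true only for the execution that won the claim
}

// Only the execution where claimMessage() returns true would continue on to
// the SendGrid node and the final 'sent' status update.
```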