N8n + PostgreSQL Exactly Once Without UNIQUE Constraints or External Queues

Hey everyone
I’m building an n8n workflow triggered by a webhook that writes data into PostgreSQL and then processes it further. Under high load, the same payload gets processed more than once when multiple executions run in parallel, even with retries enabled.
Unfortunately I can’t add a UNIQUE constraint to the table because I’m working with a legacy schema.

Describe the problem/error/question

I would love suggestions on how to make sure each payload is processed exactly once with n8n’s execution model (parallel runs + retries + possible restarts) and without relying on Redis, external queues, or UNIQUE constraints.

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

INSERT INTO my_table (id, payload, created_at)
VALUES ({{$json["id"]}}, {{$json["payload"]}}, NOW())
ON CONFLICT (id) DO UPDATE
SET payload = {{$json["payload"]}}, created_at = NOW();

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey there @Roseline

The problem happens because n8n runs multiple workflow executions at the same time. Each one opens a transaction, and when several try to UPSERT the same row, PostgreSQL can deadlock.

You can fix it by locking the row in a short transaction:

BEGIN;

-- Lock the row
SELECT id FROM my_table
WHERE id = {{$json["id"]}}
FOR UPDATE;

-- Insert or update safely
INSERT INTO my_table (id, payload, created_at)
VALUES ({{$json["id"]}}, {{$json["payload"]}}, NOW())
ON CONFLICT (id) DO UPDATE
SET payload = EXCLUDED.payload, created_at = NOW();

COMMIT;

You can also enable a retry in n8n for the node in case a deadlock still occurs. This simple pattern usually keeps workflows safe without changing your legacy schema or needing external queues.
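In n8n the retry is just the node’s “Retry On Fail” setting, but the same retry-with-backoff pattern is easy to sketch in code. This is only an illustration: `DeadlockError` is a placeholder standing in for your driver’s real deadlock exception (e.g. psycopg2’s `DeadlockDetected`), and `flaky_upsert` simulates an operation that deadlocks twice before succeeding.

```python
import time

# Placeholder for the driver-specific deadlock exception
# (e.g. psycopg2.errors.DeadlockDetected).
class DeadlockError(Exception):
    pass

def run_with_retry(op, retries=3, backoff=0.05):
    """Run op(); on DeadlockError, back off exponentially and retry."""
    for attempt in range(retries):
        try:
            return op()
        except DeadlockError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(backoff * (2 ** attempt))

# Simulated operation: deadlocks on the first two calls, then commits.
attempts = {"n": 0}
def flaky_upsert():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("deadlock detected")
    return "committed"

print(run_with_retry(flaky_upsert))  # prints "committed" on the 3rd attempt
```

The key point is that a deadlocked transaction is rolled back by PostgreSQL, so the whole BEGIN…COMMIT block must be re-run as a unit, which is exactly what a node-level retry in n8n does.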


Hello @Roseline,

Since you can’t lock the row (because it might not exist) and you can’t add a unique constraint, you have to lock the ID itself at the application level.

You can use PostgreSQL’s pg_advisory_xact_lock to lock the hash of the ID. This forces one execution to wait for the other, even if the row hasn’t been created yet.
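A minimal sketch of that pattern, assuming the same `my_table` and keeping the n8n expression style from your query (adapt the column list and WHERE logic to your legacy schema):

```sql
BEGIN;

-- Serialize on the ID itself, not the row: hashtext() maps the ID to an
-- integer key, and pg_advisory_xact_lock() blocks until no other
-- transaction holds that key. The lock is released automatically at COMMIT.
SELECT pg_advisory_xact_lock(hashtext({{$json["id"]}}::text));

-- Only one execution per ID can reach this point at a time, so a plain
-- update-then-insert is safe even without a UNIQUE constraint:
UPDATE my_table
SET payload = {{$json["payload"]}}, created_at = NOW()
WHERE id = {{$json["id"]}};

INSERT INTO my_table (id, payload, created_at)
SELECT {{$json["id"]}}, {{$json["payload"]}}, NOW()
WHERE NOT EXISTS (SELECT 1 FROM my_table WHERE id = {{$json["id"]}});

COMMIT;
```

Hash collisions between different IDs are harmless here: they only make two unrelated executions wait on each other, never corrupt data.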

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.