High Concurrency PostgreSQL UPSERT Deadlocks & Connection Pool Exhaustion in n8n

I’m designing a high-throughput data ingestion workflow in n8n using the PostgreSQL node. The workflow processes 10k+ records per minute and performs batch inserts with ON CONFLICT DO UPDATE (UPSERT). I’m running into deadlocks, connection pool exhaustion, and occasional duplicate writes when multiple executions run concurrently. I’m also seeing performance drops when transactions stay open too long.
I’ve tried batching and limiting concurrency, but the issues still happen under load.

Describe the problem/error/question

What would be the best way to handle transactional integrity, retries on deadlock, and connection pooling in n8n for this kind of workload?

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @Keira_Becky

Every workflow execution runs independently. So if several executions fire at the same time, each one opens its own connection and transaction to PostgreSQL. When they all try to UPSERT at once, they start competing for the same rows — that’s where the deadlocks and connection exhaustion come from.
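One common way to reduce those row-lock collisions is to sort each batch by its conflict key before the UPSERT, so every concurrent transaction acquires row locks in the same order instead of opposite orders (the classic deadlock pattern). A minimal sketch of building such a query in a Code node — the table `events` and columns `id`/`payload` are hypothetical, not from the original workflow:

```javascript
// Sketch: sort each batch by its conflict key before UPSERT so that
// concurrent transactions acquire row locks in a consistent order.
// Table/column names (events, id, payload) are illustrative assumptions.
function buildOrderedUpsert(records) {
  // A consistent lock order prevents two transactions from locking
  // overlapping rows in opposite order, which is what deadlocks.
  const sorted = [...records].sort((a, b) =>
    a.id < b.id ? -1 : a.id > b.id ? 1 : 0
  );

  const values = [];
  const placeholders = sorted.map((r, i) => {
    values.push(r.id, r.payload);
    return `($${2 * i + 1}, $${2 * i + 2})`;
  });

  const sql = `
    INSERT INTO events (id, payload)
    VALUES ${placeholders.join(', ')}
    ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload`;

  return { sql, values };
}
```

The sorting step is cheap relative to the round trip, and it turns "two batches touch the same rows" from a deadlock into plain lock waiting.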

n8n isn’t doing anything “wrong” — it’s just doing exactly what it’s told, in parallel.

The fix is simple:

  • Slow down concurrency
  • Use smaller batches
  • Add retries for deadlock errors
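Retries for deadlock errors can be sketched as a small wrapper in a Code node. PostgreSQL reports a deadlock with SQLSTATE 40P01 and aborts one of the competing transactions, which is safe to retry; `runBatch` here is a hypothetical function that executes one UPSERT transaction:

```javascript
// Sketch: retry a batch only on deadlock errors (SQLSTATE 40P01),
// with exponential backoff plus jitter so retries don't collide again.
// `runBatch` is a hypothetical function running one UPSERT transaction.
async function withDeadlockRetry(runBatch, maxAttempts = 5) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await runBatch();
    } catch (err) {
      // 40P01 = deadlock_detected. Anything else is a real error.
      if (err.code !== '40P01' || attempt >= maxAttempts) throw err;
      const delay = Math.min(100 * 2 ** attempt, 2000) + Math.random() * 50;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Only retry the error codes you expect; blindly retrying every failure can turn a duplicate-write bug into a silent one.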

At this scale, stability usually comes from controlling how fast n8n runs, not just from tuning the database.
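Smaller batches also keep each transaction short, which helps both lock contention and connection hold time. A minimal chunking helper for a Code node might look like this — the batch size of 200 is an illustrative assumption, not a recommendation for this workload:

```javascript
// Sketch: split incoming items into fixed-size chunks so each
// transaction stays short. The default size of 200 is illustrative.
function chunk(items, size = 200) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```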


Okay thanks @Niffzy

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.