URGENT: n8n Workflow Deadlock Causing Production Outage

I am facing a critical issue in a high-volume n8n workflow that is disrupting our order processing system:
Workflows suddenly hang in the running state for a long time and then fail with this error:

PostgreSQL Error: SQLSTATE[55P03] - could not obtain lock on row in relation "orders"

I checked the n8n logs, but they only show a generic lock timeout.

I also simplified it to a test workflow with just the Update node, and the problem still persists.

How can we immediately unlock the stuck workflows without restarting n8n, and what is the proper way to handle row-level locking for high-throughput workflows?

Hi, I think you should first clear the existing deadlocks:

SELECT pg_terminate_backend(pid)
FROM pg_locks
WHERE relation = 'orders'::regclass
  AND mode = 'RowExclusiveLock'
  AND granted = true
  AND pid <> pg_backend_pid();  -- don't terminate your own session

Run the above in PostgreSQL to kill the sessions holding the locks.
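
If you want to see what is blocking what before terminating anything, a query along these lines (just a sketch, assuming PostgreSQL 9.6+ for pg_blocking_pids) lists each waiting session together with the session blocking it:

-- show blocked sessions and the sessions blocking them
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;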

Secondly, go to the Executions list in the n8n UI and cancel all stuck executions manually.
Also set a lock timeout so the query fails fast instead of hanging. SET LOCAL only takes effect inside a transaction, so rather than returning it from a separate Function node, run it in the same query as the update in your PostgreSQL node:

SET LOCAL lock_timeout = '5s';
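
For example, the full query in the PostgreSQL node could look like this (a sketch only, the table columns and values are hypothetical, adjust them to your real update):

BEGIN;
SET LOCAL lock_timeout = '5s';  -- give up after 5s instead of hanging
UPDATE orders
SET status = 'processed'        -- hypothetical column and value
WHERE id = 123;                 -- hypothetical row id
COMMIT;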

Note that the above is just a temporary fix since it is urgent!
But you can also go with the below, which should hold up longer:

ALTER DATABASE your_db_name SET lock_timeout = '2s';
ALTER ROLE n8n_user SET statement_timeout = '5s';
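
These settings only apply to new connections, so once n8n reconnects you can double-check them from a fresh session (optional, just a sanity check):

SHOW lock_timeout;
SHOW statement_timeout;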

Please try the above and give some feedback if it works.

I think it's working for now. Since it's only temporary I'm still curious about the proper long-term approach, but it's working fine now, thanks :blush:

Alright, for now I think the best solution is for you to monitor lock waits and add some alert rules.
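
Something like this, run on a schedule, could feed an alert rule (just a sketch; the threshold and how you export the number depend on your monitoring setup):

-- number of sessions currently waiting on a lock
SELECT count(*) AS waiting_sessions
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';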

Thank you!!!