MS SQL - Deadlock when issuing an update query

Hi n8n community.

I have a workflow where I pull events from an MS SQL table, create events in the users' Google Calendars, and then write the Google Calendar event ID back to the same SQL table using the update operation of the MS SQL node. Typically it processes over 1,000 events each time it runs.

The update step almost always errors/fails due to SQL deadlocks. I am keen to know if I can adjust the node somehow to slow it down. My guess is that the node issues a separate call for each update and somehow doesn't allow enough time for the previous update call to finish.

The workflow works most of the time if the record count is significantly lower (e.g. 50 records).
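To illustrate my guess about the separate calls: I assume the update step effectively ends up doing something like the sketch below, one UPDATE per incoming item (the table, column, and connection details here are made up, this is not the node's actual code):

```js
// Not the MS SQL node's actual code – just a sketch of what a per-item update
// amounts to with the mssql library. Table/column/connection names are made up.
const sql = require('mssql');

async function writeBackEventIds(items) {
  const pool = await sql.connect({
    server: 'my-sql-host',
    database: 'EventsDb',
    user: 'n8n',
    password: '***',
    options: { trustServerCertificate: true },
  });

  for (const item of items) {
    // One UPDATE statement per item – with 1000+ items these pile up quickly
    // and compete for locks with whatever else is touching the table.
    await pool
      .request()
      .input('gcalId', sql.NVarChar, item.googleCalendarEventId)
      .input('id', sql.Int, item.eventId)
      .query('UPDATE dbo.Events SET GoogleCalendarEventId = @gcalId WHERE EventId = @id');
  }

  await pool.close();
}
```

If that is roughly what's happening, I'd expect slowing things down or chunking the updates to help.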

Any ideas/advice welcome.
TIA
Matt

ERROR: Transaction (Process ID 51) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Details

Stack

RequestError: Transaction (Process ID 51) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
    at handleError (/usr/local/lib/node_modules/n8n/node_modules/mssql/lib/tedious/request.js:374:15)
    at Connection.emit (events.js:315:20)
    at Parser.<anonymous> (/usr/local/lib/node_modules/n8n/node_modules/tedious/lib/connection.js:832:12)
    at Parser.emit (events.js:315:20)
    at Parser.<anonymous> (/usr/local/lib/node_modules/n8n/node_modules/tedious/lib/token/token-stream-parser.js:37:14)
    at Parser.emit (events.js:315:20)
    at addChunk (/usr/local/lib/node_modules/n8n/node_modules/readable-stream/lib/_stream_readable.js:298:12)
    at readableAddChunk (/usr/local/lib/node_modules/n8n/node_modules/readable-stream/lib/_stream_readable.js:280:11)
    at Parser.Readable.push (/usr/local/lib/node_modules/n8n/node_modules/readable-stream/lib/_stream_readable.js:241:10)
    at Parser.Transform.push (/usr/local/lib/node_modules/n8n/node_modules/readable-stream/lib/_stream_transform.js:139:32)

Hey @Mr_Matt, you can certainly reduce the amount of data a node processes at a time. Here's an example for a very different node, but the idea (SplitInBatches + possibly a Wait node) should work nevertheless:
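In plain code the idea boils down to something like the sketch below: process the items in small chunks and pause between chunks so the table isn't hit with 1,000+ updates back to back. The batch size, the pause length, and the `updateItem` callback are placeholders for illustration only, not n8n internals:

```js
// Sketch of the SplitInBatches + Wait pattern in plain JavaScript.
// BATCH_SIZE, PAUSE_MS and updateItem() are placeholders for illustration only.
const BATCH_SIZE = 50;
const PAUSE_MS = 2000;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function updateInBatches(items, updateItem) {
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);

    // Run the updates for this batch (updateItem would issue one UPDATE per row).
    for (const item of batch) {
      await updateItem(item);
    }

    // Give the database a breather before the next batch – this is what the
    // Wait node would do in the workflow version.
    if (i + BATCH_SIZE < items.length) {
      await sleep(PAUSE_MS);
    }
  }
}
```

In the workflow itself you'd do roughly the same thing with nodes: a SplitInBatches node with a small batch size feeding your MS SQL update node, optionally followed by a Wait node, with the output looped back into SplitInBatches until all batches are done.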

Let me know if you run into any problems with this!


Thanks @MutedJam. That worked a treat.


Awesome, glad to hear this worked for you and many thanks for confirming 🙂