Error: Missing lock for job failed

Describe the problem/error/question

Since updating to the latest version, 1.32.2, I am seeing failures in different workflows…

And I see these errors in the logs:

[screenshots of the worker log errors]

What is the error message (if any)?

Also this:

execution:
id: 3713632
url: https://n8n.xxxxxxxxx.com/workflow/V8YXNY9tAQ1eKox1/xxxxxxxxxxxxxx
error:
message: Cannot read properties of undefined (reading 'node')
stack: Error: Cannot read properties of undefined (reading 'node')
at Queue.onFailed (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/job.js:516:18)
at Queue.emit (node:events:529:35)
at Queue.emit (node:domain:489:12)
at Object.module.exports.emitSafe (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/utils.js:50:20)
at EventEmitter.messageHandler (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/queue.js:476:15)
at EventEmitter.emit (node:events:517:28)
at EventEmitter.emit (node:domain:489:12)
at DataHandler.handleSubscriberReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:80:32)
at DataHandler.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:47:18)
at JavascriptRedisParser.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:21:22)
at JavascriptRedisParser.execute (/usr/local/lib/node_modules/n8n/node_modules/redis-parser/lib/parser.js:544:14)
at Socket.<anonymous> (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:25:20)
at Socket.emit (node:events:517:28)
at Socket.emit (node:domain:489:12)
at addChunk (node:internal/streams/readable:368:12)
at readableAddChunk (node:internal/streams/readable:341:9)
mode: webhook

Information on your n8n setup

  • n8n version: 1.32.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Railway + Docker image
    I run a main instance plus 15 workers (concurrency 10 each) on a dedicated instance with 32 vCPU / 32 GB RAM
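For context, a fleet like this is started with n8n's queue mode. A minimal sketch of the worker side, assuming Redis and Postgres connection settings are already supplied via the environment (the concurrency flag matches the setup described above):

```shell
# Queue mode must be enabled on the main instance and every worker;
# workers then pull executions from the shared Redis-backed Bull queue.
# (Assumption: QUEUE_BULL_REDIS_* and DB_POSTGRESDB_* vars are set elsewhere.)
export EXECUTIONS_MODE=queue
n8n worker --concurrency=10   # run 15 of these for ~150 total concurrency
```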

Hey @yukyo,

Did you update all of your instances at the same time?

Yeah, as always…

What are the errors you see in the UI? I suspect the lock messages are going to be around timeouts when talking to Redis.

Have you tried tweaking QUEUE_WORKER_LOCK_DURATION and QUEUE_WORKER_LOCK_RENEW_TIME already?
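For anyone following along: these are environment variables on the workers. A sketch with illustrative values, not recommendations (to my knowledge the defaults are 30000 ms and 15000 ms respectively; confirm against your n8n version's docs):

```shell
# Lengthen the Bull job lock and renew it well before it expires, which can
# help when Redis round-trips are slow enough that locks lapse mid-execution.
export QUEUE_WORKER_LOCK_DURATION=60000    # ms a worker may hold a job's lock
export QUEUE_WORKER_LOCK_RENEW_TIME=30000  # ms between lock renewals
```

The renew time should stay comfortably below the lock duration, otherwise the lock can expire before the worker renews it and the job is treated as stalled.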

Also, which version did you upgrade from, and is your Redis instance looking healthy?