I’m seeing executions that remain in a running state indefinitely even though the worker pod was terminated. Redis still holds the job. Is this a missing graceful shutdown hook?
Hi @Oluwakorede_Ojeyinka Welcome to the community!

It sounds like your workers aren't fully configured for queue mode. Make sure `EXECUTIONS_MODE=queue` is set on the worker processes as well, not just the main instance, so they participate in queue mode properly and can shut down gracefully. Otherwise they're most likely being killed abruptly and leaving their jobs behind in Redis.
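As a rough sketch of what that looks like in a Docker Compose setup (service and host names here are illustrative, not taken from your deployment), the worker service needs the same queue-mode variables as the main instance:

```yaml
# Illustrative compose fragment; service name and Redis host are assumptions
worker:
  image: n8nio/n8n
  command: worker
  environment:
    - EXECUTIONS_MODE=queue          # worker must also run in queue mode
    - QUEUE_BULL_REDIS_HOST=redis    # assumed Redis hostname
```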
Yeah, this is a known issue with queue mode: if the worker is killed hard (SIGKILL), it never gets a chance to clean up its job in Redis, so the execution sits in "running" forever. Make sure your Kubernetes config gives workers enough time to finish in-flight work, with a long enough `terminationGracePeriodSeconds`, and that n8n actually receives SIGTERM rather than an immediate SIGKILL. Also set `EXECUTIONS_TIMEOUT` and `EXECUTIONS_TIMEOUT_MAX` (both in seconds) so stuck executions eventually get cleaned up automatically.
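A minimal sketch of the relevant Kubernetes settings, assuming a dedicated worker Deployment (all names and values below are illustrative, not from the original post):

```yaml
# Illustrative worker Deployment fragment; names and values are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  template:
    spec:
      # Give the worker time to drain in-flight executions after SIGTERM
      # before Kubernetes escalates to SIGKILL
      terminationGracePeriodSeconds: 300
      containers:
        - name: n8n-worker
          image: n8nio/n8n
          command: ["n8n", "worker"]
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: EXECUTIONS_TIMEOUT       # soft timeout, in seconds
              value: "3600"
            - name: EXECUTIONS_TIMEOUT_MAX   # hard upper limit, in seconds
              value: "7200"
```

The key interaction is that `terminationGracePeriodSeconds` only helps if the process handles SIGTERM; if something in between (a shell wrapper, an init system) swallows the signal, the worker still dies hard and the Redis job is orphaned.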