Prevent worker from picking up new workflows during graceful shutdown

Is there any way to prevent a worker from picking up new workflows while it waits for active executions to finish during the shutdown grace period?

Background: We are running n8n version 1.16.0 in queue mode, deployed on Kubernetes with workers scaled by KEDA using Redis triggers. The number of active workflows varies a lot, so the number of workers scales up and down to compensate. The problem occurs when scaling the workers down. Since a few of our workflows are very time-consuming, I’ve configured a long grace period to let them run to completion, but while waiting for the active executions to finish, the worker keeps picking up new workflow executions until the grace period ends. As a result, termination almost always takes the entire grace period and executions still end up being terminated prematurely (they are picked up again by the remaining workers as expected, but it seems unnecessary and somewhat defeats the purpose of a graceful shutdown if it just means I wait longer before executions are cut off). A rough sketch of the behaviour I was expecting is included after the log excerpt below.

2023-11-22T08:34:57.728Z | info     | Waiting for 2 active executions to finish... (wait 13 more seconds) "{ file: 'worker.js', function: 'stopProcess' }"
2023-11-22T08:34:55.728Z | info     | Waiting for 2 active executions to finish... (wait 15 more seconds) "{ file: 'worker.js', function: 'stopProcess' }"
2023-11-22T08:34:55.190Z | verbose  | Workflow execution started "{\n  workflowId: '232',\n  file: 'LoggerProxy.js',\n  function: 'exports.verbose'\n}"
2023-11-22T08:34:55.182Z | info     | Start job: 611 (Workflow ID: 232 | Execution: 101209) "{ file: 'worker.js', function: 'runJob' }"
2023-11-22T08:34:53.728Z | info     | Waiting for 1 active executions to finish... (wait 17 more seconds) "{ file: 'worker.js', function: 'stopProcess' }"
2023-11-22T08:34:51.727Z | info     | Waiting for 1 active executions to finish... (wait 19 more seconds) "{ file: 'worker.js', function: 'stopProcess' }"
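
For reference, this is roughly the shutdown behaviour I was expecting — a sketch only, not n8n’s actual implementation. The queue name 'jobs', the Redis settings, and the grace period value are assumptions for illustration; the idea is simply to pause the local worker first so no new jobs are picked up, and only then wait for its active executions to drain.

```typescript
// Illustrative sketch only -- queue name, Redis settings, and grace period
// are assumptions, not n8n's actual internals.
import Bull from 'bull';

const GRACE_PERIOD_SECONDS = 300;

const queue = new Bull('jobs', { redis: { host: 'redis', port: 6379 } });

// Track how many executions THIS worker is currently running.
let activeExecutions = 0;

queue.process(10, async (job) => {
  activeExecutions++;
  try {
    // ... run the workflow execution for this job ...
  } finally {
    activeExecutions--;
  }
});

async function stopProcess(): Promise<void> {
  // 1. Stop picking up new jobs on this worker only; leave the jobs that
  //    are already active running (local pause, don't wait for them here).
  await queue.pause(true, true);

  // 2. Wait for the in-flight executions to drain, up to the grace period.
  const deadline = Date.now() + GRACE_PERIOD_SECONDS * 1000;
  while (activeExecutions > 0 && Date.now() < deadline) {
    console.log(`Waiting for ${activeExecutions} active executions to finish...`);
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }

  // 3. Close the queue connection and exit.
  await queue.close();
  process.exit(0);
}

process.on('SIGTERM', () => void stopProcess());
```

With the local pause happening before the wait loop, the grace period only has to cover the executions that were already in flight when the pod received SIGTERM.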

Information on your n8n setup

  • n8n version: 1.16.0
  • Database (default: SQLite): postgresdb
  • n8n EXECUTIONS_PROCESS setting (default: own, main): N/A
  • Running n8n via (Docker, npm, n8n cloud, desktop app): k8s (image n8nio/n8n:1.16.0)
  • Operating system: Alpine Linux
  • n8n EXECUTIONS_MODE: queue


Hi @Eri, welcome to the community!

This seems like a solid improvement idea for the worker behaviour. @krynble is this already on the roadmap by any chance (or did I miss something and this can already be configured)?

This sounds like a regression. The code to stop accepting new jobs during shutdown is there, but it sounds like it’s not working as expected.


Hello,

I have a question about workers as well. Is it possible to disable a worker before restarting it, meaning it keeps running but no longer picks up any jobs? That way we could wait until all of the workflows the worker is currently handling have finished before restarting it.

I don’t know if it’s the case for everyone, but when I start my n8n instance each worker uses approximately 120 MB of memory; after running for a while (4–5 days), each worker uses 700–800 MB (sometimes more).

That’s why I would like to restart them periodically, but I’d like a clean procedure that ensures I’m not killing running jobs.
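
Something along these lines is what I have in mind — a rough sketch only, reusing the same illustrative Bull setup as the sketch above (the queue name, memory threshold, and drain logic are assumptions, not actual n8n configuration). The worker watches its own memory and, once it crosses a threshold, stops taking new jobs, waits for its active jobs to finish, and exits so the orchestrator can start a fresh one.

```typescript
// Illustrative sketch only -- the queue setup, memory threshold, and drain
// logic are assumptions, not actual n8n configuration.
import Bull from 'bull';

const MEMORY_LIMIT_BYTES = 700 * 1024 * 1024; // drain once RSS passes ~700 MB
const CHECK_INTERVAL_MS = 60_000;

const queue = new Bull('jobs', { redis: { host: 'redis', port: 6379 } });

let activeJobs = 0;
queue.process(10, async () => {
  activeJobs++;
  try {
    // ... run the workflow execution ...
  } finally {
    activeJobs--;
  }
});

let draining = false;

async function drainAndExit(): Promise<void> {
  if (draining) return;
  draining = true;

  // Stop picking up new jobs on this worker; keep the active ones running.
  await queue.pause(true, true);

  // Wait until every job this worker started has finished.
  while (activeJobs > 0) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }

  await queue.close();
  // Exit cleanly; the orchestrator (e.g. Kubernetes) starts a fresh worker.
  process.exit(0);
}

setInterval(() => {
  if (process.memoryUsage().rss > MEMORY_LIMIT_BYTES) {
    void drainAndExit();
  }
}, CHECK_INTERVAL_MS);
```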
