n8n Workers (v1.94.1) Incorrectly Starting Active Workflows in Queue Mode Despite Seemingly Correct Configuration

Hello n8n Community,
I’m encountering a persistent issue with my n8n setup in queue mode using Docker Compose (n8n version 1.94.1). My worker instances (n8n-worker-1, n8n-worker-2) are attempting to start and manage active workflows, a role that should be exclusive to the main n8n instance.
Current Setup:
n8n Version: 1.94.1
Architecture: 1 n8n (main) instance, 2 n8n-worker instances.
Infrastructure: Docker Compose, Redis for queuing, PostgreSQL database.
Key Environment Variables:
n8n (main) instance:
N8N_PROCESS=main
N8N_DISABLE_ACTIVE_WORKFLOWS=false
EXECUTIONS_MODE=queue
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
QUEUE_BULL_REDIS_HOST=redis (and other Redis connection vars)
QUEUE_HEALTH_CHECK_ACTIVE=true
n8n-worker instances (e.g., n8n-worker-1):
N8N_PROCESS=worker
N8N_DISABLE_ACTIVE_WORKFLOWS=true
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis (and other Redis connection vars)
N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
N8N_SKIP_WEBHOOK_REGISTRATION_ON_STARTUP=true
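For clarity, this is roughly how those variables sit in my docker-compose.yml (a trimmed sketch: the image tag is illustrative, n8n-worker-2 is identical to n8n-worker-1, and the Redis/Postgres services plus the remaining variables are omitted):

    services:
      n8n:
        image: n8nio/n8n:1.94.1          # illustrative image/tag
        environment:
          - N8N_PROCESS=main
          - N8N_DISABLE_ACTIVE_WORKFLOWS=false
          - EXECUTIONS_MODE=queue
          - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
          - QUEUE_BULL_REDIS_HOST=redis
          - QUEUE_HEALTH_CHECK_ACTIVE=true
      n8n-worker-1:
        image: n8nio/n8n:1.94.1
        environment:
          - N8N_PROCESS=worker
          - N8N_DISABLE_ACTIVE_WORKFLOWS=true
          - EXECUTIONS_MODE=queue
          - QUEUE_BULL_REDIS_HOST=redis
          - N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
          - N8N_SKIP_WEBHOOK_REGISTRATION_ON_STARTUP=true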
Problem Details & Troubleshooting:
Initial State: Depending on the configuration I tried, either both the main and worker instances were trying to start active workflows, or not even the main instance was.
Progress: After explicitly adding EXECUTIONS_MODE=queue to all n8n instances:
The main n8n instance now correctly shows Start Active Workflows: in its logs and lists the active workflows. This is a positive development.
However, the n8n-worker instances STILL show Start Active Workflows: in their logs and list all active workflows, despite having N8N_PROCESS=worker and N8N_DISABLE_ACTIVE_WORKFLOWS=true.
New Warning in Workers: The worker logs now show a deprecation warning related to OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS, advising that it be set to true. I have confirmed this variable is set to true only on my main n8n instance, not on the workers, so it is strange that the workers are reporting it.
I’ve confirmed via docker exec [container] env that all the above environment variables are correctly set on their respective containers.
The ~/.n8n/config file in the persistent volume is minimal (only contains encryptionKey).
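Concretely, I am checking with something along these lines (the container name here is just an example; I run it against each instance):

    docker exec n8n-worker-1 env | grep -E 'N8N_|EXECUTIONS_|QUEUE_'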

Question:
Despite the main instance now behaving as expected regarding workflow activation, the workers continue to ignore their designated role. Has anyone experienced this specific behavior with n8n version 1.94.1 where workers don’t respect N8N_DISABLE_ACTIVE_WORKFLOWS=true even when EXECUTIONS_MODE=queue is set and N8N_PROCESS=worker is defined?
Could this be a known issue or a specific behavior of version 1.94.1?

Any insights or suggestions would be greatly appreciated!

SOLUTION:

Joffcom: As a starting point, you are setting N8N_PROCESS to worker, which isn't a valid option. I am not sure where you got it from, but I assume you have used AI to try and help. In your Compose file you have also not told the workers to start as workers, so they are running as main instances, which is why you are having this issue. You would need to add command: worker to your compose file.
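For anyone else hitting this, the change is just adding command: worker to each worker service, e.g. (a trimmed sketch using the service name and variables from the setup above):

    n8n-worker-1:
      image: n8nio/n8n:1.94.1
      command: worker                  # runs `n8n worker` instead of the main process
      environment:
        - EXECUTIONS_MODE=queue
        - QUEUE_BULL_REDIS_HOST=redis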

Thank you so much for your previous help! Adding command: worker to my worker services in Docker Compose completely resolved the issue of workflows being triggered multiple times. All instances are now behaving as expected regarding their roles (main starts active workflows, workers process jobs from the queue and do not start active workflows).
I have a follow-up observation regarding concurrency. My setup is:
n8n (main instance): N8N_EXECUTIONS_PROCESS_MAX_CONCURRENT_EXECUTIONS=5
n8n-worker-1: N8N_EXECUTIONS_PROCESS_MAX_CONCURRENT_EXECUTIONS=10 (and command: worker)
n8n-worker-2: N8N_EXECUTIONS_PROCESS_MAX_CONCURRENT_EXECUTIONS=10 (and command: worker)
This gives a theoretical worker processing capacity of 20 concurrent jobs, plus whatever the main instance might handle (though most should be offloaded).
When I monitor the active job locks in Redis using SCAN 0 MATCH bull:jobs:*:lock COUNT 1000, I consistently see around 15 active locks when the system is under load (e.g., multiple users triggering the backend workflows). I haven't observed it going significantly higher, toward the theoretical 20 concurrent jobs the workers should be able to process.
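Concretely, I am watching this with commands along these lines from the host (a sketch: the container name is illustrative, and the key names are simply what I see in my Redis instance):

    # Count the active Bull job locks (same key pattern as the SCAN above).
    docker exec redis redis-cli --scan --pattern 'bull:jobs:*:lock' | wc -l

    # Number of jobs sitting in the wait list, i.e. queued but not yet picked up by a worker.
    docker exec redis redis-cli llen bull:jobs:wait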
The workflows involve calls to LLMs, so they are I/O bound with periods of waiting. The VPS has ample resources (32GB RAM, CPU utilization is moderate). The bull:jobs:wait queue in Redis is sometimes populated, indicating there are jobs waiting to be processed.
Is there any other internal mechanism, configuration, or default behavior in n8n's queue/worker system (using BullMQ) that might be limiting the total number of active jobs processed by workers to around 15 in this scenario, even though individual worker capacity is set higher and there are jobs waiting in the queue? Or is this more likely a sign that the current load naturally settles at that level of concurrency given my present rate of incoming jobs?
I’m trying to understand if I’m hitting an unexpected ceiling or if this is normal behavior given the dynamics of job arrival and processing.
Thanks again for your invaluable assistance!
