Cron trigger executing multiple times after updates due to “ghost triggers” in queue mode with multiple workers

Describe the problem / error / question

I am facing a recurring issue with Cron triggers executing multiple times when running n8n in queue mode with multiple workers.

The problem began several versions ago and still occurs in n8n v1.123.7. So far, I haven’t found any forum post or official guidance that offers a definitive, long-term solution.

The behavior strongly suggests that old Cron schedules remain internally registered even after the workflow is edited or the Cron interval is changed. I refer to these as “ghost cron triggers.”


Important clarification about errors

The errors shown in the execution list are intentional.

I added a manual “lock” inside the workflow to prevent multiple concurrent executions.
When multiple executions start at the same time, the workflow intentionally throws an error to block duplicate runs.

So the errors are a symptom, not the root cause.
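For readers wondering what that lock looks like conceptually, here is a minimal fail-fast sketch. All names are illustrative, not from the original workflow, and an in-memory dict stands in for the shared store so the example is self-contained:

```python
import time

# Hypothetical sketch of the "manual lock" idea described above. The real
# lock lives inside the n8n workflow and would need a shared store (such as
# Redis); here a plain dict stands in so the logic is self-contained.
_locks = {}  # workflow_id -> lock expiry timestamp

def acquire_lock(workflow_id, ttl_seconds=300):
    """Return True if this run may proceed, False if a run already holds the lock."""
    now = time.time()
    expires_at = _locks.get(workflow_id)
    if expires_at is not None and expires_at > now:
        return False  # a concurrent execution already holds the lock
    _locks[workflow_id] = now + ttl_seconds
    return True

def run_workflow(workflow_id):
    # Failing fast here is what produces the "intentional" errors in the
    # execution list: duplicates error out immediately instead of doing work.
    if not acquire_lock(workflow_id):
        raise RuntimeError(f"duplicate execution blocked for {workflow_id}")
    return "executed"
```

In a real queue-mode deployment the lock store must be shared across instances (e.g. a Redis SET with NX and EX), since each worker process has its own memory.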


What actually happens

  • The workflow has only one trigger, a single Cron node.

  • Initially, the Cron is configured to run, for example, every 5 minutes.

  • After some time — usually after an update or redeploy — the workflow starts executing multiple times at the exact same second.

  • The number of executions is exactly the same as the number of workers.

  • When I change the Cron interval (for example, from every 5 minutes to every 1 minute):

    • the new 1-minute schedule works correctly

    • but the old 5-minute executions keep running

  • If I remove one worker, exactly one execution disappears, which shows a direct 1:1 relationship between workers and duplicated executions.


When the issue happens

  • After updating n8n

  • After redeploying the stack

  • After restarting the environment

  • After editing an existing Cron schedule

Once it starts, it does not stop by itself.


What I already tested

  • Ensured Cron is not running on workers

    EXECUTIONS_MODE=queue
    N8N_ROLE=worker
    
    
  • Confirmed there is only one main instance

  • Restarted the environment in a controlled way

  • Verified the workflow has only one trigger

  • Verified no automatic retries are enabled

The issue still persists.


Temporary workaround

The only workaround I found is:

  1. Duplicate every workflow that contains a Cron trigger

  2. Delete the original workflow

This creates a new workflow ID, and the problem stops temporarily.
However, after the next update, the issue always comes back.

This strongly suggests that old Cron schedules remain registered internally, even though they are no longer visible or editable.


What I suspect (without claiming certainty)

I cannot say for sure whether this is related to cache, memory, Redis, or another internal mechanism.

What I can say with confidence is:

  • The issue is persistent across versions

  • It is triggered by updates

  • It is directly tied to the number of workers

  • It behaves as if old Cron triggers are never fully removed


Why this is a serious problem

  • Causes duplicated executions

  • Forces defensive logic inside workflows

  • Makes Cron-based automation unreliable in production

  • Requires manual duplication of workflows as a workaround


n8n setup

  • n8n version: 1.123.7

  • Execution mode: queue

  • Database: PostgreSQL

  • Redis: enabled (Bull queue)

  • Deployment: Docker (EasyPanel)

  • Workers: multiple

  • Operating system: Linux


Question to the community / core team

  • Is there a known issue related to Cron triggers not being fully deregistered in queue mode?

  • Is there a safe way to rebuild or reset Cron triggers after updates?

  • Is Cron officially supported in queue mode with multiple workers in this scenario?

  • Are there plans to address this behavior in upcoming versions?

Any clarification or guidance would be greatly appreciated.

Looks like you’re dealing with “ghost cron triggers” - a known issue in n8n. You could try manually clearing the cron trigger schedule in the node settings and re-adding it. This might help reset the internal schedule and prevent the duplicates.

Let me know if this works!

Yes, this does solve it. However, every time I update n8n, I have to clean up all workflows that use Cron by deleting them and recreating them for the issue to stop.

What’s strange is that this always happens after updating n8n to a new version. If I only disable the workflow instead of deleting it, the triggers keep running, one per worker I have. I’m looking for a definitive solution to this.

Can you add a script to your self-hosted instance that automatically prunes old data and executions?

Same problem here, and I’m trying the same makeshift workarounds. I updated to the stable version 2 hoping it would fix the issue, but the problem persists.

Found the root cause and fixed it.

The issue was that my worker containers were running with the default command (n8n start) instead of n8n worker. This made each worker behave as an additional main instance, registering and activating all cron/schedule triggers independently.

In my case (Easypanel/Docker), the “Command” field for the worker services was empty, which defaults to n8n start. The fix was simply setting the command to:

n8n worker --concurrency=5

After redeploying, workers now only pick up jobs from the Redis queue and no longer activate workflows or register cron triggers.
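For anyone running plain Docker Compose rather than EasyPanel, the equivalent fix looks roughly like this. Service names, image tag, and replica count are assumptions, not from the original post, and depending on the image’s entrypoint the command value may need to be `worker --concurrency=5` instead:

```yaml
# Sketch only: one main instance (default "n8n start") plus workers that
# explicitly run "n8n worker" so they never register cron/schedule triggers.
services:
  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
    # no command set -> container runs the default main process

  n8n-worker:
    image: n8nio/n8n
    command: n8n worker --concurrency=5
    environment:
      - EXECUTIONS_MODE=queue
    deploy:
      replicas: 3
```

The key property is that exactly one service omits the worker command: that single main owns all cron/schedule triggers, while every replica of the worker service only consumes jobs from the Redis queue.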

The clearest symptom of this misconfiguration is the number of duplicate cron executions matching exactly your worker count. If you’re seeing that, check your worker command.

Hope this helps anyone else running into this.


Howdy

The ghost trigger problem in queue mode comes down to how n8n handles trigger ownership. In queue mode, Schedule and Cron triggers are meant to run exclusively on the main process, not on workers. If you have more than one main process running (which happens easily during rolling restarts, Docker Compose scale-up, or any deployment where the old container stays alive briefly while the new one starts), both main instances register the same triggers. When the old container eventually dies, those trigger registrations aren’t always properly cleaned from Redis, so they keep firing as “ghost” executions even after restart.

A few things that help:

  1. Ensure only one main process is running at any time. Check with redis-cli that you don’t have multiple n8n instances competing: look for duplicate keys under bull:n8n:* that relate to your scheduled workflows.

  2. After any update, stop all workers first, then stop the main, restart main, then bring workers back up. This order matters because if main restarts while workers are still running, the trigger re-registration can create duplicates.

  3. Set EXECUTIONS_TIMEOUT in your environment to bound how long any execution can run, which at least limits the damage from ghost executions.

  4. To clear stale Bull jobs manually between restarts:
    redis-cli --scan --pattern "bull:n8n*" | xargs -r redis-cli DEL
    Only safe to do when n8n is fully stopped. (--scan iterates incrementally instead of blocking Redis the way KEYS can, and -r stops xargs from calling DEL with no arguments.)

  5. In newer n8n versions there’s been work on leader election for trigger ownership in multi-instance deployments. Worth checking the changelog for your version - if it includes an N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN or similar flag, that can help with clean shutdowns.
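The restart ordering in point 2 can be scripted. A sketch assuming Docker Compose with service names n8n-worker and n8n-main (both names are assumptions; adapt to your stack):

```
# Drain workers first, then bounce the main, then bring workers back.
docker compose stop n8n-worker
docker compose stop n8n-main
docker compose up -d n8n-main
sleep 10   # give the main time to re-register triggers before workers return
docker compose up -d n8n-worker
```

Running this as your standard update procedure avoids the window where an old and a new main instance both register the same triggers.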

The real long-term fix if you need high availability is to run a single main with multiple workers, not multiple mains. Workers don’t own triggers, so scaling workers horizontally doesn’t cause this.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.