Cron trigger executing multiple times after updates due to “ghost triggers” in queue mode with multiple workers

Describe the problem / error / question

I am facing a recurring issue with Cron triggers executing multiple times when running n8n in queue mode with multiple workers.

This problem started several versions ago and is still present in n8n v1.123.7. So far, I haven’t found any forum post or official guidance that provides a definitive, long-term solution.

The behavior strongly suggests that old Cron schedules remain internally registered even after the workflow is edited or the Cron interval is changed. I refer to these as “ghost cron triggers.”


Important clarification about errors

The errors shown in the execution list are intentional.

I added a manual “lock” inside the workflow to prevent multiple concurrent executions.
When multiple executions start at the same time, the workflow intentionally throws an error to block duplicate runs.

So the errors are a symptom, not the root cause.
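For clarity, the guard is roughly this pattern (a minimal sketch; in my real workflow the list of running start times comes from a shared store, and all function names here are illustrative):

```javascript
// Sketch of the manual duplicate-run guard used in the workflow.
// Given the start time of this execution and the start times of other
// executions already running, anything starting within the same window
// is treated as a duplicate. (In practice the running start times come
// from a shared store; here they are passed in directly.)

function isDuplicateRun(startedAtMs, runningStartsMs, windowMs = 1000) {
  // A run is a duplicate if another execution started within windowMs.
  return runningStartsMs.some((t) => Math.abs(t - startedAtMs) < windowMs);
}

// The Code node throws to abort the duplicate, which is why the
// execution list shows intentional errors.
function guard(startedAtMs, runningStartsMs) {
  if (isDuplicateRun(startedAtMs, runningStartsMs)) {
    throw new Error('Duplicate execution blocked by manual lock');
  }
  return { locked: true };
}
```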


What actually happens

  • The workflow has only one trigger, a single Cron node.

  • Initially, the Cron is configured to run, for example, every 5 minutes.

  • After some time — usually after an update or redeploy — the workflow starts executing multiple times at the exact same second.

  • The number of simultaneous executions exactly matches the number of workers.

  • When I change the Cron interval (for example, from every 5 minutes to every 1 minute):

    • the new 1-minute schedule works correctly

    • but the old 5-minute executions keep running

  • If I remove one worker, exactly one execution disappears, which shows a direct 1:1 relationship between workers and duplicated executions.
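To make the 1:1 pattern visible, I grouped execution start times by second. A small helper like this (hypothetical; it assumes one ISO timestamp per execution, exported from the executions list) makes the duplication obvious:

```javascript
// Group execution start timestamps (ISO 8601 strings) by the second
// they started in, and report any second with more than one execution.

function findDuplicateSeconds(isoTimestamps) {
  const bySecond = new Map();
  for (const ts of isoTimestamps) {
    const second = ts.slice(0, 19); // "YYYY-MM-DDTHH:MM:SS"
    bySecond.set(second, (bySecond.get(second) || 0) + 1);
  }
  return [...bySecond.entries()]
    .filter(([, count]) => count > 1)
    .map(([second, count]) => ({ second, count }));
}
```

With N workers running, every second this reports shows a count equal to N.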


When the issue happens

  • After updating n8n

  • After redeploying the stack

  • After restarting the environment

  • After editing an existing Cron schedule

Once it starts, it does not stop by itself.


What I already tested

  • Ensured Cron is not running on workers:

    EXECUTIONS_MODE=queue
    N8N_ROLE=worker

  • Confirmed there is only one main instance

  • Restarted the environment in a controlled way

  • Verified the workflow has only one trigger

  • Verified no automatic retries are enabled

The issue still persists.
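For completeness, the main/worker split in my stack looks roughly like this (simplified from my compose file; service names, image tag, and replica count are illustrative):

```yaml
# Simplified sketch of my Docker setup (EasyPanel). Only the main
# instance should own schedules; workers only consume queued jobs.
services:
  n8n-main:
    image: n8nio/n8n:1.123.7
    environment:
      - EXECUTIONS_MODE=queue
      - DB_TYPE=postgresdb
      - QUEUE_BULL_REDIS_HOST=redis

  n8n-worker:
    image: n8nio/n8n:1.123.7
    command: worker            # workers run the n8n worker command
    environment:
      - EXECUTIONS_MODE=queue
      - DB_TYPE=postgresdb
      - QUEUE_BULL_REDIS_HOST=redis
    deploy:
      replicas: 3              # multiple workers, as in my setup
```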


Temporary workaround

The only workaround I found is:

  1. Duplicate every workflow that contains a Cron trigger

  2. Delete the original workflow

This creates a new workflow ID, and the problem stops temporarily.
However, after the next update, the issue always comes back.

This strongly suggests that old Cron schedules remain registered internally, even though they are no longer visible or editable.
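The duplicate-then-delete workaround can be scripted against the n8n Public API (a sketch, not tested against my instance; the base URL, API key, and the "(recreated)" suffix are placeholders):

```javascript
// Sketch of the duplicate-then-delete workaround via the n8n Public API v1.
// Creating the copy without an id forces n8n to assign a fresh workflow ID,
// which is what makes the ghost schedule stop firing.

// Build the creation payload for the copy: same nodes and connections,
// but no id field.
function buildCopyPayload(workflow) {
  return {
    name: `${workflow.name} (recreated)`,
    nodes: workflow.nodes,
    connections: workflow.connections,
    settings: workflow.settings || {},
  };
}

async function recreateWorkflow(baseUrl, apiKey, workflowId) {
  const headers = { 'X-N8N-API-KEY': apiKey, 'Content-Type': 'application/json' };

  // 1. Fetch the original workflow.
  const original = await (
    await fetch(`${baseUrl}/api/v1/workflows/${workflowId}`, { headers })
  ).json();

  // 2. Create the copy under a new ID.
  const created = await (
    await fetch(`${baseUrl}/api/v1/workflows`, {
      method: 'POST',
      headers,
      body: JSON.stringify(buildCopyPayload(original)),
    })
  ).json();

  // 3. Activate the copy, then delete the original (and its old schedule).
  await fetch(`${baseUrl}/api/v1/workflows/${created.id}/activate`, { method: 'POST', headers });
  await fetch(`${baseUrl}/api/v1/workflows/${workflowId}`, { method: 'DELETE', headers });
  return created.id;
}
```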


What I suspect (without claiming certainty)

I cannot say for sure whether this is related to cache, memory, Redis, or another internal mechanism.

What I can say with confidence is:

  • The issue is persistent across versions

  • It is triggered by updates

  • It is directly tied to the number of workers

  • It behaves as if old Cron triggers are never fully removed


Why this is a serious problem

  • Causes duplicated executions

  • Forces defensive logic inside workflows

  • Makes Cron-based automation unreliable in production

  • Requires manual duplication of workflows as a workaround


n8n setup

  • n8n version: 1.123.7

  • Execution mode: queue

  • Database: PostgreSQL

  • Redis: enabled (Bull queue)

  • Deployment: Docker (EasyPanel)

  • Workers: multiple

  • Operating system: Linux


Question to the community / core team

  • Is there a known issue related to Cron triggers not being fully deregistered in queue mode?

  • Is there a safe way to rebuild or reset Cron triggers after updates?

  • Is Cron officially supported in queue mode with multiple workers in this scenario?

  • Are there plans to address this behavior in upcoming versions?

Any clarification or guidance would be greatly appreciated.