Schedule trigger spawning thousands of executions

After updating to 2.13.4, new workflows with schedule triggers are spawning thousands of executions at once. Self-hosting on Docker. Please help. Schedule triggers are essential.


had the same thing happen after updating once. schedule triggers can get stuck if the execution history isn't cleaned up properly. first check your logs for repeating execution IDs, that's the smoking gun. are you self-hosted or cloud?
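Since you're on Docker, one quick way to check for repeating execution IDs is to count duplicates in the container logs. This is only a sketch: the service name `n8n` and the `execution <number>` log pattern are assumptions, so adjust both to whatever your logs actually print.

```shell
# Count how often each execution ID appears in the log stream.
# The 'execution [0-9]+' pattern is an assumption about the log format --
# adjust it to match your actual n8n log lines.
count_execution_ids() {
  grep -oE 'execution [0-9]+' | sort | uniq -c | sort -rn
}

# Real usage (feed the last hour of container logs):
#   docker logs n8n --since 1h 2>&1 | count_execution_ids | head

# Demo on sample log lines:
printf '%s\n' \
  'Started execution 101' \
  'Started execution 101' \
  'Started execution 102' | count_execution_ids
```

A healthy schedule shows each ID once; a stuck trigger shows the same ID repeated many times at the top of the list.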

Hi @Curt_Weindel, welcome!
If those are legitimate, fully completed executions showing up, try switching the trigger to a cron expression; that may help. This shouldn't be happening, though, so restart your instance and check your time zone setting (the three-dot menu at the top right of the n8n canvas).

Time zones are good, and I've restarted in Coolify and stopped all executions, including queued ones. This is something different from time spread or catching up: in the past 24 hours these two workflows have spawned over 140,000 executions. I've also tried changing from "every 5 minutes" to a cron expression with 5-minute spacing, same result. It will sometimes work properly if I run it with only the trigger and the first node wired, then wire up the rest after a manual execution. And if I change anything while it's published and running, that triggers 10,000+ spawns, all at the same time, or at least within a fraction of a second too small to be logged. Claude is stumped too 🤣

I even tried a workflow containing only a schedule trigger wired to a webhook that called the problem workflow, just in case another node was the cause, and it behaved the same way, spawning thousands.

Thanks for the extra context. One thing to check: how many replicas is Coolify running for your n8n container? n8n's main process isn't designed to run in parallel. If more than one instance shares the same database, they'll both pick up schedule triggers simultaneously and you get exactly this kind of explosion. Scale to exactly 1 replica in your Coolify deployment settings. If you're already on one, the next step is checking for stuck records in the executions table left over from the 2.13.4 update.
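A quick sanity check you can run from the host is counting how many n8n containers are up. The name filter `n8n` is an assumption; match it to your actual container name in Coolify.

```shell
# How many n8n main-process containers are running?
# More than 1 sharing one database can double-fire schedule triggers.
# The 'n8n' name filter is an assumption -- match your container name.
count_containers() {
  grep -c -- "$1"
}

# Real usage:
#   docker ps --format '{{.Names}}' | count_containers n8n

# Demo on sample `docker ps` output (two n8n replicas, one postgres):
printf 'n8n-main-1\nn8n-main-2\npostgres-1\n' | count_containers n8n
```

If that prints anything greater than 1, you've likely found the source of the explosion.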

Thank you. I'm not aware of any replicas; I only built these flows yesterday.

worth checking explicitly in coolify: go to your service > replicas/scale settings and confirm it shows 1. coolify can default to more than 1 in some configs even if you didn't consciously set it. if you're definitely on 1 replica, i'd search the n8n github issues for 2.13.4. a mass spawn triggered by workflow saves/updates sounds like it could be a regression in that release.

One more thing: I actually had the problem yesterday while running 2.8.3, and decided to update because I thought it might just be a bug. Updated to 2.13.4 and got the same results. Rebuilt the workflows from scratch, same issues with schedule triggers. I'm not saying it's n8n, but if it is, this is the place to find out. Like I said, I made a flow with only a schedule trigger and a webhook and it happened too.

It looks like if I execute manually first and then hit publish, they're stable. If I just open the flow and publish, they freak out.

good to know, that rules out the 2.13.4 angle entirely. the execute-first pattern is the most useful clue yet: a manual run seeds the initial execution record the scheduler anchors to. without that record, n8n may activate the trigger before the workflow state is fully initialized in the db. keep manual-first as your workaround, and i'd open a github issue with that exact reproduction step; it's specific enough that the team should be able to pin it down.