Describe the problem/error/question
We have an n8n instance hosted on our AWS EKS cluster, and we occasionally see issues with scheduler/trigger management.
- Case 1: I have a workflow triggered on a cron schedule. The workflow was active, and I needed to make a change to adjust some logic. After I saved the workflow, both the old and the new versions were triggered on the same cron.
- Case 2: I have a workflow triggered by a Gmail trigger on new email arrival. I needed to make a change, so I deactivated the workflow first, made the change, and saved it. But then, without me reactivating it, the workflow continued to be triggered for days, and when I did activate it during that time, it was triggered twice for each email, running both versions.
In both cases, our Infra team restarted the pod and the issue was resolved, but they could find no notifications or errors in the logs, nor can I see whether a schedule change succeeded or not.
How can we avoid this issue? Or, if the issue is known and has no workaround, how can I get a list of the *actual* registered trigger schedules, instead of relying on the workflow's active flag and the Schedule Trigger node configuration, which don't always seem to reflect reality?
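For what it's worth, one way I can at least cross-check the persisted state is the n8n Public API: `GET /api/v1/workflows` with `active=true` returns the workflows the database flags as active. I understand this only reflects the stored flag, not the in-memory trigger registrations, but comparing it against what is actually firing might help spot the drift. A minimal sketch (the base URL and API key are placeholders):

```python
# Cross-check sketch: list the workflows the n8n database flags as active.
# NOTE: this reflects the persisted "active" flag only, not whatever trigger
# state the scheduler currently holds in memory.
import json
from urllib import request

def active_workflows_url(base_url: str) -> str:
    """Build the Public API endpoint that lists only active workflows."""
    return f"{base_url}/api/v1/workflows?active=true"

def fetch_active_workflows(base_url: str, api_key: str) -> list:
    """Call the n8n Public API and return metadata for active workflows."""
    req = request.Request(
        active_workflows_url(base_url),
        headers={"X-N8N-API-KEY": api_key, "Accept": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["data"]

# Example usage (placeholder host/key, not run here):
#   for wf in fetch_active_workflows("https://n8n.example.com", "<api-key>"):
#       print(wf["id"], wf["name"])
```

Is there a supported way to see the scheduler's actual in-memory registrations, beyond this database-level view?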
Also, is there anything I should ask our Infra or Admin team to look at to find where the schedule change might have failed?
Thanks
What is the error message (if any)?
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
-
Debug info
core
- n8nVersion: 1.111.0
- platform: docker (self-hosted)
- nodeJsVersion: 22.19.0
- database: postgres
- executionMode: regular
- concurrency: -1
- license: enterprise (production)
- consumerId:
storage
- success: all
- error: all
- progress: false
- manual: true
- binaryMode: memory
pruning
- enabled: true
- maxAge: 336 hours
- maxCount: 10000 executions
client
- userAgent: mozilla/5.0 (macintosh; intel mac os x 10_15_7) applewebkit/537.36 (khtml, like gecko) chrome/140.0.0.0 safari/537.36
- isTouchDevice: false
Generated at: 2025-09-29T13:37:53.472Z