Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Operating system: Linux
I’ve been facing similar issues before, where a workflow triggered by a Cron node runs more than once with an interval of a few seconds in between; unfortunately there was no proper fix.
Now it is impacting our production environment.
We are updating the workflow based on our needs and saving the new version.
I have noticed my workflow is being executed 7 times, and some of those executions are running the old version of the workflow. So within these executions there are both the updated version of the workflow and the old one, and the old-version execution overwrites the new version. So it’s very critical now.
I have updated n8n and it is running on 1.79.3 at the moment.
I have tried to delete the Cron node → save → create the node again → save.
Could you provide the workflow JSON? Copy and paste it into the section that appears when you click the ‘</>’ button.
Cron nodes have notoriously had glitches historically. Are you up to date? I have another theory based on the trigger-at-minute setting, but that is related to real cron, so I’d like to see the setup first.
I’m also working on debugging, and I have a thought: the cron-triggered workflows are executing 7 times, the same count as the n8n pods I have in my k8s cluster. I have 7 main n8n pods and 10 n8n worker pods… coincidence?
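To illustrate the theory (this is only a sketch, not my actual manifest; the name, labels and values are placeholders): if every replica of the main n8n deployment connects to the same database and registers the active Cron triggers itself, one schedule firing would produce one execution per main pod.

```yaml
# Hypothetical sketch of the main deployment: if each main replica registers the
# Cron triggers on its own, one schedule firing produces <replicas> executions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-main           # placeholder name
spec:
  replicas: 7              # 7 main pods -> 7 executions per Cron firing
  selector:
    matchLabels:
      app: n8n-main
  template:
    metadata:
      labels:
        app: n8n-main
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:1.79.3
          ports:
            - containerPort: 5678
```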
The forms will typically auto-remove any credentials/PII.
Actually, your point could be valid. If the k8s setup is not configured to act as a fallback rather than direct replication, are other scenarios being processed multiple times per trigger?
I have 2 workflows which run daily, started by a Cron node. One of the workflows has sub-workflows, so all of them run multiple times; my count is 7.
Another worrying case is that I ran the workflow manually and saw:
We have 8 workflows which actively run on a daily basis (2 of them with Cron triggers). When we had fewer pods running, they were occasionally restarting due to OOM errors; sometimes it was just an exit code. To prevent that from happening, I increased the replica count.
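For context, those OOM restarts are typically the pod being killed for exceeding its container memory limit, so the knob that governs them is the container resources block rather than the replica count. A minimal sketch of that block with placeholder numbers (not our real values):

```yaml
# Placeholder values: size these to the real memory profile of the n8n pod.
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"          # raising this (or moving load to workers) addresses OOM kills
    cpu: "1"
```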
I’ve been restarting at the deployment level. Also, during the upgrade to a newer version it restarts as well. Is that different from restarting at the image level?
I’ll try it out regardless.
My theory has been confirmed. I scaled the replica count of the main deployment down to 2, and I now see 2 executions happening. If I scale the main deployment down to 1, I’m afraid it will start restarting again. Any thoughts? Thank you very much for your help!
Personally I don’t have a lot of experience with k8s. I would assume your config may be incorrect, as it sounds like it’s deploying active replicas rather than using them as a fallback.
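If that is the case, here is a rough sketch of a layout that gives a single trigger owner (assuming a queue-mode setup with Redis; the names, image tag and numbers are placeholders, not your actual config): keep the main deployment at one replica so only it registers the schedules, and scale the worker deployment for throughput instead.

```yaml
# Rough sketch, not a drop-in manifest: names, values and the Redis host are placeholders.
# Idea: exactly one main instance owns the Cron/Schedule triggers; load is handled
# by scaling the worker deployment, not the main one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-main
spec:
  replicas: 1                        # one main -> one execution per schedule firing
  selector:
    matchLabels:
      app: n8n-main
  template:
    metadata:
      labels:
        app: n8n-main
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:1.79.3
          env:
            - name: EXECUTIONS_MODE
              value: "queue"         # hand executions off to workers via Redis
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis"         # placeholder service name
          resources:
            requests:
              memory: "512Mi"
            limits:
              memory: "1Gi"          # placeholder; give the single main enough headroom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 10                       # scale these for throughput instead of the main
  selector:
    matchLabels:
      app: n8n-worker
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
        - name: n8n-worker
          image: n8nio/n8n:1.79.3
          command: ["n8n", "worker"]
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis"
```

With a split like this, the worry about scaling the main back to 1 becomes a memory-sizing question for that one pod rather than a trigger-duplication question.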