Describe the problem/error/question
Workflows triggered by a cron node are executing 7 times.
What is the error message (if any)?
Please share your workflow
2 of the executions shown here I canceled manually.
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
Share the output returned by the last node
Information on your n8n setup
- n8n version: 1.79.3
- Database (default: SQLite): postgres
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): docker
- Operating system: linux
I've faced similar issues before, where a workflow triggered by a cron node runs more than once with an interval of a few seconds in between; unfortunately there was no proper fix.
Now it is impacting our production environment.
We update the workflow based on our needs and save the new version.
I have noticed my workflow is being executed 7 times, and some of those executions run the old version of the workflow as well. So within these executions there are both the updated version of the workflow and the old one, and the old-version execution overwrites the new version. So it is very critical now.
I have updated n8n and it is running 1.79.3 at the moment.
I have tried to delete the cron node → save → create the node again → save,
but this didn't resolve the issue.
Please advise.
Could you provide the workflow JSON? Copy and paste it into the section that appears when you click the ‘</>’ button.
Cron nodes have notoriously had historical glitches. Are you up to date? I have another theory based on the trigger-at-minute setting, but that is related to real cron, so I'd like to see the setup first.
I can't share the whole JSON due to security restrictions; however, here is the portion for the cron node:
"id": "9b92b4df-b11d-4136-903c-9b34d2612f8a",
"name": "Schedule Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1.1,
"position": [
-2920,
1220
]
I'm also working on debugging, and I have a thought: the cron-triggered workflows are executing 7 times, the same count as the n8n pods I have in my k8s. I have 7 n8n pods and 10 n8n worker pods… coincidence?
The forms will typically auto-remove any credentials/PII.
Actually, your point could be valid if the k8s is not set up to work as a fallback rather than direct replication. Are other scenarios being processed multiple times per trigger?
I have 2 workflows that run daily, started by a cron node. One of the workflows has sub-workflows, so all of them run multiple times; my count is 7.
Another worrying case is that I ran the workflow manually and saw this:
It runs many times even when triggered manually.
n8n.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: n8n
    component: deployment
  name: n8n-deployment
  namespace: n8n
spec:
  replicas: 7
  selector:
    matchLabels:
      app: n8n
      component: deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app: n8n
        component: deployment
    spec:
      containers:
        - env:
            - name: N8N_LOG_LEVEL
              value: verbose
            - name: N8N_LOG_OUTPUT
              value: console
            - name: N8N_LOG_FILE_LOCATION
              value: /var/log/n8n.log
          envFrom:
            - configMapRef:
                name: n8n-configmap
            - configMapRef:
                name: smtp-configmap
            - secretRef:
                name: n8n-secrets
          image: n8nio/n8n:1.79.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 5678
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: n8n
          ports:
            - containerPort: 5678
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 5678
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
n8n-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: n8n-worker
    component: deployment
  name: n8n-worker-deployment
  namespace: n8
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: n8n-worker
      component: deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app: n8n-worker
        component: deployment
    spec:
      containers:
        - args:
            - n8n worker --concurrency=5;
          command:
            - /bin/sh
            - -c
          envFrom:
            - configMapRef:
                name: n8n-configmap
            - secretRef:
                name: n8n-secrets
          image: n8nio/n8n:1.79.3
          imagePullPolicy: IfNotPresent
          name: n8n-worker
          ports:
            - containerPort: 5678
              name: http
              protocol: TCP
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
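One small thing I am not sure about in the worker manifest: I believe the official n8nio/n8n image's entrypoint already forwards its arguments to the n8n CLI, so the /bin/sh -c wrapper may be unnecessary. Something like this should be equivalent (untested on my side):

      containers:
        - args:
            - worker
            - --concurrency=5
          # command is left unset so the image's own entrypoint runs `n8n worker --concurrency=5`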
Would you suggest any improvements to the deployment setup? We use a Postgres DB and a Redis cluster; they are configured in the configmaps.
Why do you think you need 7 replicas?
Hi,
you need to restart the n8n container/stack; that will most likely fix the duplications.
We have 8 workflows that actively run on a daily basis (2 of them with crons). When we had fewer pods running, they were occasionally restarting due to OOM errors; sometimes it was just an exit code. To prevent that from happening, I increased the replica count.
I've been restarting at the deployment level. Also, during an upgrade to a newer version it restarts as well; is that different from restarting at the image level?
I'll try it out regardless.
My theory has been confirmed. I scaled the main deployment down to 2 replicas, and now I see 2 executions happening. If I scale the main down to 1, I'm afraid it will start restarting again. Any thoughts? Thank you very much for your help!
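For what it's worth, the idea I'm considering for going down to a single main replica without the OOM restarts is to give that one pod more memory instead of more replicas, roughly like this on the main container (the numbers are just my guess, not tested):

          resources:
            requests:
              cpu: 500m
              memory: 2Gi   # guessed value, would be sized from actual usage
            limits:
              memory: 3Gi   # guessed value, a cap makes memory-related restarts more predictable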
Personally I don't have a lot of experience with k8s. I would assume your config may be incorrect, as it sounds like it's deploying active replicas rather than using them as a fallback.
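That said, the pattern I have seen for scaling n8n is queue mode: a single main instance that owns the schedule triggers, plus multiple workers pulling jobs from Redis, rather than several main replicas. On your main deployment that would look roughly like the sketch below (placeholder values only; your n8n-configmap may already set some of these since you already run workers, so please verify against the n8n docs):

spec:
  replicas: 1                       # only one main instance should register cron/schedule triggers
  template:
    spec:
      containers:
        - env:
            - name: EXECUTIONS_MODE
              value: queue          # main only enqueues executions; the worker pods run them
            - name: QUEUE_BULL_REDIS_HOST
              value: redis          # placeholder: your Redis cluster host
            - name: QUEUE_BULL_REDIS_PORT
              value: "6379"
            # N8N_ENCRYPTION_KEY must be identical on main and workers (likely already in n8n-secrets)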