Custom cron in Schedule Trigger behaves unexpectedly even when the workflow is disabled

Describe the problem/error/question

I am using the Schedule Trigger node configured with a custom cron expression. I deactivated my workflow, but the custom cron (*/2 * * * *) keeps firing; I can see the executions in the history. Also, when I change the cron period to, for example, */4 * * * *, I see two executions starting at the same time. My execution mode is queue: I have one n8n instance as the main/manager and one worker for jobs. I don't see any error in the logs, even with the log level set to debug. Is this a bug or a known issue with custom cron in the Schedule Trigger? What should I do?

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 1.1.1
  • **Database (default: SQLite):** postgresql-15.3.0
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** main
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Kubernetes
  • **Operating system:**

Hey @Aydemir,

Welcome to the community :cake:

Can you share your configuration? It sounds like you might have 2 main instances running. It could also be worth upgrading from 1.1.1 to see if this is something that has already been resolved in one of the versions released since then.

Hi Jon,

I have two deployments. One is n8n and the other is n8n-worker. They share the same environment, shown below.

config.json:

{
  "executions": {
    "pruneData": "true",
    "pruneDataMaxAge": 3760
  }
}

      env:
        - name: N8N_CONFIG_FILES
          value: /n8n-config/config.json
        - name: QUEUE_BULL_REDIS_HOST
          value: test-n8n-redis-master
        - name: EXECUTIONS_MODE
          value: queue
        - name: QUEUE_BULL_REDIS_PASSWORD
          value: changeme
        - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
          value: 'true'
        - name: DB_POSTGRESDB_HOST
          value: n8n-posgresql-postgresql-hl
        - name: DB_POSTGRESDB_PASSWORD
          value: changeme
        - name: DB_POSTGRESDB_USER
          value: postgres
        - name: DB_POSTGRESDB_PORT
          value: '5432'
        - name: DB_POSTGRESDB_DATABASE
          value: n8n
        - name: DB_TYPE
          value: postgresdb
        - name: N8N_EDITOR_BASE_URL
          value: 'https://test-n8n.spider.com.tr'
        - name: VUE_APP_URL_BASE_API
          value: 'https://test-n8n.spider.com.tr/'

The only difference between the two deployments is that the n8n-worker deployment runs as `n8n --worker --concurrency=2`.
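For reference, the worker container spec is along these lines (a rough sketch rather than the exact manifest; the command/args split and the `worker` subcommand form are assumptions):

# Sketch of the n8n-worker container; illustrative, not the exact manifest.
containers:
  - name: n8n-worker
    image: n8nio/n8n:1.1.1
    # the worker pulls jobs from the Bull/Redis queue instead of scheduling them itself
    command: ["n8n"]
    args: ["worker", "--concurrency=2"]
    env:
      # same environment as the main deployment above, including
      # EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_HOST
      - name: EXECUTIONS_MODE
        value: queue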

Hey @Aydemir,

I was expecting more config if you are using k8s. Can you share the full thing?

Hi Jon,

The remaining environment variables are for metrics and logs. Here they are:

    - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
      value: 'false'
    - name: WEBHOOK_TUNNEL_URL
      value: 'https://test-n8n-wbhooks.spider.com.tr/'
    - name: N8N_PORT
      value: '5678'
    - name: N8N_PROTOCOL
      value: http
    - name: N8N_HOST
      value: localhost
    - name: N8N_ENCRYPTION_KEY
      value: e2e093a6e3112f02915d
    - name: WEBHOOK_URL
      value: 'https://test-n8n-wbhooks.spider.com.tr/'
    - name: GENERIC_TIMEZONE
      value: Europe/Istanbul
    - name: TZ
      value: Europe/Istanbul
    - name: N8N_METRICS
      value: 'true'
    - name: N8N_METRICS_INCLUDE_DEFAULT_METRICS
      value: 'true'
    - name: N8N_METRICS_INCLUDE_CACHE_METRICS
      value: 'true'
    - name: N8N_METRICS_INCLUDE_MESSAGE_EVENT_BUS_METRICS
      value: 'true'
    - name: N8N_METRICS_INCLUDE_WORKFLOW_ID_LABEL
      value: 'true'
    - name: N8N_METRICS_INCLUDE_NODE_TYPE_LABEL
      value: 'true'
    - name: N8N_METRICS_INCLUDE_CREDENTIAL_TYPE_LABEL
      value: 'true'
    - name: N8N_METRICS_INCLUDE_API_ENDPOINTS
      value: 'true'
    - name: N8N_METRICS_INCLUDE_API_PATH_LABEL
      value: 'true'
    - name: N8N_METRICS_INCLUDE_API_METHOD_LABEL
      value: 'true'
    - name: N8N_METRICS_INCLUDE_API_STATUS_CODE_LABEL
      value: 'true'
    - name: N8N_LOG_OUTPUT
      value: 'console,file'
    - name: N8N_LOG_FILE_LOCATION
      value: /logs/n8n.log
    - name: N8N_LOG_FILE_MAXSIZE
      value: '50'
    - name: N8N_LOG_FILE_MAXCOUNT
      value: '60'
    - name: N8N_LOG_LEVEL
      value: debug

(screenshot: pod list showing test-n8n-6487f7b76-hpkng with READY 2/2)

and deployments are:

(screenshot: deployment list)

Hey @Aydemir,

If I run that it won't start a k8s instance of n8n. It isn't just the n8n config I want but the full k8s config you are using to spin up your instances. Looking at your screenshot, though, test-n8n-6487f7b76-hpkng shows Ready 2/2, which to me suggests you have 2 out of 2 instances ready, and that could be the issue.

Yes Jon, I put a fluentbit sidecar inside the n8n pod to send logs to Elasticsearch; that's why you see 2/2. I only added it after I saw the unexpected behaviour, because I want to create an alarm if more than one cron execution happens at the same time. I think the main problem is that something causes the n8n main instance (the n8n deployment) to write one extra record of the same workflow into Redis, and that's why the worker picks up the Schedule Trigger job and executes it. You can ignore the fluentbit sidecar.
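To make the pod layout concrete: the 2/2 in the READY column comes from two containers in one pod, not two n8n replicas. A sketch of the spec (the fluent-bit image and volume names here are assumptions) would be:

# Sketch: one n8n container plus a fluent-bit log-shipping sidecar,
# which is why `kubectl get pods` shows READY 2/2 for this pod.
spec:
  containers:
    - name: n8n
      image: n8nio/n8n:1.1.1
      volumeMounts:
        - name: logs        # assumed volume name
          mountPath: /logs  # matches N8N_LOG_FILE_LOCATION above
    - name: fluent-bit      # sidecar shipping /logs/n8n.log to Elasticsearch
      image: fluent/fluent-bit
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}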

I installed n8n from this helm chart: https://github.com/8gears/n8n-helm-chart (a Kubernetes Helm chart for n8n, a workflow automation tool).
My values.yaml template is:

n8n:
  encryption_key: # n8n creates a random encryption key automatically on the first launch and saves it in the ~/.n8n folder. That key is used to encrypt the credentials before they get saved to the database.

defaults:

config:
  executions:
    pruneData: "true" # prune executions by default
    pruneDataMaxAge: 3760 # per default we store 1 year of history
  database:
    type: postgresdb # Type of database to use - other possible types ['sqlite', 'mariadb', 'mysqldb', 'postgresdb'] - default: sqlite
    postgresdb:
      database: n8n # PostgresDB Database - default: n8n
      host: n8n-posgresql-postgresql-hl # PostgresDB Host - default: localhost
      password: abc123xx # PostgresDB Password - default: ''
      port: 5432 # PostgresDB Port - default: 5432
      user: postgres # PostgresDB User - default: root
      schema: public # PostgresDB Schema - default: public
  executions:
    process: own # In what process workflows should be executed - possible values [main, own] - default: own
    timeout: -1 # Max run time (seconds) before stopping the workflow execution - default: -1
    maxTimeout: 3600 # Max execution time (seconds) that can be set for a workflow individually - default: 3600
    saveDataOnError: all # What workflow execution data to save on error - possible values [all, none] - default: all
    saveDataOnSuccess: all # What workflow execution data to save on success - possible values [all, none] - default: all
    saveDataManualExecutions: false # Save data of executions when started manually via editor - default: false
    pruneData: false # Delete data of past executions on a rolling basis - default: false
    pruneDataMaxAge: 336 # How old (hours) the execution data has to be to get deleted - default: 336
    pruneDataTimeout: 3600 # Timeout (seconds) after execution data has been pruned - default: 3600
  generic:
    timezone: Europe/Istanbul # The timezone to use - default: America/New_York

extraEnv:
extraEnvSecrets: {}

persistence:
  enabled: true
  type: emptyDir # what type of volume; possible options are [existing, emptyDir, dynamic] - dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
  storageClass: ""
  accessModes:
    - ReadWriteOnce
  size: 1Gi

replicaCount: 1

deploymentStrategy:
  type: "Recreate"

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: "1.1.1"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}
podLabels: {}
podSecurityContext: {}
securityContext: {}
lifecycle: {}
command: []

service:
  type: ClusterIP
  port: 80
  annotations: {}

workerResources: {}
webhookResources: {}

resources:
  limits:
    cpu: "1"
    memory: 2Gi
  requests:
    cpu: 10m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80

nodeSelector: {}
tolerations: []
affinity: {}

scaling:
  enabled: true

  worker:
    count: 2
    concurrency: 2

  # With .Values.scaling.webhook.enabled=true you disable webhooks in the main
  # process but enable processing on a separate webhook instance. See
  # https://github.com/8gears/n8n-helm-chart/issues/39 for the full explanation.
  webhook:
    enabled: true
    count: 1

  redis:
    host: test-n8n-redis-master
    password: abc123xx

# Bitnami Redis configuration:
# https://github.com/bitnami/charts/tree/master/bitnami/redis
redis:
  enabled: true
  architecture: standalone
  auth:
    enabled: true
    password: abc123xx
  master:
    persistence:
      enabled: true
      existingClaim: ""
      size: 2Gi
Hey @Aydemir,

Sadly that isn't an official n8n configuration repository, and it looks like some options might be out of date. For example, the process option mentions own, which shouldn't be used from v1 onwards; main is the option now.

I am pretty sure this is going to be an issue with your configuration. I would start with just one main instance, without queue mode, to confirm that works, then slowly introduce more nodes once you are happy with it.

The error you are describing, though, is something I have seen when there are multiple main instances involved, so it is always worth simplifying the install first.
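As a starting point, a stripped-down values file for that single-instance test could look something like this (a sketch using the key names from the values.yaml you shared; I'm assuming extraEnv takes a plain name/value map, so adapt as needed):

# Sketch: one main instance, regular (non-queue) execution mode.
replicaCount: 1

autoscaling:
  enabled: false # keep the main deployment pinned to exactly one replica

scaling:
  enabled: false # disables queue mode, so no worker or webhook deployments

extraEnv:
  # regular is the default mode; setting it explicitly for clarity
  EXECUTIONS_MODE: regular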

Thanks Jon. Is there an official helm repository for n8n? I used that one for the installation, but afterwards I changed the environment variables according to the official documentation. I sent you my environment variables. Are there any wrong settings there?

Hey @Aydemir,

We don’t have an official helm chart at the moment but we do have different examples in our docs that you can base your install on.

Your settings looked ok from an n8n point of view, apart from what I have already mentioned. I really would start out with a smaller install to make sure it works, then scale out.

Thanks Jon, I will change from queue to regular mode and test again.

Hi Jon,

I made some changes to understand the duplicate Schedule Trigger cron executions:

- I removed the n8n worker deployment.
- I changed the execution mode from queue to regular.
- With 1 replica of the n8n deployment, everything is fine.
- With 2 replicas, I can see two cron executions starting at the same time.
- With 3 replicas, I can see three cron executions starting at the same time.

Based on that, is it possible to scale n8n without getting multiple executions of the same workflow? Or should I go back to queue mode?

Hey @Aydemir,

We only support 1 main instance of n8n, so if you have replicas I would expect to see multiple runs of the same schedule. Right now you need only 1 main instance of n8n plus multiple workers / webhook workers to handle the load. You could then have a passive standby of the main instance that you fire up if the main instance goes offline.

We are working on improving this in the future.
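In values terms, that would mean keeping the main deployment at exactly one replica and scaling only the workers, roughly like this (again a sketch against the values.yaml you shared, not a tested configuration):

# Sketch: queue mode with a single main instance.
# The schedule trigger only ever runs in the one main; workers scale out.
replicaCount: 1

autoscaling:
  enabled: false # never let the HPA add a second main instance

scaling:
  enabled: true # queue mode stays on
  worker:
    count: 2 # scale workers, not the main, to handle load
    concurrency: 2
  webhook:
    enabled: true # dedicated webhook instances off the main process
    count: 1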

Hi Jon,
I see, thanks again

Hi Jon,

What can you say about the log below? I see that the n8n main pod restarted because of it.

2023-09-15T01:35:08.716Z | debug | Lazy Loading credentials and nodes from n8n-nodes-base "{\n credentials: 344,\n nodes: 431,\n file: 'DirectoryLoader.js',\n function: 'loadAll'\n}"
2023-09-15T01:35:33.406Z | info  | Stopping n8n... "{ file: 'start.js', function: 'stopProcess' }"
2023-09-15T01:35:33.407Z | error | Error: There was an error shutting down n8n. "{ file: 'ErrorReporterProxy.js', function: 'report' }"
2023-09-15T01:35:33.407Z | error | TypeError: Cannot read properties of undefined (reading 'removeAllQueuedWorkflowActivations') "{ file: 'ErrorReporterProxy.js', function: 'report' }"
