Schedule Trigger cron crashes and fails

Describe the problem/error/question

Workflow execution finished with error.

What is the error message (if any)?

[{"resultData":"1"},{"error":"2","runData":"3"},{"message":"4","stack":"5"},
{"Schedule Trigger":"6","HTTP Request":"7"},
"Unable to find data of execution "952931" in database.
Aborting execution.","Error: Unable to find data of execution "952931" in database.
Aborting execution.\n at Queue.onFailed (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/job.js:516:18)\n
at Queue.emit (node:events:525:35)\n
at Queue.emit (node:domain:489:12)\n
at Object.module.exports.emitSafe (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/utils.js:50:20)\n
at EventEmitter.messageHandler (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/queue.js:474:15)\n
at EventEmitter.emit (node:events:513:28)\n
at EventEmitter.emit (node:domain:489:12)\n
at DataHandler.handleSubscriberReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:80:32)\n
at DataHandler.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:47:18)\n
at JavascriptRedisParser.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:21:22)",["8"],["9"],{"startTime":0,"executionTime":0,"source":"10","executionStatus":"11"},
{"startTime":0,"executionTime":0,"source":"12","executionStatus":"11"},[null],"unknown",[null]]

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 1.4.1
  • **Database (default: SQLite):** PostgreSQL
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** main
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Kubernetes
  • **Operating system:**

Hey @Aydemir,

I am going to need some more information on this one as I use that node for a lot of my workflows without issue :slight_smile:

Where are you seeing this error?
Does the UI show any message?
Does this always happen?
Does this error only happen if you try and cancel a running workflow in the UI?
Have you tried setting up the latest version of n8n in a staging environment and running the same workflow to see if this is something that has already been resolved?

Hi Jon,

We use 1.12.2 in queue mode.
The error is:

[{"resultData":"1"},{"error":"2","runData":"3"},{"message":"4","stack":"5"},{},"Unable to find data of execution "972191" in database.
Aborting execution.","Error: Unable to find data of execution "972191" in database.
Aborting execution.\n
at Queue.onFailed (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/job.js:516:18)\n
at Queue.emit (node:events:517:28)\n at Queue.emit (node:domain:489:12)\n
at Object.module.exports.emitSafe (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/utils.js:50:20)\n
at EventEmitter.messageHandler (/usr/local/lib/node_modules/n8n/node_modules/bull/lib/queue.js:474:15)\n
at EventEmitter.emit (node:events:517:28)\n at EventEmitter.emit (node:domain:489:12)\n
at DataHandler.handleSubscriberReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:80:32)\n
at DataHandler.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:47:18)\n
at JavascriptRedisParser.returnReply (/usr/local/lib/node_modules/n8n/node_modules/ioredis/built/DataHandler.js:21:22)"]

Where are you seeing this error?
In the UI, and I also found the error log in the database.
Does the UI show any message?
Yes, you can see it in the screenshot.
Does this always happen?
Rarely
Does this error only happen if you try and cancel a running workflow in the UI?
No
Have you tried setting up the latest version of n8n in a staging environment and running the same workflow to see if this is something that has already been resolved?

Are your workers talking to the main database, and if you have multiple workers, are they on the same version?

Can you also confirm that you only have one main n8n instance running?

All of the deployments are on version 1.12.2.

We have only one worker (1 pod)
I have only one main n8n instance.

I found that the failed workflow execution logs come from the main n8n instance, while the successful execution logs come from the n8n worker. Maybe the main n8n or worker n8n configuration is wrong. Here are the logs from Elastic.

Failed execution log:

Successful execution log:

It seems that the main n8n instance is trying to execute the workflow, but it shouldn't do that.

My worker deployment is:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: prod-n8n-worker
  namespace: prod-n8n
  labels:
    app.kubernetes.io/instance: prod-n8n
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: n8n
    app.kubernetes.io/version: 0.236.3
    helm.sh/chart: n8n-0.10.0
  annotations:
    deployment.kubernetes.io/revision: '12'
    meta.helm.sh/release-name: prod-n8n
    meta.helm.sh/release-namespace: prod-n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: prod-n8n
      app.kubernetes.io/name: n8n
      app.kubernetes.io/type: worker
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: prod-n8n
        app.kubernetes.io/name: n8n
        app.kubernetes.io/type: worker
      annotations:
        checksum/config: 35ebc75477cb06daa7b587880948a99fa4f7b91f2c21e106bafd523326478e6c
    spec:
      restartPolicy: Always
      serviceAccountName: prod-n8n
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - name: fulentbit-sidecar
          image: 'fluent/fluent-bit:2.1.8'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 10m
              memory: 128Mi
          volumeMounts:
            - name: fluentbit-config
              mountPath: /fluent-bit/etc/fluent-bit.conf
              subPath: fluent-bit.conf
            - name: n8n-log
              mountPath: /logs
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - resources: {}
          terminationMessagePath: /dev/termination-log
          name: n8n
          command:
            - n8n
          env:
            - name: N8N_CONFIG_FILES
              value: /n8n-config/config.json
            - name: QUEUE_BULL_REDIS_HOST
              value: prod-n8n-redis-master
            - name: EXECUTIONS_MODE
              value: queue
            - name: QUEUE_BULL_REDIS_PASSWORD
              value: xxxxxxxx
            - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
              value: 'true'
            - name: DB_POSTGRESDB_HOST
              value: n8n-postgresql-postgresql-ha-pgpool
            - name: DB_POSTGRESDB_PASSWORD
              value: xxxxxxxxxx
            - name: DB_POSTGRESDB_USER
              value: postgres
            - name: DB_POSTGRESDB_PORT
              value: '5432'
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_TYPE
              value: postgresdb
            - name: N8N_EDITOR_BASE_URL
              value: 'https://wfautomation.xxxx'
            - name: VUE_APP_URL_BASE_API
              value: 'https://wfautomation.xxxx'
            - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
              value: 'false'
            - name: N8N_PORT
              value: '5678'
            - name: N8N_PROTOCOL
              value: http
            - name: N8N_HOST
              value: localhost
            - name: N8N_ENCRYPTION_KEY
              value: xxxxxxxxxxx
            - name: WEBHOOK_URL
              value: 'https://wfautomation-webhooks.xxx'
            - name: GENERIC_TIMEZONE
              value: Europe/Istanbul
            - name: N8N_LOG_OUTPUT
              value: 'console,file'
            - name: N8N_LOG_FILE_LOCATION
              value: /logs/n8n.log
            - name: N8N_LOG_FILE_MAXSIZE
              value: '50'
            - name: N8N_LOG_FILE_MAXCOUNT
              value: '60'
            - name: N8N_LOG_LEVEL
              value: debug
            - name: restart
              value: '1'
          securityContext: {}
          ports:
            - name: http
              containerPort: 5678
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /root/.n8n
            - name: n8n-log
              mountPath: /logs
            - name: config-volume
              mountPath: /n8n-config
          terminationMessagePolicy: File
          image: 'n8nio/n8n:1.12.2'
          args:
            - worker
            - '--concurrency=2'
      serviceAccount: prod-n8n
      volumes:
        - name: data
          emptyDir: {}
        - name: config-volume
          configMap:
            name: prod-n8n
            defaultMode: 420
        - name: n8n-log
          emptyDir: {}
        - name: fluentbit-config
          configMap:
            name: fluentbit-config
            items:
              - key: fluent-bit.conf
                path: fluent-bit.conf
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: Recreate
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

My main n8n deployment is:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: prod-n8n
  namespace: prod-n8n
  labels:
    app.kubernetes.io/instance: prod-n8n
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: n8n
    app.kubernetes.io/version: 0.236.3
    helm.sh/chart: n8n-0.10.0
  annotations:
    deployment.kubernetes.io/revision: '14'
    meta.helm.sh/release-name: prod-n8n
    meta.helm.sh/release-namespace: prod-n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: prod-n8n
      app.kubernetes.io/name: n8n
      app.kubernetes.io/type: master
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: prod-n8n
        app.kubernetes.io/name: n8n
        app.kubernetes.io/type: master
      annotations:
        checksum/config: 35ebc75477cb06daa7b587880948a99fa4f7b91f2c21e106bafd523326478e6c
    spec:
      restartPolicy: Always
      serviceAccountName: prod-n8n
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - name: fulentbit-sidecar
          image: 'fluent/fluent-bit:2.1.8'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 10m
              memory: 128Mi
          volumeMounts:
            - name: fluentbit-config
              mountPath: /fluent-bit/etc/fluent-bit.conf
              subPath: fluent-bit.conf
            - name: n8n-log
              mountPath: /logs
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 256Mi
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
              scheme: HTTP
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          lifecycle: {}
          name: n8n
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
              scheme: HTTP
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          env:
            - name: N8N_CONFIG_FILES
              value: /n8n-config/config.json
            - name: QUEUE_BULL_REDIS_HOST
              value: prod-n8n-redis-master
            - name: EXECUTIONS_MODE
              value: queue
            - name: QUEUE_BULL_REDIS_PASSWORD
              value: xxxxxx
            - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
              value: 'true'
            - name: DB_POSTGRESDB_HOST
              value: n8n-postgresql-postgresql-ha-pgpool
            - name: DB_POSTGRESDB_PASSWORD
              value: xxxxxx
            - name: DB_POSTGRESDB_USER
              value: postgres
            - name: DB_POSTGRESDB_PORT
              value: '5432'
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_TYPE
              value: postgresdb
            - name: N8N_EDITOR_BASE_URL
              value: 'https://wfautomation.xxxx'
            - name: VUE_APP_URL_BASE_API
              value: 'https://wfautomation.xxxx/'
            - name: N8N_DISABLE_PRODUCTION_MAIN_PROCESS
              value: 'true'
            - name: N8N_PORT
              value: '5678'
            - name: N8N_PROTOCOL
              value: http
            - name: N8N_HOST
              value: localhost
            - name: N8N_ENCRYPTION_KEY
              value: c6e079a6b3117f03915c
            - name: WEBHOOK_URL
              value: 'https://wfautomation-webhooks.xxxx/'
            - name: N8N_LOG_LEVEL
              value: debug
            - name: GENERIC_TIMEZONE
              value: Europe/Istanbul
            - name: N8N_METRICS
              value: 'true'
            - name: N8N_METRICS_INCLUDE_DEFAULT_METRICS
              value: 'true'
            - name: N8N_METRICS_INCLUDE_CACHE_METRICS
              value: 'true'
            - name: N8N_METRICS_INCLUDE_MESSAGE_EVENT_BUS_METRICS
              value: 'true'
            - name: N8N_METRICS_INCLUDE_WORKFLOW_ID_LABEL
              value: 'true'
            - name: N8N_METRICS_INCLUDE_NODE_TYPE_LABEL
              value: 'true'
            - name: N8N_METRICS_INCLUDE_CREDENTIAL_TYPE_LABEL
              value: 'true'
            - name: N8N_METRICS_INCLUDE_API_ENDPOINTS
              value: 'true'
            - name: N8N_METRICS_INCLUDE_API_PATH_LABEL
              value: 'true'
            - name: N8N_METRICS_INCLUDE_API_METHOD_LABEL
              value: 'true'
            - name: N8N_METRICS_INCLUDE_API_STATUS_CODE_LABEL
              value: 'true'
            - name: N8N_LOG_OUTPUT
              value: 'console,file'
            - name: N8N_LOG_FILE_LOCATION
              value: /logs/n8n.log
            - name: N8N_LOG_FILE_MAXSIZE
              value: '50'
            - name: N8N_LOG_FILE_MAXCOUNT
              value: '60'
          securityContext: {}
          ports:
            - name: http
              containerPort: 5678
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /root/.n8n
            - name: n8n-log
              mountPath: /logs
            - name: config-volume
              mountPath: /n8n-config
          terminationMessagePolicy: File
          image: 'n8nio/n8n:1.12.2'
      serviceAccount: prod-n8n
      volumes:
        - name: data
          emptyDir: {}
        - name: n8n-log
          emptyDir: {}
        - name: config-volume
          configMap:
            name: prod-n8n
            defaultMode: 420
        - name: fluentbit-config
          configMap:
            name: fluentbit-config-manager
            items:
              - key: fluent-bit.conf
                path: fluent-bit.conf
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: Recreate
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

and my config.json is:

config.json
{
  "executions": {
    "pruneData": "true",
    "pruneDataMaxAge": 3760
  }
}

main n8n run command: n8n

worker n8n run command: n8n worker --concurrency=2

:thinking: That is interesting, I am not sure what is going on there. I thought it was the main instance that actually handles scheduled executions.

I am not that familiar with k8s, and it could just be the formatting, but shouldn't

volumeMounts:
  - name: data
    mountPath: /root/.n8n

be

volumeMounts:
  - name: data
    mountPath: /home/node/.n8n

You also have N8N_DISABLE_PRODUCTION_MAIN_PROCESS set twice, although that shouldn't cause an issue.
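Duplicated env entries like that are easy to miss in a long manifest. As an illustrative sketch (not an official n8n or Kubernetes tool), a few lines of Python can count the `name` fields of a container's env list; in practice the last occurrence of a duplicated name tends to win, silently overriding the earlier value:

```python
# Sketch: flag duplicate env var names in a container spec.
from collections import Counter

# Trimmed-down env list mirroring the worker manifest above.
env = [
    {"name": "N8N_DISABLE_PRODUCTION_MAIN_PROCESS", "value": "true"},
    {"name": "EXECUTIONS_MODE", "value": "queue"},
    {"name": "N8N_DISABLE_PRODUCTION_MAIN_PROCESS", "value": "false"},
]

counts = Counter(e["name"] for e in env)
duplicates = sorted(name for name, n in counts.items() if n > 1)
print(duplicates)  # ['N8N_DISABLE_PRODUCTION_MAIN_PROCESS']
```

The same check can be run against the full env list extracted from the live Deployment.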

Does the main instance show any other errors? @krynble do you have any thoughts on this?

I see that the n8n log has error lines like these:

/logs $ cat n8n.log | grep -i error
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:19.514Z"}}
{"level":"error","message":"Problem with execution 968418: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:19.515Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:19.515Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:43.854Z"}}
{"level":"error","message":"Problem with execution 968430: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:43.855Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:56:43.855Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968455" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.339Z"}}
{"level":"error","message":"Problem with execution 968455: Unable to find data of execution "968455" in database. Aborting execution… Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.339Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968455" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.339Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968456" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.347Z"}}
{"level":"error","message":"Problem with execution 968456: Unable to find data of execution "968456" in database. Aborting execution… Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.347Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968456" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.347Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968457" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.351Z"}}
{"level":"error","message":"Problem with execution 968457: Unable to find data of execution "968457" in database. Aborting execution… Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.351Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968457" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T19:57:08.351Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968806" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.192Z"}}
{"level":"error","message":"Problem with execution 968806: Unable to find data of execution "968806" in database. Aborting execution… Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.192Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968806" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.192Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.312Z"}}
{"level":"error","message":"Problem with execution 968811: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.312Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.313Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.317Z"}}
{"level":"error","message":"Problem with execution 968812: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.317Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.317Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.321Z"}}
{"level":"error","message":"Problem with execution 968807: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.321Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.321Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.326Z"}}
{"level":"error","message":"Problem with execution 968810: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.326Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:09:04.326Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:12:49.863Z"}}
{"level":"error","message":"Problem with execution 968878: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:12:49.863Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:12:49.863Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:15:34.548Z"}}
{"level":"error","message":"Problem with execution 968903: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:15:34.549Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:15:34.549Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968910" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:16:14.102Z"}}
{"level":"error","message":"Problem with execution 968910: Unable to find data of execution "968910" in database. Aborting execution… Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:16:14.102Z"}}
{"level":"error","message":"Error: Unable to find data of execution "968910" in database. Aborting execution.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:16:14.102Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:38:04.678Z"}}
{"level":"error","message":"Problem with execution 969366: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:38:04.678Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T20:38:04.678Z"}}
{"level":"error","message":"Error: Cannot read properties of null (reading 'data')","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T21:05:07.092Z"}}
{"level":"error","message":"Problem with execution 969903: Cannot read properties of null (reading 'data'). Aborting.","metadata":{"file":"LoggerProxy.js","function":"error","timestamp":"2023-10-31T21:05:07.092Z"}}
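To tally which executions are failing (and later cross-check them against the UI or the database), the IDs can be pulled out of those JSON log lines programmatically. A minimal sketch, assuming the "Problem with execution NNN: …" message format stays stable; the sample lines below just mirror the grep output:

```python
# Sketch: extract failing execution IDs from n8n's JSON error log lines.
import json
import re

# Sample lines in the same shape as the grep output above.
lines = [
    '{"level":"error","message":"Problem with execution 968418: Cannot read properties of null (reading \'data\'). Aborting."}',
    '{"level":"error","message":"Error: Cannot read properties of null (reading \'data\')"}',
]

ids = []
for line in lines:
    msg = json.loads(line)["message"]
    m = re.search(r"Problem with execution (\d+)", msg)
    if m:
        ids.append(m.group(1))

print(ids)  # ['968418']
```

Running this over the full n8n.log would show whether the failures cluster around particular workflows or times.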

Hello @Aydemir

First I’d like to ask whether this is happening with all executions or just a few.

The error you reported (about execution data not being found in the database) happens if you have a database inconsistency; the execution creation involves 2 steps: metadata creation + actual data creation.

This error happens when the latter does not exist.

Is there any chance your database is unable to keep up and might be dropping connections? Do you have any logs in your postgres setup?

If you are running too many simultaneous executions, the database becomes a bottleneck and might fail to write part of the data, so I'd recommend giving your database more resources, if possible.
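The two-step creation described above can be illustrated with a toy model: one table for execution metadata, one for the payload, and an orphan check that reproduces the "unable to find data" situation when the second write is lost. This is only a sketch; SQLite stands in for Postgres, and the table and column names are illustrative, not n8n's actual schema:

```python
# Toy model of the inconsistency: metadata written in step 1, payload in
# step 2; if step 2 is dropped, later reads find metadata but no data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE execution_entity (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE execution_data (execution_id INTEGER, data TEXT)")

# Step 1: metadata rows created for both executions.
db.execute("INSERT INTO execution_entity VALUES (1, 'new'), (2, 'new')")
# Step 2: payload written only for execution 1 (the write for 2 was lost).
db.execute("INSERT INTO execution_data VALUES (1, '{}')")

# Orphan check: metadata without data -> the 'unable to find data' case.
orphans = db.execute("""
    SELECT e.id FROM execution_entity e
    LEFT JOIN execution_data d ON d.execution_id = e.id
    WHERE d.execution_id IS NULL
""").fetchall()
print(orphans)  # [(2,)]
```

An analogous LEFT JOIN against the real Postgres database (with the actual n8n table names for your version) would show whether the failing execution IDs from the logs are indeed missing their data rows.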

I think the problem is with PostgreSQL too. I installed Bitnami PostgreSQL with a 3-node HA setup and two Pgpool nodes in front of the PostgreSQL nodes in OpenShift. n8n uses Pgpool for its DB transactions. I think this setup is somehow causing n8n to fail. The PostgreSQL version is 15.4. I don't want to use a single PostgreSQL container since this is production. I will try to find out what else I can. We have only 4 active workflows, so I don't think too many executions are causing a PostgreSQL bottleneck.
Thanks a lot.