Describe the bug
I built a few workflows that use a webhook as the starting point. When I execute the workflows manually, everything works. When I execute them via the webhook in test mode, everything also works. The issue arises when I switch to production mode: the webhook just doesn't fire and throws an error. In Workflow Executions the run is marked Failed with: "Workflow Execution finished with an error. Unable to find data of execution "{number}" in database. Aborting execution." So I can't even inspect what went wrong.
Expected behavior
The production webhook should fire and execute the workflow with the incoming request data.
Environment (please complete the following information):
- OS: Ubuntu Linux 20.04.6 LTS
- n8n Version: 1.8.2
- Node.js Version: v10.19.0
- Database system: PostgreSQL 12.16, with Redis connected
- Operation mode: queue
- Installed using: Docker
Additional context
Postgres volume: /var/lib/postgresql/data
Set up on a subdomain, but didn't do anything additional with SSL.
docker postgres logs:
2023-10-19 17:17:48.012 UTC [40271] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:48.246 UTC [40274] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:48.246 UTC [40274] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.
2023-10-19 17:17:48.246 UTC [40274] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:48.424 UTC [40271] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:48.424 UTC [40271] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.
2023-10-19 17:17:48.424 UTC [40271] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:50.169 UTC [40271] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:50.169 UTC [40271] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.
database records:
n8n_db=# SELECT * FROM public.workflow_statistics WHERE "workflowId" = '1SOPOpWbvkYCgKKO' AND name = 'data_loaded';
 count |       latestEvent       |    name     |    workflowId
-------+-------------------------+-------------+------------------
     1 | 2023-10-15 12:09:25.496 | data_loaded | 1SOPOpWbvkYCgKKO
(1 row)
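For context: the duplicate-key errors mean the statistics row for this workflow already exists and a plain INSERT is being retried, so these log lines may be noise rather than the root cause. As a sketch only (not necessarily what n8n does internally), an insert like this can be made idempotent in Postgres with an upsert, using the same columns and hypothetical values from the rows above:

```sql
-- Hypothetical upsert: instead of failing on the existing
-- ("workflowId", name) primary key, bump the counter in place.
INSERT INTO public.workflow_statistics ("count", "latestEvent", "name", "workflowId")
VALUES (1, NOW(), 'data_loaded', '1SOPOpWbvkYCgKKO')
ON CONFLICT ON CONSTRAINT pk_workflow_statistics
DO UPDATE SET "count" = workflow_statistics."count" + 1,
              "latestEvent" = EXCLUDED."latestEvent";
```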
yml file:
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=my_redacted_password
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
  postgres:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_DB=n8n_db
      - POSTGRES_USER=n8n_user
      - POSTGRES_PASSWORD=my_redacted_password
    volumes:
      - postgres_data:/var/lib/postgresql/data # This line is for data persistence
  redis:
    image: redis:latest
    restart: always
    ports:
      - "6379:6379"
  n8n_worker:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: worker
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true
  postgres_data:
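One thing I noticed while re-reading the compose file: the n8n_worker service only receives the Redis and encryption-key variables, not the DB_* ones, so the worker may fall back to n8n's default database (SQLite) instead of the shared Postgres. That could explain why execution data can't be found in production (queue) mode while test mode works. A sketch of the worker service with the same Postgres settings copied from the main n8n service above (untested, variable names taken from that service):

```yaml
  n8n_worker:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: worker
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      # Same database settings as the main n8n service, so the worker
      # reads and writes execution data in the same Postgres instance:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=my_redacted_password
```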
Caddyfile:
myredactedsubdomain.myredacteddomain.com {
    reverse_proxy n8n:5678 {
        flush_interval -1
    }
}
I tried troubleshooting this with GPT-4, which had me run the statement below, but even after restarting Docker the problem persists:
ALTER TABLE public.workflow_statistics DROP CONSTRAINT pk_workflow_statistics;
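In case dropping that primary key causes further trouble, here is a sketch of how the constraint could be restored, with the column list inferred from the duplicate-key DETAIL lines in the logs above (verify against your schema before running):

```sql
-- Recreate the primary key on ("workflowId", name), matching the key
-- reported in the Postgres DETAIL lines.
ALTER TABLE public.workflow_statistics
  ADD CONSTRAINT pk_workflow_statistics PRIMARY KEY ("workflowId", "name");
```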