I’m not really sure what’s happening, but every day my team’s workflows are just gone. We investigated a bit, and it looks like a migration is corrupting the DB, but we don’t know how to fix it. We are manually recreating the workflows from backup every day. Our n8n is on the latest version. We need help fast.
Does it happen when you restart the container?
If that’s the case, you likely don’t have your Docker volumes set up correctly.
Please share the details of your configuration.
Hey liam, thanks for your attention!
If I run a docker compose down and then up, everything starts normally.
But at some point it tries to run the migrations again and completely breaks everything.
In my docker-compose.yml, under the n8n-worker service, I added:
deploy:
replicas: 2
But my Docker is not running in Swarm mode.
Could this be the issue?
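For what it’s worth, a sketch of how `deploy.replicas` behaves outside Swarm: the legacy v1 `docker-compose` binary ignored the `deploy:` key unless you deployed to a Swarm, but the newer `docker compose` (v2) follows the Compose specification and does honor `deploy.replicas` on a plain single host. So a fragment like this can start two copies of the service even without Swarm mode:

```yaml
services:
  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    # Compose v2 starts two containers from this one service,
    # Swarm mode or not; legacy docker-compose v1 ignored this key.
    deploy:
      replicas: 2
```

If both replicated containers share the same bind mount and the same database, they will also start up side by side, which is worth keeping in mind when reading the migration errors below.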
Could you share your complete compose file? Are you using queue mode?
n8n-editor:
  image: n8nio/n8n:latest
  container_name: n8n-editor
  mem_limit: 1g
  cpus: 0.5
  volumes:
    - ./n8n-data:/home/node/.n8n
  ports:
    - xxxx:xxxx
  depends_on:
    - postgres-n8n
    - redis
  networks:
    - n8n-queue
  restart: always
  environment:
    N8N_ALLOW_ORIGIN: '*'
    N8N_EDITOR_BASE_URL: x
    WEBHOOK_URL: x
    DB_TYPE: postgresdb
    DB_POSTGRESDB_HOST: x
    DB_POSTGRESDB_PORT: xxxx
    DB_POSTGRESDB_DATABASE: x
    DB_POSTGRESDB_USER: x
    DB_POSTGRESDB_PASSWORD: x
    QUEUE_BULL_REDIS_HOST: x
    QUEUE_BULL_REDIS_PORT: x
    N8N_RUNNERS_ENABLED: true
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: true
    EXECUTIONS_MODE: queue
    GENERIC_TIMEZONE: America/Sao_Paulo
    N8N_COMMUNITY_PACKAGES_ALLOW: true
    N8N_DEFAULT_BINARY_DATA_MODE: filesystem
    OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS: true
    N8N_TRUST_PROXY: true
    N8N_API_RATE_LIMIT: false
    N8N_PROXY_HOPS: "1"
    N8N_PUSH_BACKEND: "websocket"
And yes, we are using queue mode.
Can you share the compose for the workers too?
When you say it starts normally, you mean the workflows are there when it first restarts but then disappear?
Are you able to check the logs when this happens? Change the log level if needed.
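In case it helps, the log level can be raised via an environment variable; a minimal fragment to add under the `environment:` block of the `n8n-editor` service:

```yaml
environment:
  # Valid levels are error, warn, info, debug; debug is the most verbose.
  N8N_LOG_LEVEL: debug
```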
n8n-workers:
  image: n8nio/n8n:latest
  mem_limit: 1.5g
  cpus: 0.7
  volumes:
    - ./n8n-data:/home/node/.n8n
  depends_on:
    - postgres-n8n
    - redis
  networks:
    - n8n-queue
  restart: always
  environment:
    N8N_ALLOW_ORIGIN: '*'
    N8N_EDITOR_BASE_URL: x
    WEBHOOK_URL: x
    DB_TYPE: postgresdb
    DB_POSTGRESDB_HOST: x
    DB_POSTGRESDB_PORT: x
    DB_POSTGRESDB_DATABASE: x
    DB_POSTGRESDB_USER: x
    DB_POSTGRESDB_PASSWORD: x
    QUEUE_BULL_REDIS_HOST: x
    QUEUE_BULL_REDIS_PORT: x
    N8N_RUNNERS_ENABLED: true
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: true
    EXECUTIONS_MODE: queue
    GENERIC_TIMEZONE: America/Sao_Paulo
    N8N_COMMUNITY_PACKAGES_ALLOW: true
    N8N_DEFAULT_BINARY_DATA_MODE: filesystem
    OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS: true
    N8N_TRUST_PROXY: true
    N8N_API_RATE_LIMIT: false
    N8N_PROXY_HOPS: "1"
    N8N_PUSH_BACKEND: "websocket"
  command: worker
At some point n8n re-runs the migrations, and in the process it deletes tables and effectively resets the database. When that happens, I have to drop the database and rebuild the n8n-editor service.
Logs:
n8n-editor | Migrations in progress, please do NOT stop the process.
n8n-editor | Starting migration InitialMigration1587669153312
n8n-editor | Finished migration InitialMigration1587669153312
n8n-editor | Starting migration WebhookModel1589476000887
n8n-editor | Finished migration WebhookModel1589476000887
n8n-editor | Starting migration CreateIndexStoppedAt1594828256133
n8n-editor | Finished migration CreateIndexStoppedAt1594828256133
n8n-editor | Starting migration MakeStoppedAtNullable1607431743768
n8n-editor | Finished migration MakeStoppedAtNullable1607431743768
n8n-editor | Starting migration AddWebhookId1611144599516
n8n-editor | Finished migration AddWebhookId1611144599516
n8n-editor | Starting migration CreateTagEntity1617270242566
n8n-editor | Migration "CreateTagEntity1617270242566" failed, error: foreign key constraint "FK_31140eb41f019805b40d0087449" cannot be implemented
n8n-editor | There was an error running database migrations
n8n-editor | foreign key constraint "FK_31140eb41f019805b40d0087449" cannot be implemented
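One observation about these logs: n8n records applied migrations in a `migrations` table, and starting over from `InitialMigration1587669153312` means it found that table empty or missing, i.e. it was pointed at a fresh (or partially wiped) database. A common cause, assuming the standard `postgres` image, is a Postgres container whose data directory is not on a persistent volume; a minimal sketch (the service name `postgres-n8n` is taken from the compose file above, credentials are placeholders):

```yaml
services:
  postgres-n8n:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n       # placeholder
      POSTGRES_PASSWORD: x     # placeholder
      POSTGRES_DB: n8n         # placeholder
    volumes:
      # Without a volume here, every recreate of the container
      # starts Postgres with an empty data directory.
      - pg-data:/var/lib/postgresql/data

volumes:
  pg-data:
```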
Just to emphasize, we use this:
deploy:
  replicas: 2
@liam Any idea what might be causing this problem?
I removed this from docker-compose.yml:
deploy:
  replicas: 2
Even after removing the replicas, I used this command:
docker-compose up -d --scale worker=2
to get 2 workers with concurrency 10. The error still occurred, but this time it was different: it took 2 days to show up.
Maybe it’s the concurrency per worker? Maybe it’s the number of workers?
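On the concurrency question: if you want to pin it explicitly rather than rely on the default, the worker command accepts a flag; a sketch for the worker service above:

```yaml
# Each worker processes up to 10 jobs in parallel (10 is n8n's default).
command: worker --concurrency=10
```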
Docker stats:
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
