n8n won't restart after PostgreSQL database update: "migration failed" error

Describe the issue/error/question

I’m running n8n:

  • self-hosted on AWS Fargate (Docker)
  • using RDS Postgres and EFS for storing n8n data
  • version: latest

The RDS instance was restarted after an update while the n8n app was running. Since then, the n8n app keeps crashing at start-up.

What is the error message (if any)?

Migration "DeleteExecutionsWithWorkflows1673268682475" failed, error: relation "execution_entity" does not exist.

Running a Docker container on my laptop against the same database gives me the same error message.

I’ve looked into the database and the relation still exists. I’ve also looked in the migrations table, but I can’t find a DeleteExecutionsWithWorkflows1673268682475 row in it.
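For anyone hitting the same thing, checks along these lines in psql should confirm both points (a sketch, assuming the default public schema and the migrations table n8n keeps for TypeORM):

    -- Returns NULL if the relation really is missing from this schema
    SELECT to_regclass('public.execution_entity');

    -- Shows whether the failing migration has ever been recorded as applied
    SELECT *
    FROM migrations
    WHERE name = 'DeleteExecutionsWithWorkflows1673268682475';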

Does anyone have a clue how to resolve this without losing any data?
Many thanks

Hey @jc38,

Welcome to the community :tada:

That should be resolved now. Can you check using the latest release?


Hey @Jon

Worked like a charm, many thanks!


Hey @Jon, I’m running into a similar problem using the latest 0.213.0 (2023-01-27) on a Postgres DB. We already have:

        - name: EXECUTIONS_DATA_PRUNE
          value: "true"
        - name: EXECUTIONS_DATA_MAX_AGE
          value: "186"

2023-02-01T15:03:59.643Z | info     | Initializing n8n process "{ file: 'start.js', function: 'run' }"
2023-02-01T15:04:00.232Z | warn     | Migrations in progress, please do NOT stop the process. "{ file: 'migrationHelpers.js', function: 'logMigrationStart' }"
2023-02-01T15:04:00.233Z | debug    | Starting migration DeleteExecutionsWithWorkflows1673268682475 "{ file: 'migrationHelpers.js', function: 'logMigrationStart' }"
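Since the migration seems to hang at this point, one way to tell whether it is still doing work rather than silently failing is to look at the active queries on the database (a sketch, run against the same Postgres instance with psql):

    -- Long-running, non-idle statements; the migration's statement should
    -- show up here if it is still running
    SELECT pid, state, wait_event_type, wait_event, query_start, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY query_start;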

Any ideas how to recover from this?

Quick update on the above: we had a massive 300 GB execution_entity table full of workflow executions that never terminated, and the migration was stuck trying to apply schema changes to it. We’re investigating whether the non-terminating nodes are an issue on our end (i.e. node setup) or something deeper. If you’re facing similar problems, try truncating your execution_entity table (you will lose execution history) and continue with the migration.
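If anyone else ends up in the same situation, the size check and the truncate look roughly like this (a sketch; TRUNCATE permanently deletes all execution history, so dump the table first if you need it):

    -- How big the executions table has grown (includes indexes and TOAST data)
    SELECT pg_size_pretty(pg_total_relation_size('execution_entity'));

    -- Drops all rows; add CASCADE if other tables reference execution_entity
    TRUNCATE TABLE execution_entity;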