Migration failed after upgrading n8n to 0.213.0

Describe the issue/error/question

We're hitting a migration error after upgrading n8n from 0.208.1 to 0.213.0.

What is the error message (if any)?

Initializing n8n process

Migrations in progress, please do NOT stop the process.

Migration "DeleteExecutionsWithWorkflows1673268682475" failed, error: Can't write; duplicate key in table '#sql-24b6_b4770'

Information on your n8n setup

  • n8n version: 0.213.0
  • Database you’re using (default: SQLite): MySQL
  • Running n8n with the execution process [own(default), main]: own (default)
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

@Daniel_F_Jaramillo are you able to upgrade to 0.210.2?
Also, can you please check how many rows the execution_entity table has?
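A plain count should work on any of the supported databases:

SELECT COUNT(*) FROM execution_entity;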

My migration also failed, but I found that deleting everything in the execution_entity table helps (although you will not have any execution history afterwards).

I even set up an automation for this: every evening it deletes all executions that finished successfully.

DELETE FROM execution_entity WHERE finished = 1;

Hey @shlomi,

In your case, have you thought about enabling the pruning option? That automates deleting execution data once it is older than a configured age.
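If I remember right, it is controlled by these environment variables (worth double-checking the docs for your version):

EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336

EXECUTIONS_DATA_MAX_AGE is in hours, so 336 keeps roughly two weeks of executions.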

I would add that, while that may fix it, we would like to get to the bottom of this one so we can prevent it from happening in the future.

Hi @netroy. We were able to upgrade to 0.210.2; the execution_entity table has 121,494 rows. Should we clean it?

Hi @Jon
True, but I want to delete only executions that were successful. I don't want to delete errors, so I can debug them if necessary.

My database has grown to 3 GB! (I guess it has to do with the number of emails (Email Trigger) that go through the system.)
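For anyone wanting to see where the space goes on MySQL, something like this should list the biggest tables ('n8n' here is just a placeholder for your schema name):

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'n8n'
ORDER BY size_mb DESC;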

If everything works, I don't think it's necessary.

Same issue here! :frowning:

Hey @pbdco,

Which version are you upgrading from, how many records are in the execution_entity table, and which database are you using?

Hi @Jon

Upgrading from v0.200.1
Records in execution_entity: 1,300,000+ records (equivalent to ~22 GB)
DB: postgresql:11.13.0-debian-10-r58

After updating the image version to 0.213.0, the n8n pod was in a CrashLoopBackOff state, restarting continuously. The execution_entity table was completely unresponsive during the upgrade; it wasn't even possible to run a SELECT on it, while all the other tables were fine.

I ended up downgrading/upgrading to v0.210.2 without any problems.
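In case anyone wants to check whether they are in a similar situation, the Postgres equivalent for checking that table's size is something like:

SELECT pg_size_pretty(pg_total_relation_size('execution_entity'));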

Hey @pbdco,

Thanks for that. It looks like it may just need a really long time to process if you have a lot of data. What I would do is enable the data pruning options in n8n so that older data is removed, unless you need to keep it for compliance.

You can of course also follow the process above of manually deleting the data.

Hi @Jon, I truncated the execution_entity table, but I keep getting the same error upgrading to 0.213.0. It's likely because we have zero downtime and we keep getting new executions in the table. Should we stop the n8n process, clean the table, and then upgrade?
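For reference, the cleanup was just:

TRUNCATE TABLE execution_entity;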

I don't think you can do an upgrade without any downtime. The service would have to be restarted at some point, which would kick off the migrations, unless you are doing something crazy like pointing two instances at the same database and doing the upgrade on one, which would cause other issues.

@Jon why did the upgrade to 0.210.2 work fine while v0.213.0 is having these troubles? Is there any particular DB migration going on in this upgrade?

As a suggestion: in a Kubernetes scenario, the fact that no other n8n pods can be running during the upgrade should be considered beforehand, because it implies downtime.
If this is really the cause in version 0.213.0, it would be a great idea to include a notice in the release notes for such cases, so we can factor in the downtime when deciding on a rolling update or deployment strategy.

Thank you very much for your kind support, as always! :pray:

Hey @pbdco,

I have not tried to run n8n in k8s, but assuming you are using scaling mode with workers, the updates should be OK, as I would assume the main instance gets updated first. However, the migration in 0.211.0 changes a data type, which could be why there is an issue with this one: normally it would be a very quick change and nothing would notice, but as this is a larger change than usual it is taking a bit longer…

That is my theory anyway; I am sure @netroy might have some thoughts on this as well.

FYI: What I did in the end in order to upgrade to 0.213.0:

1- Upgrade from v0.200.1 to 0.210.2
2- Back up & truncate the execution_entity table (see the sketch below)
3- Upgrade from 0.210.2 to 0.211.2
4- Upgrade from 0.211.2 to 0.213.0 :checkered_flag:
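In case it helps others with step 2, one way to do the backup and truncate entirely inside the database (the backup table name is just an example, and on Postgres you may need TRUNCATE ... CASCADE if other tables reference this one):

-- keep a copy of the old executions in the same database
CREATE TABLE execution_entity_backup AS SELECT * FROM execution_entity;
-- then empty the live table so the migration has nothing to convert
TRUNCATE TABLE execution_entity;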