MySQL corruption when upgrading from 0.216.3 to 0.217.0

Describe the issue/error/question

After upgrading from 0.216.3 to 0.217.0 (or any newer version), the database became corrupted.

What is the error message (if any)?

/usr/sbin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2e) [0x5628a03b741e]
/usr/sbin/mysqld(handle_fatal_signal+0x31b) [0x56289f7be08b]
/lib/x86_64-linux-gnu/ [0x7f9ca17fd140]
/usr/sbin/mysqld(dtuple_convert_big_rec(dict_index_t*, upd_t*, dtuple_t*)+0xb67) [0x5628a062c2e7]
/usr/sbin/mysqld(btr_cur_pessimistic_update(unsigned long, btr_cur_t*, unsigned long**, mem_block_info_t**, mem_block_info_t*, big_rec_t**, upd_t*, unsigned long, que_thr_t*, unsigned long, unsigned long, mtr_t*, btr_pcur_t*)+0x3cf) [0x5628a0513a5f]
/usr/sbin/mysqld(+0x339cc7c) [0x5628a07d8c7c]
/usr/sbin/mysqld(+0x339d3f2) [0x5628a07d93f2]
/usr/sbin/mysqld(row_undo_mod(undo_node_t*, que_thr_t*)+0xc9f) [0x5628a07dcbdf]
/usr/sbin/mysqld(row_undo_step(que_thr_t*)+0x52) [0x5628a07d8122]
/usr/sbin/mysqld(que_run_threads(que_thr_t*)+0x988) [0x5628a0761b78]
/usr/sbin/mysqld(+0x340ab1b) [0x5628a0846b1b]
/usr/sbin/mysqld(trx_rollback_or_clean_recovered(unsigned long)+0x35) [0x5628a0847815]
/usr/sbin/mysqld(trx_recovery_rollback_thread()+0x30) [0x5628a0847a80]
/usr/sbin/mysqld(std::thread::_State_impl<std::thread::_Invoker<std::tuple<Runnable, void (*)()> > >::_M_run()+0xa5) [0x5628a05707f5]
/usr/lib/x86_64-linux-gnu/ [0x7f9ca12bbb2f]
/lib/x86_64-linux-gnu/ [0x7f9ca17f1ea7]
/lib/x86_64-linux-gnu/ [0x7f9ca0fc7aef]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): Connection ID (thread ID): 0

The manual page at contains
information that should help you find out what is causing the crash.

This error seems related to this MySQL bug:

Any idea how to workaround this?

Information on your n8n setup

  • n8n version: 0.217.0 or newest version
  • Database you’re using: MySQL 8 provided by Scaleway
  • Running n8n via Docker

Some additional information: when upgrading from 0.216.3 to 0.217.0 (or any newer version), the database became corrupted. At the moment we're stuck on 0.216.3 because it is the latest version that works for us; we are not able to update.

Hey @Matteo,

Welcome to the community :raised_hands:

What version of MySQL are you running? It looks like you have done some digging already; I can see a link to a post about a MySQL bug that occurs when running in a cluster, and I am not sure if we can actually do anything about that, as we just run some pretty simple queries.

I would probably start again: export all the workflows and credentials using the CLI, make a new database, then start up n8n to do the initial setup for user management again, import the workflows and credentials, and try the upgrade again.
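The steps above could be scripted roughly like this. The n8n CLI export/import commands are real, but the container names, file paths, and image assumptions are examples to adapt to your own setup:

```shell
# Sketch of the export / new-db / re-import cycle. Container names ("n8n",
# "n8n-new") and paths are examples only; adjust to your deployment.
backup_dir="${TMPDIR:-/tmp}/n8n-backup"
mkdir -p "$backup_dir"

# 1. Export everything from the old (0.216.3) instance.
docker exec n8n n8n export:workflow    --all --output=/home/node/workflows.json
docker exec n8n n8n export:credentials --all --decrypted --output=/home/node/credentials.json
docker cp n8n:/home/node/workflows.json   "$backup_dir/"
docker cp n8n:/home/node/credentials.json "$backup_dir/"

# 2. Point the DB_MYSQLDB_* env vars at a fresh database, start the new
#    version, and complete the initial user-management setup in the UI.

# 3. Import into the new instance.
docker cp "$backup_dir/workflows.json"   n8n-new:/home/node/
docker cp "$backup_dir/credentials.json" n8n-new:/home/node/
docker exec n8n-new n8n import:workflow    --input=/home/node/workflows.json
docker exec n8n-new n8n import:credentials --input=/home/node/credentials.json
```

Note that `--decrypted` writes credentials in plain text, so treat the backup directory accordingly.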

Hello Jon! Our MySQL version is:

Server version: 8.0.21 MySQL Community Server - GPL

Yep, we have already been digging a lot, but we didn't find any working solution. Honestly, we hit this issue even when trying a standalone (non-cluster) MySQL configuration. It seems like there are some specific queries introduced in versions > 0.217.x that trigger this bug (most likely some kind of migration query). Would it be possible to have a list of these specific queries? Or could you point me to the right place to find them? Or maybe it would also be worth trying some MySQL tuning? Do you have any idea which parameters we could work on?
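For what it's worth, the crash frame `dtuple_convert_big_rec` is the InnoDB routine that moves oversized columns to off-page storage when a row does not fit in a page, so one cheap diagnostic (not a fix, and the schema name `n8n` below is only an example) is to look at the row formats in use:

```shell
# Diagnostic sketch: inspect InnoDB row-format settings. Schema name "n8n"
# is an example; add -u/-p/-h options as your setup requires.
db=n8n
mysql -e "SHOW VARIABLES LIKE 'innodb_default_row_format';"
mysql -e "SELECT table_name, row_format FROM information_schema.tables WHERE table_schema = '$db';"
```

If any n8n tables still use an old `Compact`/`Redundant` row format, that would at least narrow down where the big-record path is being taken.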

Hey @Matteo,

I don’t think MySQL tuning would help if you are hitting that error. If you are also hitting the same error on a standalone instance, it seems unlikely that the bug you found is related, as it only appears to affect clusters, so there could be more to this.

We do have some migration queries, which can all be found here, but we have other users on MySQL who are not reporting this issue, so I suspect there could also be something environmental at play.

How many execution records do you have in your database? That is typically where we see issues, so it could be a case of clearing those down, but to be honest I still think the best option is to export, make a new db, import, and see if that has the same issue.
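A quick way to check the execution count and trim it down (the `execution_entity` table is where n8n stores executions; the schema name and image tag below are examples):

```shell
# Count stored executions. Schema name "n8n" is an example; add
# -u/-p/-h options as your setup requires.
mysql -e "SELECT COUNT(*) FROM execution_entity;" n8n

# If the count is large, n8n's built-in pruning (documented env vars)
# can clear old executions before the next upgrade attempt. 168 h = 7 days.
prune_max_age_hours=168
docker run -d --name n8n \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=$prune_max_age_hours \
  n8nio/n8n:0.216.3
```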

Ok, just another question: I've read in several messages that you seem to be planning to drop MySQL support in the future. Could you please confirm whether that is true?

I’m referring to this message:

Because if we are going to export/import from the UI to bypass this MySQL crash, maybe it is worth migrating to PostgreSQL now instead of waiting for the decommissioning. Many thanks.

Hey @Matteo,

We have talked internally about dropping support for MySQL in the future, but no decision has been made yet. If it were me, I would use Postgres just to be safe, which oddly enough is what I did when I migrated my own instances away from SQLite.

I did do a test with MySQL 8 earlier today though, and I was able to upgrade without any issues. I am going to think about what we can try to debug this a bit more. I know we have an option to debug database connections/queries, so that could be worth doing.
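A sketch of turning that query logging on (the `DB_LOGGING_*` env vars are documented; the container name and image tag here are examples), which should make the exact statement running at crash time visible in the container logs:

```shell
# Enable n8n's database query logging so the offending statement shows up
# in the logs. Container name and image tag are examples only.
db_log_options=all
docker run -d --name n8n-debug \
  -e DB_LOGGING_ENABLED=true \
  -e DB_LOGGING_OPTIONS=$db_log_options \
  n8nio/n8n:0.217.0
docker logs n8n-debug
```

With that enabled, the last query printed before MySQL drops the connection should be the one triggering the rollback crash.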

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.