Unidentified error at MySQL level


I have updated my n8n version to 0.145.0.

I have created a task that reads multiple RSS feeds (18 feeds with 70k rows combined). Later, those entries are processed to generate a final CSV, which is uploaded to an SFTP server.

This big task causes a JS memory leak that crashes n8n.

Now I have split the big task into three subtasks, with the following steps:

  • Read the first 9 RSS feeds and generate the CSV files.
  • Read the last 9 RSS feeds and generate the CSV files.
  • Concatenate all generated CSV files into one big file and upload it to an SFTP server.
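For reference, the third subtask's concatenate step could be sketched in shell roughly like this (filenames, paths, and the SFTP details are hypothetical, not taken from the actual workflow):

```shell
# Concatenate per-batch CSV files into one file, keeping the header
# row only from the first file.
concat_csvs() {
  out=$1
  shift
  first=1
  for f in "$@"; do
    if [ "$first" -eq 1 ]; then
      cat "$f" > "$out"
      first=0
    else
      tail -n +2 "$f" >> "$out"   # drop the duplicated header line
    fi
  done
}

# Usage (hypothetical names):
# concat_csvs final.csv part_*.csv
# sftp user@host <<< 'put final.csv /upload/final.csv'
```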

I am using MySQL and I get the following errors when executing the subtasks:

  • Failed saving execution data to DB on execution ID 1299 (3 times)
  • Warning: got packets out of order. Expected 3 but received 2

Any idea what causes this failure?
After execution, the subtasks appear in an Unknown state.

Thank you!

Hey @Miquel_Colomer,

How are you sending the data to MySQL? Is it in a loop or in one go?

Hi @Jon,

Reading and generating CSV files are inside a SplitInBatches loop.

The last task only uses an Execute node (terminal) that creates the final CSV and uploads it via SFTP.

Hope this helps.

Hey @Miquel_Colomer,

Is the MySQL insert/update inside that SplitInBatches loop then? I wonder if a slight pause with the Wait node would work around it.

Another option I would try is MySQL's LOAD DATA INFILE on the query, passing in the split file (or attempting the entire file).
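As a rough sketch of that idea (the table name, column layout, and file path are hypothetical, and `LOCAL` must be enabled on both client and server):

```sql
-- Bulk-load a generated CSV instead of inserting row by row.
LOAD DATA LOCAL INFILE '/tmp/feeds_part1.csv'
INTO TABLE feed_items
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the CSV header row
```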

Hi @Jon

MySQL is used as the core database for n8n (not as a node).

I have tested it with Wait nodes, but the Unknown status was a recurring issue.

Separating it into subtasks makes no difference either.

Any ideas? Should I use another database, like MongoDB or Postgres?

Hi @Miquel_Colomer,

That is not what I expected. When you said you were using MySQL, I assumed you were using it to save the data rather than as the database for the service itself.

In that case, as MySQL is sending the error back, it could be worth looking into some MySQL tuning and checking the database logs to see if anything jumps out. Is it an error that can be easily reproduced, i.e. if someone else had the workflows, would that be enough?

I would be tempted to start by tweaking max_allowed_packet in MySQL to see if that is causing the issue.
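For example, you could check the current value and raise it for the running server like this (the 256M value is just a starting point; the change should also be persisted in the server config to survive restarts):

```sql
-- Show the current limit in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it for the running server (256M = 268435456 bytes)
SET GLOBAL max_allowed_packet = 268435456;
```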

Yes, it was my fault for not detailing the task properly. Apologies for that.
Thank you for your feedback @Jon.

I have updated max_allowed_packet in the [mysqld] section to 256M.
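For anyone following along, that change looks roughly like this (the config file path varies by installation, e.g. /etc/mysql/my.cnf or a file under conf.d/; a server restart is needed for it to take effect):

```ini
[mysqld]
max_allowed_packet = 256M
```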

By the way, the error only happens when the task is executed from cron.
When the task is executed manually, there is no error at the n8n or task level.

That is a bit odd. I did some digging online and in my old notes for other products, and I have had to go up to 2G depending on data size.

Replaced it with 2G. I will let you know if the issue is gone.

Thank you again!