Big XLSX files to DB stuck on reading bin file

Hi there!
I have a problem constructing a flow.
The situation is as follows:

  • dozens of xlsx files, each 80-120 MB with 800k-1,000k rows
  • MariaDB database

The optimal usage scenario looks like this: after a file lands in the designated network folder, n8n loads its contents into a specific table in the database.

This is where the problem arises. The workflow I created works fine with CSV files (pre-converted xlsx files), and it also works directly with xlsx files under 200k rows (everything finishes in under 40 seconds).
When I run the test with a target xlsx file (about 80 MB), CPU usage jumps to 100% and stays at that level for hours with no progress.
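
For reference, the pre-conversion step can be done in a streaming way, so the whole sheet never has to sit in memory at once. A minimal sketch of what I mean, assuming Python with openpyxl is available on the host (the paths are placeholders):

```python
# Streaming xlsx -> csv conversion: read_only mode parses rows lazily
# instead of materializing the whole workbook in memory.
import csv
from openpyxl import load_workbook

def xlsx_to_csv(xlsx_path: str, csv_path: str) -> None:
    wb = load_workbook(xlsx_path, read_only=True)
    ws = wb.active  # first sheet; adjust if the data lives elsewhere
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in ws.iter_rows(values_only=True):
            writer.writerow(row)
    wb.close()  # read-only workbooks keep the file handle open

xlsx_to_csv("/data/incoming/report.xlsx", "/data/incoming/report.csv")
```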

I am running out of ideas and have no idea where else to look for a solution.

n8n is self-hosted on an i5-7400 machine with 40 GB of RAM. A single CPU core sits at ~100% during the run, while RAM usage stays below 3 GB.

Information on your n8n setup

  • n8n version: 1.7.62
  • Database: SQLite
  • Running n8n via Docker
  • n8n EXECUTIONS_PROCESS setting (default: own, main)
  • Operating system: Unraid OS

Ok, I tested the workflow once again with an xlsx and a CSV file of 200k rows each.
Both finished:
XLSX - 17 min 30 s
CSV - 1 min 16 s

I’m seriously starting to think that the main problem may be raw CPU power? But then again, the CPU should do all right with such workloads…
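
Still, given how much faster the CSV path is, one idea I’m toying with is to keep the fast CSV route and let MariaDB do the bulk load itself instead of pushing rows through n8n one by one. A rough sketch with mysql-connector-python; the table name, credentials and CSV layout are assumptions, and LOAD DATA LOCAL INFILE requires local_infile to be enabled on the server:

```python
# Bulk-load a pre-converted CSV straight into MariaDB.
import mysql.connector

def load_csv(csv_path: str) -> None:
    conn = mysql.connector.connect(
        host="db-host",           # placeholder credentials
        user="n8n",
        password="secret",
        database="imports",
        allow_local_infile=True,  # the client must allow LOCAL INFILE too
    )
    try:
        cur = conn.cursor()
        # csv_path comes from our own script, not user input,
        # so inlining it into the statement is acceptable here.
        cur.execute(
            f"LOAD DATA LOCAL INFILE '{csv_path}' "
            "INTO TABLE import_table "  # hypothetical target table
            "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
            "LINES TERMINATED BY '\\n' "
            "IGNORE 1 LINES"  # skip the header row
        )
        conn.commit()
    finally:
        conn.close()

load_csv("/data/incoming/report.csv")
```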

Some more experimenting.
I did more tests on smaller xlsx files and I can’t really figure out this behaviour.
The flow works for a file that’s 54 MB but freezes on a 58 MB file. It’s just stuck at 100% CPU usage for hours.

And another update.
It seems the problem is with getting the file in the first place, not with transforming it. Looking at the docker stats for n8n, the NET I/O doesn’t even budge after running the workflow with the bigger file.
And frankly, I’m out of ideas about what to do.
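
If nothing else turns up, my fallback is to take the xlsx handling out of n8n completely and glue the two sketches above into one small watcher script; the paths, the 30 s poll and the 60 s settle check are guesses:

```python
# Poll the network folder, convert, bulk-load, archive.
# Uses xlsx_to_csv() and load_csv() from the sketches above.
import shutil
import time
from pathlib import Path

INBOX = Path("/data/incoming")  # where the xlsx files land
DONE = Path("/data/done")       # processed originals go here

while True:
    for xlsx in INBOX.glob("*.xlsx"):
        if time.time() - xlsx.stat().st_mtime < 60:
            continue  # probably still being copied to the share
        csv_path = xlsx.with_suffix(".csv")
        xlsx_to_csv(str(xlsx), str(csv_path))
        load_csv(str(csv_path))
        shutil.move(str(xlsx), str(DONE / xlsx.name))
    time.sleep(30)
```

n8n would then only need to notice the finished import (e.g. by watching the archive folder) instead of parsing the binary file itself.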