Big XLSX files to DB - stuck on reading the binary file

Hi there!
I have a problem constructing a flow.
The situation is as follows:

  • dozens of XLSX files, each 80-120 MB with 800k-1,000k rows
  • MariaDB database

The optimal usage scenario looks like this: after a file is uploaded to the right network folder, n8n loads its contents into a specific table in the database.
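For reference, in plain Node terms this is roughly what the flow is meant to do (table name, columns and credentials below are placeholders, and the SheetJS/mysql2 calls are just how I'd sketch it outside n8n, not what the node actually runs):

```js
// Rough sketch of the intended pipeline, outside n8n.
// "import_table" and the column list are placeholders, not the real schema.
const XLSX = require('xlsx');
const mysql = require('mysql2/promise');

async function importWorkbook(path) {
  // Read the whole workbook and flatten the first sheet to row objects
  const wb = XLSX.readFile(path);
  const rows = XLSX.utils.sheet_to_json(wb.Sheets[wb.SheetNames[0]]);

  const db = await mysql.createConnection({
    host: 'mariadb', user: 'n8n', password: '***', database: 'imports',
  });

  // Bulk insert in chunks so a single INSERT statement doesn't get too large
  const chunkSize = 5000;
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize).map(r => [r.col_a, r.col_b, r.col_c]);
    await db.query('INSERT INTO import_table (col_a, col_b, col_c) VALUES ?', [chunk]);
  }
  await db.end();
}
```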

This is where the problem arises. The workflow I created works fine with CSV files (pre-converted XLSX files), and it also works directly with XLSX files under 200k rows (all finish in under 40 seconds).
When I run the test with the target XLSX file (about 80 MB), CPU usage jumps to 100% and stays there for hours with nothing to show for it.
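To isolate whether the parse itself is the bottleneck (I believe the Extract from File node uses SheetJS under the hood, but that's an assumption on my part), my next idea is to time the parse directly in Node against the same file:

```js
// Time the raw SheetJS parse of the problem file, outside n8n.
// If this alone pins a core for a long time, the node isn't really the issue.
const XLSX = require('xlsx');

console.time('readFile');
const wb = XLSX.readFile('/data/big-export.xlsx');   // path is just an example
console.timeEnd('readFile');

console.time('sheet_to_json');
const rows = XLSX.utils.sheet_to_json(wb.Sheets[wb.SheetNames[0]]);
console.timeEnd('sheet_to_json');

console.log('rows parsed:', rows.length);
```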

I am running out of ideas and have no idea where else to look for a solution.

n8n is self-hosted on an i5-7400 machine with 40 GB of RAM. A single CPU core sits at ~100% while the workflow runs; RAM usage stays below 3 GB.

Information on your n8n setup

  • n8n version: 1.7.62
  • Database: SQLite
  • Running n8n via Docker
  • n8n EXECUTIONS_PROCESS setting (default: own, main)
  • Operating system: Unraid OS

OK, I tested the workflow once more with an XLSX and a CSV file of 200k rows each.
Both finished:
XLSX - 17 min 30 s
CSV - 1 min 16 s

I’m seriously starting to wonder whether the main problem is CPU power. But then again, the CPU should do all right with such workloads…

Some more experimenting.
I ran more tests on smaller XLSX files and I can’t really figure out this behaviour.
The flow works for a 54 MB file but freezes on a 58 MB file, just stuck at 100% CPU usage for hours.

And another update.
It seems the problem is with getting the file, not with transforming it: I’m watching docker stats for n8n, and NET I/O doesn’t even budge after running the workflow with the bigger file.
And frankly, I’m out of ideas what to do.
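If it helps anyone narrow it down, the next thing I want to try is timing just the raw read of the file from the mounted folder, separate from any parsing (the path below is made up):

```js
// Time only the raw read of the file from the network share, no parsing.
// If this already stalls, the problem is the mount/transfer, not the node.
const fs = require('fs');

const path = '/mnt/share/big-export.xlsx';  // example path
let bytes = 0;

console.time('raw read');
fs.createReadStream(path)
  .on('data', chunk => { bytes += chunk.length; })
  .on('end', () => {
    console.timeEnd('raw read');
    console.log('bytes read:', bytes);
  })
  .on('error', err => console.error('read failed:', err));
```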

Another day - another problem (and I haven’t even resolved the first one!).
I did some more testing:

XLSX files

  • 56 033 KB file - works flawlessly
  • 57 986 KB file - freezes
  • 95 823 KB file - and that’s the funny part! n8n finishes the workflow within a couple of seconds. No errors, no freezing. And no result: not a single row was inserted into SQL.

CSV files
I basically gave up on XLSX files and went the “convert it to CSV first” route. But there it is - another big-file problem.
A 187 619 KB CSV file behaves exactly the same as the 90 MB XLSX file: the process just gets stuck on the Extract from File node.
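One workaround I’m considering is splitting the big CSV into smaller files before n8n ever sees them, something along these lines (chunk size and paths are arbitrary, and this naive line split assumes no quoted newlines inside fields):

```js
// Split a huge CSV into ~100k-row files so each one stays well under
// the size where the Extract from File node starts to choke.
const fs = require('fs');
const readline = require('readline');

async function splitCsv(input, outDir, rowsPerFile = 100000) {
  fs.mkdirSync(outDir, { recursive: true });
  const rl = readline.createInterface({ input: fs.createReadStream(input) });
  let header = null;
  let buffer = [];
  let part = 0;

  const flush = () => {
    if (buffer.length === 0) return;
    fs.writeFileSync(`${outDir}/part-${part++}.csv`, [header, ...buffer].join('\n'));
    buffer = [];
  };

  for await (const line of rl) {
    if (header === null) { header = line; continue; }  // keep the header for every chunk
    buffer.push(line);
    if (buffer.length >= rowsPerFile) flush();
  }
  flush();
}

splitCsv('/mnt/share/big-export.csv', '/mnt/share/chunks').catch(console.error);
```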

2025-02-13T07:44:02.504Z | debug | Running node "Extract from File" started {"node":"Extract from File","workflowId":"TXX8tCNC5wYIaLvC","file":"LoggerProxy.js","function":"exports.debug"}

And now I’m really out of ideas.
The container has 60 GB of RAM, but n8n uses no more than 3-3.3 GB.
N8N_DEFAULT_BINARY_DATA_MODE is set to filesystem.
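Since n8n never goes past ~3 GB no matter how much RAM the container has, I’m starting to suspect it’s hitting Node’s default V8 heap limit rather than any container limit (just a guess on my part). A quick way to check the actual ceiling from inside the container would be a one-off check like this, e.g. in a Code node or a node REPL:

```js
// Print the V8 heap ceiling the n8n process is actually running with.
// If it's only a few GB despite 60 GB in the container, the cap is Node's
// default old-space size, which can be raised via NODE_OPTIONS=--max-old-space-size.
const v8 = require('v8');
const stats = v8.getHeapStatistics();
console.log('heap limit (GB):', (stats.heap_size_limit / 1024 ** 3).toFixed(2));
console.log('heap used  (GB):', (stats.used_heap_size / 1024 ** 3).toFixed(2));
```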

I’m stuck. Just completely stuck.
