Hi there!
I have a problem building a workflow.
The situation is as follows:
- dozens of xlsx files, each of them 80-120 MB and 800k-1,000k rows
- a MariaDB database
The ideal usage scenario looks like this: after a file is uploaded to the designated network folder, n8n loads its contents into a specific table in the database.
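For context, the load step is roughly equivalent to this standalone sketch. To be clear, this is not the actual n8n implementation: the path, credentials, table name, and column mapping are all made up, and I'm assuming a SheetJS-style parse plus chunked inserts via mysql2.

```ts
// Standalone equivalent of the workflow's load step (all names/paths hypothetical).
// Requires: npm i xlsx mysql2
import * as XLSX from "xlsx";
import { createConnection } from "mysql2/promise";

async function main(): Promise<void> {
  // Parse the whole workbook into memory, then flatten the first sheet to row objects
  const wb = XLSX.readFile("/mnt/share/incoming/data.xlsx");
  const rows: Record<string, unknown>[] =
    XLSX.utils.sheet_to_json(wb.Sheets[wb.SheetNames[0]]);

  const conn = await createConnection({
    host: "localhost",
    user: "n8n",
    password: "secret",
    database: "mydb",
  });

  // Insert in chunks so no single INSERT statement gets unwieldy
  const CHUNK = 1000;
  for (let i = 0; i < rows.length; i += CHUNK) {
    const values = rows
      .slice(i, i + CHUNK)
      .map((r) => [r["col_a"], r["col_b"]]); // column mapping is hypothetical
    await conn.query("INSERT INTO target_table (col_a, col_b) VALUES ?", [values]);
  }
  await conn.end();
}

main().catch(console.error);
```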
This is where the problem arises. The workflow I created works fine with CSV files (pre-converted from xlsx), and it also works directly on xlsx files under 200k rows (everything finishes in under 40 seconds).
When I run the test with the target xlsx file (about 80 MB), CPU usage jumps to 100% and stays there for hours with no visible progress.
I'm running out of ideas and don't know where else to look for a solution.
n8n is self-hosted on an i5-7400 machine with 40 GB of RAM. While the workflow runs, a single CPU core sits at ~100% and RAM usage stays below 3 GB. (The single pegged core presumably makes sense: n8n runs on Node.js, so a CPU-bound task in the main thread can only use one core.)
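One thing that might help narrow this down (a sketch, not something I've run yet; the path is made up, and I'm assuming SheetJS, which as far as I know is what n8n uses for spreadsheet parsing) is timing the parse step alone, outside n8n:

```ts
// Time only the xlsx parse, outside n8n (hypothetical path).
// Requires: npm i xlsx
import * as XLSX from "xlsx";

console.time("parse");
const wb = XLSX.readFile("/mnt/share/incoming/big.xlsx");
const rows = XLSX.utils.sheet_to_json(wb.Sheets[wb.SheetNames[0]]);
console.timeEnd("parse");
console.log(`parsed ${rows.length} rows`);
```

If this alone pegs a core for hours on the 80 MB file, the bottleneck is the parser rather than n8n.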
Some more experimenting.
I did more tests on smaller xlsx files, and I can't really figure out this behaviour.
The flow works for a file that's 54 MB but freezes on a 58 MB one: it's just stuck at 100% CPU usage for hours.
And another update.
It seems the problem is with getting the file, not with transforming it. I'm watching `docker stats` for the n8n container, and NET I/O doesn't even budge after I start the workflow with the bigger file. (Caveat: if the network folder is bind-mounted into the container, I'm not sure that counter would show the traffic anyway.) One way to isolate the read step is sketched below.
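A minimal check for that, run outside n8n, timing just the read off the share with no parsing (the path is made up):

```ts
// Read the file from the network share without parsing it (hypothetical path).
import { readFile } from "node:fs/promises";

async function main(): Promise<void> {
  console.time("read");
  const buf = await readFile("/mnt/share/incoming/big.xlsx");
  console.timeEnd("read");
  console.log(`read ${buf.length} bytes`);
}

main().catch(console.error);
```

If that finishes in seconds, the file itself is reachable quickly and the stall is somewhere in how n8n fetches or buffers it.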
And frankly, I'm out of ideas about what else to try.