I’m building a workflow on n8n Cloud where:
- CSV files are fetched from an FTP server
- The files are parsed and normalized
- Business rules are applied (mapping, cleaning, merging)
- A final combined CSV is generated
The problem is that processing is extremely slow: for around 20 MB of CSVs (~20k–30k rows in total), the workflow is still running after 105 minutes.
How can I optimize this workflow to handle data sets of this size efficiently? Is there a recommended approach for offloading the heavy processing (e.g. to external services, via batching, or through background execution)?
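For example, would moving the heavy transformation out of n8n into a small external script along these lines be a sensible direction? This is only a sketch: the chunk size, the `downloads` folder, the `value` column, and the `normalized` rule are placeholders, not my real mapping logic.

```python
import csv
from pathlib import Path

CHUNK_SIZE = 5_000  # placeholder; tune based on memory and row size


def process_chunk(rows: list[dict]) -> list[dict]:
    # Stand-in for the real business rules (mapping, cleaning, merging);
    # the "normalized" column and "value" key are purely illustrative.
    return [{**row, "normalized": row.get("value", "").strip().lower()}
            for row in rows]


def write_rows(out_f, writer, rows):
    # Create the DictWriter lazily so the header comes from the first chunk.
    if writer is None and rows:
        writer = csv.DictWriter(out_f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
    if rows:
        writer.writerows(rows)
    return writer


def merge_csvs(inputs: list[Path], output: Path) -> None:
    # Stream each input CSV through process_chunk in fixed-size chunks,
    # appending to a single output file so memory stays bounded
    # regardless of the total row count.
    writer = None
    with output.open("w", newline="") as out_f:
        for path in inputs:
            with path.open(newline="") as in_f:
                chunk = []
                for row in csv.DictReader(in_f):
                    chunk.append(row)
                    if len(chunk) >= CHUNK_SIZE:
                        writer = write_rows(out_f, writer, process_chunk(chunk))
                        chunk = []
                if chunk:
                    writer = write_rows(out_f, writer, process_chunk(chunk))


if __name__ == "__main__":
    merge_csvs(sorted(Path("downloads").glob("*.csv")), Path("combined.csv"))
```

The idea would be for n8n to just trigger something like this externally and pick up the combined file afterwards, rather than pushing every row through workflow nodes.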
**What is the error message (if any)?**
No specific error. It’s just very slow and never finishes in a reasonable time.
**Please share your workflow**

**Share the output returned by the last node**
It hasn’t reached the final node yet; the workflow is still running after 105 minutes.
**Information on your n8n setup**
I am using the n8n Cloud version.