Memory Issue with Large CSV File Processing in n8n - Need Assistance

Hi everyone,

I hope you’re all doing well. I’ve encountered an issue with n8n that I’m hoping to get some assistance with.

Here’s the rundown: I have a workflow with a node that downloads a 10 MB CSV file, typically containing about 200,000 rows of data. A second node then takes that binary CSV data and is supposed to convert it into a spreadsheet.

However, n8n consistently fails to complete the job. After investigating, the failure appears to be memory-related: given the size of the CSV file and the number of rows it contains, I suspect the conversion step is too memory-intensive and is causing the workflow to fail.

Has anyone else encountered a similar issue when working with large datasets in n8n? If so, could you please share how you managed to resolve the problem?

Additionally, I would be grateful for any recommendations on optimizing memory usage, or for alternative approaches to processing large CSV files efficiently within n8n. Would increasing the memory allocation for n8n be a viable solution, or are there more efficient options, such as optimizing the workflow or breaking the CSV processing into smaller, more manageable chunks?
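On the memory-allocation option: since n8n runs on Node.js, I understand the heap limit can be raised with something like `NODE_OPTIONS="--max-old-space-size=4096"`, but that feels more like a workaround than a fix. By chunking, I mean roughly the following in a Code node (run once for all items). This is only a sketch of the direction I’m considering: the batch size of 5,000 is arbitrary, and it assumes the CSV rows have already been parsed into items by the previous node.

```javascript
// Sketch: split the parsed CSV rows into batches of 5,000 so no
// downstream step has to hold all 200,000 rows at once.
const BATCH_SIZE = 5000; // arbitrary; would need tuning to the memory limit

const rows = $input.all(); // all items produced by the CSV-parsing node

const batches = [];
for (let i = 0; i < rows.length; i += BATCH_SIZE) {
  batches.push({
    json: {
      batchIndex: Math.floor(i / BATCH_SIZE),
      rows: rows.slice(i, i + BATCH_SIZE).map((item) => item.json),
    },
  });
}

return batches; // one item per batch, each carrying its slice of rows
```

The idea would be that each batch item is then handed to the next node (or to a sub-workflow), so the full 200,000 rows never sit in a single downstream execution. Does that sound like a sensible direction?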

I appreciate any insights or suggestions you can provide. Looking forward to hearing your thoughts.

Thank you in advance!

Hi @carnation7242, I also had this “issue” and I solved it by splitting the file-processing flow into a separate workflow.
Can you please have a look at this thread and see if it works for you?
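For me the important detail was making the sub-workflow return only a small summary instead of the full rows, so the parent workflow does not accumulate the whole dataset in memory across batches. As the last Code node of the sub-workflow, I use something roughly like this (just a sketch; the field names are made up):

```javascript
// Final Code node of the sub-workflow (illustrative sketch).
// The heavy per-batch work happens in the nodes before this one; here we
// drop the rows and return a single tiny summary item, so the parent
// workflow never collects the full dataset in memory.
const rows = $input.all();

return [
  {
    json: {
      processedRows: rows.length, // made-up field: rows handled in this batch
    },
  },
];
```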
Best

@carnation7242 We reduced the memory usage of CSV parsing quite a bit in 1.6.0.
What version of n8n are you running?
