Hi everyone,
I hope you’re all doing well. I’ve encountered an issue with n8n that I’m hoping to get some assistance with.
Here’s the rundown: I have a workflow with a node that downloads a 10MB CSV file, which typically contains about 200,000 rows. A second node then takes that file (as binary data) and is supposed to convert it into spreadsheet rows.
However, n8n fails to complete the execution. After some investigation, it appears to be a memory problem: given the size of the file and the number of rows, I suspect the conversion step is too memory-intensive for my instance, and that is what kills the workflow.
Has anyone else run into something similar when working with large datasets in n8n? If so, how did you resolve it?
I’d also welcome recommendations for optimizing memory usage, or alternative approaches for processing large CSV files efficiently in n8n. Would raising the Node.js heap limit for n8n (e.g., NODE_OPTIONS="--max-old-space-size=4096") be a viable fix, or is it better to restructure the workflow, for example by breaking the CSV processing into smaller, more manageable chunks? A rough sketch of what I mean by chunking follows.
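To make the chunking idea concrete, here is a minimal sketch of what I had in mind for a Code node. The chunk size and the assumption that the CSV has already been parsed into one item per row are both mine, so please treat it as an illustration rather than my actual setup:

```javascript
// Minimal sketch for an n8n Code node ("Run Once for All Items" mode).
// Assumption (mine, not from my current workflow): the upstream node has
// already parsed the CSV into one item per row.
const CHUNK_SIZE = 1000; // hypothetical batch size; tune to available memory

const rows = $input.all(); // every incoming item

const chunks = [];
for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  // Each output item carries one slice of row payloads in its JSON,
  // so downstream nodes handle at most CHUNK_SIZE rows per item.
  chunks.push({
    json: { rows: rows.slice(i, i + CHUNK_SIZE).map((item) => item.json) },
  });
}

return chunks;
```

Would something along these lines (or the built-in Split In Batches / Loop Over Items node) be the right direction, or does it just move the memory problem around?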
I’d appreciate any insights or suggestions you can share.
Thank you in advance!