Retrieving data from a Postgres table and writing it to a CSV file gives an out-of-memory error

Describe the problem/error/question

I am trying to retrieve data from a PostgreSQL table, write it to a CSV file, and then push the file to an S3 bucket. The table has around 170,000 rows and the data should be around 40 MB. But when I run the workflow it gives me an out-of-memory error. If I disable the Spreadsheet node and run the workflow, I still get the same error.
I also tried calling this workflow from a separate parent workflow, but with no luck. Any help would be appreciated.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

The last node should return CSV binary data of around 40 MB.

Information on your n8n setup

  • n8n version: 1.16.0
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: Windows

Hey @Kratik_Mehta,

Welcome to the community :cake:

At 170,000 records I suspect you are hitting the memory limit for your cloud plan. I don’t think we have a way to append to a spreadsheet either, so working around this could be tricky.

Thanks @Jon - the file is a CSV/text file. Does another node support appending data to a flat file? Even though the job processes 170k rows of data, the data is relatively lightweight (40-80 MB total). It doesn’t seem that this should cause any memory issues, even if the dataset is processed in a single batch.

We are also encountering the same out-of-memory issues in a self-hosted instance of the community version. Using a t3.large instance in AWS (2 vCPUs, 8 GB RAM), a similar job with a total data size of around 150 MB errors out with memory-related issues.

We have followed the guidance here (Memory-related errors | n8n Docs), but without much success. Are there other recommendations related to either workflow design or resource provisioning that we should follow? Does upgrading the cloud subscription to the Pro tier provide access to more compute resources? Thanks.

Hey @dahmadi,

It is important to note that while the 170k rows of data might only be 40-80 MB when written to disk, n8n does a number of things in memory to format the data and convert it to the data format we use internally, and that conversion consumes additional memory.
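
For a rough sense of why the on-disk size understates the memory cost, here is a small, illustrative comparison between one CSV line and the same row represented as an n8n-style item (data is passed between nodes as items shaped like `{ json: {...} }`). The column names are made up and serialized length is only a crude stand-in for actual heap usage:

```typescript
// Illustrative only: compare the size of one CSV line with the same row
// represented as an n8n-style item ({ json: {...} }). Column names are
// made up; serialized length is a crude proxy for real heap usage.
const csvLine = "42,Jane Doe,jane@example.com";

const item = {
  json: { id: 42, name: "Jane Doe", email: "jane@example.com" },
};

console.log("CSV characters: ", csvLine.length);
console.log("Item characters:", JSON.stringify(item).length);
// The item repeats every column name and adds object overhead, and that
// cost is paid once per row across all 170,000 rows.
```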

If you can share your cloud username, I can take a quick look at the logs for your instance to confirm whether this is a case of the memory running out. The way I would handle this is to read the data from the database in smaller chunks and create multiple spreadsheet files, or add the rows to something like a Google Sheet and then download that as a CSV file using the Google Drive node.
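
To show the shape of the chunked approach, here is a minimal sketch written as a standalone script rather than an n8n node. It assumes a table called `my_table` with a numeric `id` primary key, connection details in a `DATABASE_URL` environment variable, and simple CSV quoting, so adapt it to your schema:

```typescript
// Sketch of the chunked approach as a standalone script (not an n8n node):
// page through the table with keyset pagination and append each batch to
// the CSV file, so only one batch is ever held in memory.
// Assumptions: a table "my_table" with a numeric primary key "id",
// and connection details in the DATABASE_URL environment variable.
import { Client } from "pg";
import { appendFileSync, writeFileSync } from "fs";

const BATCH_SIZE = 10_000;
const OUT_FILE = "export.csv";

// Quote a value for CSV, doubling any embedded quotes.
function csvField(value: unknown): string {
  const s = value == null ? "" : String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

async function exportTable(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  writeFileSync(OUT_FILE, ""); // truncate any previous export
  let lastId = 0;
  let headerWritten = false;

  for (;;) {
    // Keyset pagination: cheaper than OFFSET for large tables.
    const { rows } = await client.query(
      "SELECT * FROM my_table WHERE id > $1 ORDER BY id LIMIT $2",
      [lastId, BATCH_SIZE]
    );
    if (rows.length === 0) break;

    if (!headerWritten) {
      appendFileSync(OUT_FILE, Object.keys(rows[0]).join(",") + "\n");
      headerWritten = true;
    }

    const chunk = rows
      .map((row) => Object.values(row).map(csvField).join(","))
      .join("\n");
    appendFileSync(OUT_FILE, chunk + "\n");

    lastId = rows[rows.length - 1].id;
  }

  await client.end();
}

exportTable().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Inside a workflow, the same idea maps to parameterising the Postgres node’s query (a LIMIT plus an id cursor) and looping over the batches, so no single execution has to hold all 170k rows at once.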

Thanks @Jon. Sharing my username/email in a dm.
