Setting N8N_DEFAULT_BINARY_DATA_MODE doesn't seem to help with memory

## Describe the problem/error/question

I’m self-hosting n8n, and I’ve set N8N_DEFAULT_BINARY_DATA_MODE=filesystem. I’ve purposely set the container memory limit low, to 200 MB, to see how the filesystem setting works, and I’m downloading a video from Dropbox that’s ~400 MB. I thought filesystem mode saves all binary data to disk, not to memory. Am I missing something here?
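
For reference, here’s roughly how the container is set up (a minimal docker-compose sketch; the service name and volume path are just examples, not my exact file):

```yaml
services:
  n8n:
    image: n8nio/n8n:1.59.4
    mem_limit: 200m                 # deliberately low, to test filesystem mode
    environment:
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
    volumes:
      - ./n8n-data:/home/node/.n8n  # binary data should land on disk here
    ports:
      - "5678:5678"
```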

## Please share your workflow



## Share the output returned by the last node
Out of Memory

## Information on your n8n setup
- **n8n version:** 1.59.4
- **Database:** Postgres
- **Running n8n via:** Docker
- **Operating system:**

It looks like your topic is missing some important information. Could you provide the following, if applicable?

- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system:

I think the issue here is that 200 MB might be a bit low. Try increasing the limit to 300 MB instead.
Filesystem mode helps avoid loading large binary files into memory, but we still need memory to load all the node packages and all the execution data.

While 300 MB sounds like a lot of memory for a simple operation like this, it used to take far more in the past, and we are constantly working on reducing n8n’s memory usage wherever we can.
So someday you might be able to run all of this in under 200 MB, but for now you need to give the application more memory to perform properly.
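
One knob that isn’t n8n-specific but can help in constrained containers: capping Node’s heap slightly below the container limit, so a heavy run fails with V8’s “heap out of memory” error in the logs instead of the kernel OOM killer terminating the container. A sketch only; the numbers are illustrative, not a recommendation from this thread:

```yaml
services:
  n8n:
    image: n8nio/n8n:1.59.4
    mem_limit: 320m
    environment:
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - NODE_OPTIONS=--max-old-space-size=256   # MB; keep below mem_limit
```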


Ok, I bumped it up to 500 MB and it’s still hitting the memory limit. The file I’m trying to download is about 400 MB. Something is not adding up. I verified that N8N_DEFAULT_BINARY_DATA_MODE is set to filesystem inside the n8n instance.
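
For anyone wanting to double-check the same thing, this is how I verified it (assuming the container is named `n8n`):

```bash
# Check the variable as seen by the running process
docker exec n8n printenv N8N_DEFAULT_BINARY_DATA_MODE
# -> filesystem

# Filesystem mode should also leave files on disk after an execution
# (default location in the official image; adjust if you've changed it)
docker exec n8n ls /home/node/.n8n/binaryData
```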

You can’t process a file that large with so little headroom. The issue may lie in the amount of RAM available, not in the binary data mode.

I had to bump memory up to 1 GB for the workflow to finish. Is there a way to track how much memory each node uses during execution? Or is trial and error the only approach?
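
In the meantime, the rough approach I’ve been using is to watch the whole container while the workflow runs (container name `n8n` assumed):

```bash
# Stream the container's memory usage while the workflow executes
docker stats n8n --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```

Triggering the workflow manually and watching which node makes the number jump is as close to a per-node breakdown as I’ve found.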

Since you needed 1 GB of memory to process a simple file transfer, either your n8n instance is somehow not respecting the N8N_DEFAULT_BINARY_DATA_MODE environment variable, or the nodes you are using haven’t been updated to use file streaming instead of buffering.
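
For anyone wondering what buffering vs. streaming means in practice, here’s a generic Node.js sketch (illustrative only, not the actual node code; redirects and errors are handled minimally):

```typescript
import { createWriteStream, writeFileSync } from 'node:fs';
import { get } from 'node:https';
import { pipeline } from 'node:stream/promises';

// Buffered: every chunk is accumulated in an array, so a 400 MB file
// needs at least 400 MB of heap before a single byte hits the disk.
function downloadBuffered(url: string, dest: string): Promise<void> {
  return new Promise((resolve, reject) => {
    get(url, (res) => {
      const chunks: Buffer[] = [];
      res.on('data', (chunk: Buffer) => chunks.push(chunk));
      res.on('end', () => {
        writeFileSync(dest, Buffer.concat(chunks)); // whole file in memory here
        resolve();
      });
      res.on('error', reject);
    }).on('error', reject);
  });
}

// Streamed: chunks flow from the socket straight into the file,
// so memory usage stays roughly constant regardless of file size.
function downloadStreamed(url: string, dest: string): Promise<void> {
  return new Promise((resolve, reject) => {
    get(url, (res) => {
      pipeline(res, createWriteStream(dest)).then(resolve, reject);
    }).on('error', reject);
  });
}
```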

Yeah, this is an issue in the Dropbox node. It has not been updated to use file streaming, which means all transfers have to be buffered in memory first.

I’ve created an internal ticket NODE-1898 to get this fixed.

