N8N_DEFAULT_BINARY_DATA_MODE=filesystem doesn't work for AWS S3 large file downloads

Describe the problem/error/question

I get OOM when my AWS S3 node tries to download a file. I have binary data mode set to filesystem, as others suggest:

$ env | grep BINARY
N8N_BINARY_DATA_MANAGER_MODE=filesystem
N8N_DEFAULT_BINARY_DATA_MODE=filesystem

But it still consumes all of my RAM and eventually I get OOM. I use the AWS S3 node to download the file and then pipe it to “Write Files to Disk” to persist it to my folder. It fails during the AWS node’s execution, so I don’t think it has anything to do with the piping.

What is the error message (if any)?

n8n may have run out of memory while running this execution

Please share your workflow

Information on your n8n setup

  • n8n version: 2.1.3
  • Database (default: SQLite): postgresdb
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: Linux

Hey @InScienceWeTrust, the filesystem binary mode settings you have are correct, but unfortunately the AWS S3 node still loads the entire file into memory before writing it to filesystem storage, so for very large files you’ll hit OOM regardless of those env vars. This is a known limitation of how the node handles downloads internally.

For big files I’d suggest skipping the S3 node entirely and using the Execute Command node (or a Code node) to run the AWS CLI directly, e.g. `aws s3 cp s3://bucket/key /home/node/.n8n-files/filename`, which streams the file to disk without loading it all into RAM. You’ll need the AWS CLI installed in your container and credentials configured (either mount your ~/.aws folder or set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars).
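A minimal sketch of what the Execute Command node would run. The bucket, key, and destination path below are placeholders, not values from the original workflow; the script just assembles and prints the command (swap the `echo` for actual execution once the AWS CLI and credentials are in place in your container):

```shell
#!/bin/sh
# Hypothetical values -- substitute your own bucket, key, and target path.
BUCKET="my-bucket"
KEY="backups/large-file.bin"
DEST="/home/node/.n8n-files/large-file.bin"

# `aws s3 cp` streams the object to disk in chunks, so memory use stays flat
# regardless of object size. Credentials are picked up from the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars or a mounted ~/.aws dir.
CMD="aws s3 cp s3://$BUCKET/$KEY $DEST"

# Printed here instead of executed so the sketch runs without the CLI;
# in the Execute Command node you would run the command itself.
echo "$CMD"
```

In the Execute Command node you would paste the `aws s3 cp ...` line directly into the Command field; the node captures stdout/stderr, so a non-zero exit from the CLI will fail the workflow step as expected.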

Got it, that was my plan B; just wanted to double-check before pulling the trigger. Thank you @achamm