But it still consumes all of my RAM and eventually I get an OOM error. I use the AWS node to download the file and then pipe it to "Write Files to Disk" to persist it to my folder. It fails during the AWS node execution, so I don't think it has anything to do with the piping.
What is the error message (if any)?
n8n may have run out of memory while running this execution
Hey @InScienceWeTrust, the filesystem binary mode settings you have are correct, but unfortunately the AWS S3 node still loads the entire file into memory before it writes anything to filesystem storage, so for really large files you'll hit OOM regardless of those env vars. This is a known limitation of how the node handles downloads internally.
What I'd suggest is skipping the S3 node entirely for big files and using the Execute Command node to run the AWS CLI directly, something like `aws s3 cp s3://bucket/key /home/node/.n8n-files/filename`. The CLI streams the file to disk in chunks instead of buffering it all in RAM. You'll need the AWS CLI installed in your container and credentials configured (either mount your ~/.aws folder or set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars).
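As a rough sketch of what the Execute Command node's command could look like — note the bucket name, key, and target directory here are placeholders you'd swap for your own, and this assumes the AWS CLI is already installed in the n8n container:

```shell
#!/bin/sh
set -eu

BUCKET="my-bucket"              # placeholder: your bucket name
KEY="path/to/large-file.bin"    # placeholder: your object key
DEST_DIR="/home/node/.n8n-files"

# Fail early with a readable message if the CLI isn't in the image.
if ! command -v aws >/dev/null 2>&1; then
  echo "aws CLI not found - install it in the n8n container first" >&2
  exit 1
fi

# `aws s3 cp` streams the object to disk in chunks rather than
# holding the whole file in memory, which is what avoids the OOM.
aws s3 cp "s3://$BUCKET/$KEY" "$DEST_DIR/$(basename "$KEY")"

# Print the final path so a downstream node can pick it up from stdout.
echo "$DEST_DIR/$(basename "$KEY")"
```

If you go this route, the Execute Command node's stdout gives you the file path, so any later node that needs the file can read it from disk instead of passing the binary through the workflow's memory.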