N8N_DEFAULT_BINARY_DATA_MODE=filesystem does not take effect

Hi,

I have set up a Hetzner Docker installation as described here: Hetzner | n8n Docs

Now I built a workflow that downloads a file (~3 GB) from Google Drive and then sends it to YouTube.
My server only has 4 GB of RAM, so I set N8N_DEFAULT_BINARY_DATA_MODE=filesystem, as suggested in many topics.

But it has no effect; n8n still uses the RAM.

What I did

  • changed .env
  • changed docker-compose.yml
  • docker compose up -d --build

Output is "n8n may have run out of memory while running this execution. More context and tips on how to avoid this in the docs"

  • n8n version: 1.95.3
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: Linux

Try making these changes:

  1. Correct environment variable
    The official documentation describes N8N_DEFAULT_BINARY_DATA_MODE (String): the default mode keeps binary data in memory. Change it to filesystem to store binary data on disk (or to s3).

  2. Pass it through docker-compose.yml
    Although there isn't an exact snippet in the documentation, the section on environment variables clearly states that it must be set in the container environment, not just in .env:
    N8N_DEFAULT_BINARY_DATA_MODE=filesystem

  3. Storage path
    The docs indicate that filesystem mode saves the data in N8N_USER_FOLDER/binaryData (which in Docker corresponds to /home/node/.n8n/binaryData)
  4. Clean restart of the container
    Although it doesn't appear directly in the docs, the recommended practice in Docker is:
docker compose down -v
docker compose up -d --build

and then verify with:
docker exec -it <n8n> env | grep BINARY
This is supported by community reports, which suggest using down -v to remove volumes and make sure the container picks up the new variable.
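To illustrate the pitfall from step 2: Docker Compose reads .env only to substitute ${...} placeholders inside docker-compose.yml; an entry there does not reach the container unless it appears under the service's environment: (or is pulled in via env_file:). A minimal sketch for sanity-checking the .env entry locally (the file path is illustrative, and the in-container check assumes the service is named n8n):

```shell
# Write a sample .env entry the way Compose expects it
# (no quotes, no spaces around '='):
cat > /tmp/sample.env <<'EOF'
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
EOF

# Extract the value as a sanity check:
MODE=$(grep '^N8N_DEFAULT_BINARY_DATA_MODE=' /tmp/sample.env | cut -d= -f2)
echo "$MODE"   # filesystem

# Inside the running container (service name assumed to be "n8n"):
#   docker compose exec n8n env | grep BINARY
```

A value wrapped in quotes or padded with spaces in .env would fail the grep above, which is one of the more common reasons a variable "doesn't take effect".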


Hi!

  1. Done, it's in the .env
  2. docker-compose.yml is also fine (see below)
  3. OK, what do you mean? The folder is there and writable (verified with older, smaller files)
  4. Thanks, tried that now too
  5. Verification was fine

Here is the docker-compose.yml (unchanged from the n8n default except for the added env variables)

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile

  
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - NODE_OPTIONS=--max_old_space_size=1024
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true

Still, it's NOT working. I did not change anything in the container, so the permissions must be fine, right?

Is there an error log I can check?
I don't understand why it still uses memory :frowning:

My understanding is that it uses "max_old_space_size" as the maximum RAM usage and then writes the file to disk.
Or n8n writes directly to disk; that would also be fine.
But in both cases it should work, right?
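One possible explanation for this (an assumption, not something the n8n docs state for this exact case): NODE_OPTIONS=--max_old_space_size=1024 caps the Node.js heap at roughly 1 GB regardless of anything else, while N8N_DEFAULT_BINARY_DATA_MODE=filesystem only controls where n8n stores binary data between nodes. A node that buffers an entire download in memory before handing it over is unaffected by the binary data mode and will still hit the heap cap. Annotated, the two lines from the compose file above do different jobs:

```yaml
    environment:
      # Caps the Node.js heap (in MB). With 1024, the process dies
      # near ~1 GB no matter where binary data is stored.
      - NODE_OPTIONS=--max_old_space_size=1024
      # Controls where n8n keeps binary data BETWEEN nodes; it cannot
      # stop an individual node from buffering a download internally.
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```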

Here is also a screenshot of my server while downloading the video from Google Drive.

image

It just stops shortly before reaching maximum RAM; the process keeps running for another 1-2 minutes and is then canceled at some point.


Hi!

It's still not working.
I updated to the current version a few weeks later, thinking it might work now, but it doesn't.

This is the output from the CLI

And here you can see the process (htop)
image

So you can see that the changed variable is picked up, but the Google Drive file download still only uses RAM at this point.
Why is that?

Also, you can see that no files are written to the binaryData folder.

PS: Or is the error in the Google Drive node? Is that possible?

Somewhat embarrassed, I am now posting the answer myself :rofl:
It is working correctly; the problem is the Google Drive download, which simply ignores the setting.
I solved this via an HTTP Request node and could then upload the video to YouTube.
RAM is nowhere near the maximum now and the server is not lagging at all.
Sorry for the noise.
Whoever develops the Google Drive node should really work on this, because I think it could make life much easier.

FYI: I now realised this via a direct link: https://drive.usercontent.google.com/download?id={{ $json.id }}&export=download&confirm=t&uuid=n8n

This works for now; it also bypasses the "cannot be scanned for viruses" message because I fake a uuid and a confirm value.
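For reference, the same workaround can be sketched outside n8n. This builds the direct-download URL described above (the file ID is a placeholder; confirm=t and the uuid just mimic the browser's "download anyway" confirmation):

```shell
FILE_ID="abc123"   # placeholder: put the real Google Drive file ID here
URL="https://drive.usercontent.google.com/download?id=${FILE_ID}&export=download&confirm=t&uuid=n8n"
echo "$URL"

# Then stream the file straight to disk instead of buffering it in RAM:
#   curl -L -o video.mp4 "$URL"
```

In the n8n HTTP Request node, the equivalent is using that URL with "Response → File" so the body is written as binary data rather than loaded into memory.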

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.