n8n storage consumption

Hosting n8n on a DigitalOcean droplet.

Is this disk usage normal? (80%+)


Could you provide some more details on your n8n setup and workflow load?

It could be normal if there are lots of active workflows with a lot of data throughput.

I have ~60 workflows

I ran df -h in the Droplet console and got these details:

I have another similar n8n instance on DigitalOcean, with the same workload (~60 workflows), and the usage is way lower:


Are you using SQLite? And are you storing lots of successful executions? If so, your database probably keeps growing, and you may want to look into data pruning.


I’m using Docker Compose; I’m not sure which database type it is.

And the disk usage is still the same even after removing all the executions.
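With the standard Docker Compose setup, n8n defaults to SQLite unless DB_TYPE is set to something else. Here is a quick way to check (a sketch, assuming the service is named n8n and the default data path):

    # Print the configured database type; no output means the default (SQLite)
    docker compose exec n8n printenv DB_TYPE

    # Look for the default SQLite file and its size inside the container
    docker compose exec n8n ls -lh /home/node/.n8n/database.sqlite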


Ah! Your question made me realise I have this problem too: my DB file grew to 8.1 GB.

I’m using Docker Compose and was using the following settings (which you can find in the doc I linked to above):

# Docker Compose
n8n:
    environment:
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000

However, I had missed this part:

If you run n8n using the default SQLite database, the disk space of any pruned data isn’t automatically freed up but rather reused for future executions data. To free up this space configure the DB_SQLITE_VACUUM_ON_STARTUP environment variable or manually run the VACUUM operation.
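For the manual route, here is a minimal sketch using the sqlite3 CLI on the host. It assumes sqlite3 is installed, that /path/to/n8n_data stands in for wherever your n8n data volume actually lives, and that n8n is stopped first, since VACUUM needs exclusive access plus temporary disk space roughly the size of the database:

    # Stop n8n so nothing writes to the database during the VACUUM
    docker compose stop n8n

    # Rewrite the database file in place, reclaiming pruned pages
    sqlite3 /path/to/n8n_data/database.sqlite "VACUUM;"

    # Bring n8n back up
    docker compose start n8n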

After adding the DB_SQLITE_VACUUM_ON_STARTUP option to my setup and restarting Docker, my DB dropped to 2.1 GB. It took a minute or so after the restart for the cleanup to finish.

This is my setup now (note that EXECUTIONS_DATA_MAX_AGE is in hours, so 720 keeps roughly 30 days of executions):

n8n:
    environment:
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=720
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
      - DB_SQLITE_VACUUM_ON_STARTUP=TRUE
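One gotcha when applying this: docker compose restart does not pick up changes to the compose file, so recreate the container instead (assuming the service is named n8n):

    # Recreate the container so the new environment variables take effect
    docker compose up -d n8n

    # Watch the logs while the startup VACUUM runs
    docker compose logs -f n8n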

I’m a bit puzzled why we wouldn’t add that option to the example setup by default; I’ll ask.

Hope that helps, and thanks for pointing out something that made my own installation better! 🙌


Awesome!

I applied the same settings, but I still don’t see the disk-usage percentage dropping.

And the n8n instance currently has only ~150 saved executions.

Two ideas:

  1. Please validate your YAML; it’s very sensitive to indentation, and I noticed the pruning options look different from the rest (see the commands after this list).
  2. Did you restart your instance? Any errors in the logs?
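For the first point, Docker Compose can check the file itself; run this from the directory containing your docker-compose.yml:

    # Parse and print the resolved configuration; invalid YAML fails loudly
    docker compose config

    # For the second point: check recent n8n logs for startup errors
    docker compose logs n8n | tail -n 50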

Hi,
I still have the same problem… It looks like my setup should force a VACUUM, yet it does nothing. My SQLite database is about 13 GB. I made a copy of it and ran VACUUM in sqlite3, and the file size dropped to about 1.9 GB. I don’t see any errors… so why is VACUUM not working?
My environment: n8n runs via docker compose on a DigitalOcean droplet, Ubuntu 22.04.5 LTS.
n8n version is 1.92.2.
Here is my docker-compose file:

version: "3.7"

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_RUNNERS_ENABLED=true
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_AUTH_EXCLUDE_ENDPOINTS=api
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000
      - EXECUTIONS_DATA_MAX_AGE=144
      - DB_SQLITE_VACUUM_ON_STARTUP=TRUE
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
      - /usr/share/fonts/truetype:/usr/share/fonts/truetype/host

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true

At the moment there is about 50 GB of free space on the filesystem:

    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           392M  1.4M  390M   1% /run
    /dev/vda1        78G   29G   50G  37% /
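Before digging further, it may be worth confirming the variable actually reaches the running container (a quick check, assuming the compose service is named n8n):

    # Should print TRUE; no output means the container never received the variable
    docker compose exec n8n printenv DB_SQLITE_VACUUM_ON_STARTUP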

I would check with ncdu, an easy TUI for disk usage, and drill down into where the space is being used. If you don’t want to install ncdu, you could run something like:

    du -ah . | sort -rh | head -n 20

The full stop (.) is the path, so run it from / (or from the node user’s home directory); hopefully that will show you where your disk space is going.
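Docker itself is another common disk hog on droplets (old images, build cache, container logs), and it can break its own usage down:

    # Summarize disk used by images, containers, local volumes, and build cache
    docker system df

    # Per-item detail if one of those numbers looks bloated
    docker system df -v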