[N8N] database.sqlite size optimization

Hello everyone, can you help me please?

I would like to know why the database.sqlite file generated in /opt/n8n_XX takes up so much disk space (7.6 GB).

This SQLite database is probably used by the N8N application to store important information, such as configurations, workflows, event logs and other data relevant to the operation of the application.

How can I optimize it or reduce its size? If I don’t do anything, I’m afraid database.sqlite will eventually exceed its size limit.
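For reference, a quick way to check whether execution data is what fills the file is to query it with the sqlite3 CLI. This is only a sketch: it assumes the sqlite3 tool is installed on the host, that the path below matches where database.sqlite actually lives, and that the default n8n table name execution_entity is in use.

# Hypothetical path; point it at your actual database.sqlite, ideally while n8n is idle
sqlite3 /opt/n8n_XX/database.sqlite "SELECT COUNT(*) FROM execution_entity;"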

Below are my docker configurations of the N8N installation on our server.

.env
QUEUE_BULL_REDIS_HOST=redis

COMPOSE_HTTP_TIMEOUT=3600

# Set the logging level to 'debug'
N8N_LOG_LEVEL=debug

# Set log output to console
N8N_LOG_OUTPUT=console

# Set a maximum size (in MB) for each log file
N8N_LOG_FILE_MAXSIZE=500

EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false

EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000

docker-compose.yml
version: '3.1'

services:

  postgres:
    image: postgres:11
    container_name: postgres
    restart: always
    ports:
      - 5435:5432
    env_file:
      - .env
    volumes:
      - /opt/n8n-XXX/data/database/postgresql:/var/lib/postgresql/data
    networks:
      - servernet

  redis:
    image: redis:6-alpine
    container_name: redis
    restart: always
    volumes:
      - redis_storage:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 20s
      retries: 10

  mongo:
    image: mongo:4.4
    env_file:
      - .env
    ports:
      - "27018:27017"
    networks:
      - servernet
    volumes:
      - my-mongo-volume:/data

  n8n:
    image: n8nio/n8n:1.5.1
    restart: always
    env_file:
      - .env
    ports:
      - 5678:5678
    extra_hosts:
      - "DNS.XXX.XXX:IP.XX.XXX.XXX"
    links:
      - postgres
      - redis
      - mongo
    labels:
      - io.portainer.accesscontrol.teams=developpement
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.entrypoints=web,websecure
      - traefik.http.routers.n8n.tls.certresolver=mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect=true
      - traefik.http.middlewares.n8n.headers.STSSeconds=315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter=true
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true
      - traefik.http.middlewares.n8n.headers.forceSTSHeader=true
      - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME}
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true
      - traefik.http.middlewares.n8n.headers.STSPreload=true
      - traefik.http.routers.n8n.middlewares=n8n@docker
    volumes:
      - /opt/n8n_XX:/home/node/
      - /opt/sftp-n8n/data/uploads:/home/data
    command: "start"
    depends_on:
      - postgres
      - redis
      - mongo
    networks:
      - servernet

networks:
  servernet:
    driver: bridge

volumes:
  n8n_storage:
  redis_storage:
  my-mongo-volume:
    external: false

Information on your n8n setup

  • n8n 1.5.1
  • Database using: PostgreSQL
  • Third-party database: Baserow
  • Running n8n with the execution process [own(default), main]: own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hi @Micka_Rakotomalala, the majority of this data is usually execution data. You have already configured some sensible data pruning settings.

However, specifically for SQLite you’d also need to either set the DB_SQLITE_VACUUM_ON_STARTUP=true environment variable to have n8n free up disk space on the next restart or manually run the VACUUM operation. This will rebuild your database file and repack it into a minimal amount of disk space.

Just to make sure you’re aware: this will temporarily require a bit more disk space, as SQLite creates the new (smaller) database file before deleting the old (bigger) one.
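As a minimal sketch of the two options above (DB_SQLITE_VACUUM_ON_STARTUP is the documented n8n variable; the sqlite3 command assumes the CLI is available on the host, that n8n is stopped while it runs, and that the file path is a placeholder):

# Option 1: have n8n vacuum the database on its next start (add this to the .env file)
DB_SQLITE_VACUUM_ON_STARTUP=true

# Option 2: run VACUUM manually with the sqlite3 CLI while n8n is stopped
sqlite3 /path/to/database.sqlite "VACUUM;"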

Hope this helps and makes sense :slight_smile:

Hello @MutedJam

Thank you very much for your quick feedback, it will help me a lot in managing the size of database.sqlite.

Yes for storage, I have some free space.


You’re most welcome! Just one more warning, this can take quite a bit of time, especially when done for the first time. So you might want to do this outside your core business hours.

If it’s in my .env file, then I have to restart the container. And for the next run, do I also have to restart my container, or is it automatic once it’s in my .env?

It seems you’re using the env_file element in your docker-compose.yml file, so you’d first need to update your .env file, then restart your n8n container to pick up this change and have n8n run the vacuum operation.
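Roughly, applying the change could look like the lines below. This assumes the commands run from the compose project directory and use the service name n8n from the file above (with the older standalone binary the command is docker-compose instead of docker compose); note that a plain docker compose restart does not re-read env_file changes, so the container has to be recreated.

# Append the variable to the .env file used by the n8n service
echo "DB_SQLITE_VACUUM_ON_STARTUP=true" >> .env

# Recreate the n8n container so the new environment is applied
docker compose up -d --force-recreate n8n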

Right, and isn’t there a specific command for n8n to start the vacuum operation?
Something like the command: "start"? Or do I just restart the container and the rest happens automatically?

Or just restart the container and the rest happens automatically

Exactly. Nothing else is needed apart from the variable :slight_smile:

Very clear, thank you very much @MutedJam


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.