How to delete execution data – no changes after adding env variables

Hello :wave:
I have a problem with deleting old execution data.

Describe the issue/error/question

My n8n instance is running out of storage. Execution data is being stored and not deleted regularly. Based on the docs, I have added the environment variables listed below to my docker-compose.yml.

Thanks for your help!

version: "3"

services:
  traefik:
    image: "traefik"
    restart: always
    command:
      - "--api=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=true"
      - "--certificatesresolvers.mytlschallenge.acme.email=${SSL_EMAIL}"
      - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${DATA_FOLDER}/letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "127.0.0.1:5678:5678"
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.entrypoints=web,websecure
      - traefik.http.routers.n8n.tls.certresolver=mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect=true
      - traefik.http.middlewares.n8n.headers.STSSeconds=315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter=true
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true
      - traefik.http.middlewares.n8n.headers.forceSTSHeader=true
      - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME}
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true
      - traefik.http.middlewares.n8n.headers.STSPreload=true
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER
      - N8N_BASIC_AUTH_PASSWORD
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
      - DB_SQLITE_VACUUM_ON_STARTUP=true
    volumes:
      - ${DATA_FOLDER}/.n8n:/home/node/.n8n

Information on your n8n setup

  • n8n version: 0.194.0
  • Database you’re using (default: SQLite): default SQLite
  • Running n8n with the execution process [own(default), main]: default own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: docker

Keep in mind that clearing the data may take up to two restarts, if I remember correctly. Did you restart your instance two or more times?

I have always restarted the instance only once. The storage drops from 100 % to 56 %, but within a week it is back at 100 %. Is it possible that some older data can't be purged? On this instance, there's only one workflow with a larger amount of data (about 1 MB), and it runs once per day.

I have now restarted it twice, and the storage dropped again.

Is my env setup correct?
Thank you.

Hey @honzapav,

I would maybe change the max age from 168 to 30. The vacuum process that frees up the space only happens when the service restarts, so it could be worth setting up regular restarts.
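If you do go with regular restarts, a minimal sketch is a cron entry on the host. This assumes the container is reachable by the name `n8n` — check `docker ps` for the actual name your compose project gave it:

```shell
# Host crontab entry (add via `crontab -e`): restart the n8n container
# nightly at 03:30 so DB_SQLITE_VACUUM_ON_STARTUP=true can reclaim space.
# Replace "n8n" with the container name shown by `docker ps`.
30 3 * * * docker restart n8n
```

Any quiet hour works; the point is just that the vacuum only runs at startup, so the space is not reclaimed between restarts.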

It shouldn’t be growing that quickly if it is just 1 MB once a day, so it sounds like there could be more going on. The other side of it, of course, could be that there is not much space on the disk to begin with.
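To narrow down whether it's actually the n8n database eating the space, a few diagnostic commands (a sketch, assuming the default SQLite setup and the `${DATA_FOLDER}` volume mount from the compose file above):

```shell
# Overall disk usage on the droplet
df -h /

# Size of the n8n data directory (set DATA_FOLDER to your .env value)
du -sh "${DATA_FOLDER}/.n8n"

# The SQLite database file, where execution data lives by default
ls -lh "${DATA_FOLDER}/.n8n/database.sqlite"
```

If the database file is small but the disk is still filling up, the culprit is something else on the host (logs, old images, etc.).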


Hi @Jon – thanks for the answer.
My n8n instance runs on a DigitalOcean droplet with 25 GB storage, and it is the only thing there… so running out of space was surprising :smile:

I will decrease the max_age and we’ll see.
Just to be sure – by “restarting the service” you mean restarting the Docker container. Is that correct?

Hey @honzapav,

Just a docker restart {container_name} would be enough.

Thank you!

Hello, it works now.

There was also a “side problem”. After the service restart, the droplet storage usage dropped to about 53 %, which was suspicious. I used the du -h command to inspect disk usage and found many unused Docker images (about 7 GB). I cleaned them up with docker system prune, and now I'm at 15 % storage usage.
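For anyone following along, the cleanup described above looks roughly like this. Note that the `-a` variant is more aggressive than a plain prune, so it helps to check `docker system df` first:

```shell
# See how much space images, containers, volumes, and build cache use
docker system df

# Remove stopped containers, dangling images, and unused networks
docker system prune

# More aggressive: also remove ALL images not used by a running container
# docker system prune -a
```

Old image layers pile up every time a `docker pull` brings in a newer n8n image, which is likely where the 7 GB came from.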

So it might be helpful for someone.

Thanks both @Shirobachi and @Jon for patience! :+1:


Hey @honzapav,

That is good to hear. I always forget about docker system prune.