Reducing the size of the PostgreSQL database

Describe the problem/error/question

I recently noticed that the PostgreSQL database of our production n8n installation has grown to over 30 GB. It looks like we have over 2,500,000 execution data records. I added the following variables to our docker-compose.yml and restarted n8n with “docker-compose up -d”. EXECUTIONS_DATA_MAX_AGE is in hours, so 336 should prune everything older than 14 days:

      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
      #- EXECUTIONS_DATA_PRUNE_MAX_COUNT=1400000

Unfortunately, it looks like nothing happens to the executions data.
I set up a separate n8n test system, and the pruning works there. But I could only test with a couple of hundred execution data records, not millions.
Does anyone have an idea how I could reduce the database size in a proper manner? Or perhaps I’m missing something?
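
For reference, this is roughly how I checked the numbers directly in Postgres (pg_size_pretty, pg_database_size and pg_total_relation_size are built-in Postgres functions; execution_entity is the table n8n stores its executions in):

SELECT pg_size_pretty(pg_database_size('n8n_db'));                 -- overall database size
SELECT count(*) FROM execution_entity;                             -- number of stored executions
SELECT pg_size_pretty(pg_total_relation_size('execution_entity')); -- executions table incl. indexes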

Information on your n8n setup

  • n8n version: 0.236.3
  • Database: PostgreSQL 14
  • Running n8n via: docker compose
  • Operating system: Debian 11

Here is the complete docker-compose.yml I’m currently using. Sorry, it’s a bit messy because it grew alongside my experience with n8n. I enjoy using n8n by the way : )

version: "3.3"
services:
  db:
    image: postgres:14
    volumes:
      - /opt/n8n/postgres/db:/var/lib/postgresql/data
      - /opt/n8n/log:/var/lib/postgresql/log
      #- /etc/timezone:/etc/timezone:ro
      #- /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=n8n_db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=xxx
    #networks:
      #- db
  web:
    image: n8nio/n8n:0.236.3
    #image: n8nio/n8n:0.218.0
    volumes:
      - /opt/n8n/n8n:/home/node/.n8n
      - /opt/n8n/backup/auto_n8n_workflow_exports:/data
      - /opt/n8n/log:/home/node/log
      #- /etc/timezone:/etc/timezone:ro
      #- /etc/localtime:/etc/localtime:ro
    ports:
      - "5678:5678"
    environment:
      - TZ=Europe/Berlin
      - NODE_OPTIONS=--max-old-space-size=3000
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=postgres
      #- DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=xxx
      - N8N_HOST=n8n.xxx.com
      - N8N_EMAIL_MODE=smtp
      - N8N_SMTP_HOST=smtp.xxx.com
      - N8N_SMTP_PORT=587
      - [email protected]
      - N8N_SMTP_PASS=xxx
      - [email protected]
      - N8N_SMTP_SSL=false
      - N8N_SSL_KEY=/home/node/.n8n/cert.key
      - N8N_SSL_CERT=/home/node/.n8n/fullchain.cer
      - N8N_PROTOCOL=https
      - N8N_LOG_LEVEL=debug
      - N8N_LOG_OUTPUT=file
      - N8N_LOG_FILE_LOCATION=/home/node/log/n8n.log
      - N8N_LOG_FILE_MAXSIZE=16
      - N8N_LOG_FILE_MAXCOUNT=50
      - NODE_TLS_REJECT_UNAUTHORIZED=0
      - GENERIC_TIMEZONE=Europe/Berlin
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
      #- EXECUTIONS_DATA_PRUNE_MAX_COUNT=1400000
      #- N8N_DEFAULT_LOCALE=en
    depends_on:
      - db
    #networks:
      #- web
#networks:
  #web:
    #external: true
  #db:
    #external: false

Thank you, and Kind Regards,
user575

Hi @user575 :wave: When you say nothing happens with the executions data, is the instance hanging on startup while trying to prune, or does it seem to just ignore the setting?

I also noticed you’re on an older version of n8n - if you try and trigger a workflow (any workflow will do), does the pruning kick off?

Hi @EmeraldHerald Thanks for your reply : )
Yes, n8n seems to ignore the setting. When I trigger a workflow, the pruning does not kick off. I suspect the pruning job is not done in small batches but as one single job, and it’s simply too much… not sure…

Hi @user575, no worries :slight_smile: It sounds like there may be far too much data to prune in one go - as a last resort, you could go in and manually delete some of the entries in the execution_entity table in order to clear out some data :thinking:
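
If a single giant DELETE turns out to be too heavy, you could also try deleting in batches. This is just a rough sketch (I’m assuming the default n8n schema here, with the quoted "stoppedAt" column, and a 14 day cutoff to match your EXECUTIONS_DATA_MAX_AGE=336; re-run it until it affects 0 rows):

DELETE FROM execution_entity
WHERE id IN (
    SELECT id
    FROM execution_entity
    WHERE "stoppedAt" < NOW() - INTERVAL '14 days'
    ORDER BY id
    LIMIT 10000  -- batch size, adjust to taste
);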

Hi @EmeraldHerald, good tip! : )
I deleted the entire content of the table, not just some of the entries. That solved the problem and reduced the size of the database from 28 GB to 739 MB. For reference, here are the commands I used:

DELETE FROM execution_entity;               -- removes all stored executions
VACUUM (VERBOSE, ANALYZE) execution_entity; -- reclaims the freed space and refreshes statistics
VACUUM (VERBOSE, ANALYZE) execution_data;   -- same for the table holding the execution payloads
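
One note for anyone finding this later: as far as I understand, a plain VACUUM only returns disk space to the operating system when the free pages sit at the end of the table, which was the case here because I had deleted everything. If you only delete part of the data, a VACUUM FULL (which rewrites the table but takes an exclusive lock, so expect downtime) is probably needed to actually shrink the files:

VACUUM FULL execution_entity;
VACUUM FULL execution_data;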

Thank You very much for your help! : )
and Kind Regards,
user575
