Database.sqlite Error: database or disk is full

Hello, the disk on my VPS is almost full and I'm trying to VACUUM the database:
781074 13700076 -rw-r--r-- 1 ubuntu ubuntu 14028853248 Sep 27 19:12 /root/n8n/.n8n/database.sqlite

but I get an error:

sqlite> VACUUM;
Error: database or disk is full

I've already requested a disk increase from my provider, but I haven't heard back and I need to solve this urgently.
Is there any way to solve this?
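
A quick way to see the situation (the path is taken from the listing above; the `-wal` / `-journal` names are standard SQLite sidecar files that can also take up space):

```shell
# Free space on the filesystem holding the database
df -h /root/n8n/.n8n

# Size of the database plus any SQLite sidecar files next to it
ls -lh /root/n8n/.n8n/database.sqlite*
```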

Why does it need disk space if I want to reduce it?

My setup uses docker-compose.

environment:
  - N8N_BASIC_AUTH_ACTIVE=true
  - N8N_BASIC_AUTH_USER
  - N8N_BASIC_AUTH_PASSWORD
  - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
  - N8N_PORT=5678
  - N8N_PROTOCOL=https
  - NODE_ENV=production
  - WEBHOOK_TUNNEL_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
  - DB_SQLITE_VACUUM_ON_STARTUP=true

Have you tried setting the data prune and data max age options, then restarting the container to see if that helps?

Yes, after posting here I tried it, and it didn't work.

environment:
  - N8N_BASIC_AUTH_ACTIVE=true
  - N8N_BASIC_AUTH_USER
  - N8N_BASIC_AUTH_PASSWORD
  - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
  - N8N_PORT=5678
  - N8N_PROTOCOL=https
  - NODE_ENV=production
  - WEBHOOK_TUNNEL_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
  - DB_SQLITE_VACUUM_ON_STARTUP=true
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=168

Do you still get the same error when trying to run the vacuum? It can take a bit of time on startup to kick in.

My general recommendation if you plan to use the SQLite option is below. It saves very little data, since storing successful execution data isn't normally needed, and 90 days of data is enough for us. The vacuum option is handy as well.

 - EXECUTIONS_DATA_SAVE_ON_ERROR=all
 - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
 - EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
 - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
 - EXECUTIONS_DATA_PRUNE=true
 - EXECUTIONS_DATA_MAX_AGE=90

I have a similar error.
This is the YAML file that causes it:

version: "3"

services:
  n8n:
      image: n8nio/n8n
      restart: always
      ports:
        - "127.0.0.1:5678:443"
      environment:
        - N8N_BASIC_AUTH_ACTIVE=true
        - N8N_BASIC_AUTH_HASH=true
        - N8N_BASIC_AUTH_USER
        - N8N_BASIC_AUTH_PASSWORD
        - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
        - N8N_PORT=443
        - N8N_PROTOCOL=https
        - NODE_ENV=production
        - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
        - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
        - NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash
        - EXECUTIONS_DATA_PRUNE=true
        - EXECUTIONS_DATA_PRUNE_TIMEOUT=604800
        - DB_SQLITE_VACUUM_ON_STARTUP=true
      volumes:
        - /home/${SERVER_USERNAME}/.n8n:/home/node/.n8n
        - /home/${SERVER_USERNAME}/n8n-local-files:/files

This config does not cause the error:

version: "3"

services:
  n8n:
      image: n8nio/n8n
      restart: always
      ports:
        - "127.0.0.1:5678:443"
      environment:
        - N8N_BASIC_AUTH_ACTIVE=true
        - N8N_BASIC_AUTH_HASH=true
        - N8N_BASIC_AUTH_USER
        - N8N_BASIC_AUTH_PASSWORD
        - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
        - N8N_PORT=443
        - N8N_PROTOCOL=https
        - NODE_ENV=production
        - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
        - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
        - NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash
        - EXECUTIONS_DATA_SAVE_ON_ERROR=all
        - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
        - EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
        - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
        - EXECUTIONS_DATA_PRUNE=true
        - EXECUTIONS_DATA_MAX_AGE=90
      volumes:
        - /home/${SERVER_USERNAME}/.n8n:/home/node/.n8n
        - /home/${SERVER_USERNAME}/n8n-local-files:/files

If I add - DB_SQLITE_VACUUM_ON_STARTUP=true, it causes the error again.

It's strange because I remember this config working fine…

Hey @Shirobachi,

Which error are you looking at there, the SQLite full one or the one where the process was stopped? The config that errors is also missing the max age option.

Adding the max age option, which I had forgotten, helped. Thanks! <3

@Shirobachi The vacuum command essentially creates a new database containing only the required data. It seems you do not have the space to do that right now, so you either have to temporarily increase the storage size (if possible) or free up space some other way so that this process can run at least once. If you then also set the other variables mentioned above, the database should hopefully never get that large again.
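
If there is genuinely no room to make that copy on the same disk, one possible workaround (a sketch, assuming SQLite 3.27+ for `VACUUM INTO`, a second disk or mounted volume with enough space at the hypothetical path `/mnt/bigdisk`, and that n8n is stopped first) is to write the compacted copy to the other filesystem and then swap it in:

```shell
# Write a compacted copy of the database to another filesystem
# (VACUUM INTO requires SQLite 3.27+; /mnt/bigdisk is a hypothetical mount)
sqlite3 /root/n8n/.n8n/database.sqlite \
  "VACUUM INTO '/mnt/bigdisk/database-compact.sqlite'"

# With n8n stopped, replace the original with the compacted copy
mv /mnt/bigdisk/database-compact.sqlite /root/n8n/.n8n/database.sqlite
```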

Oh, thanks for that, this is useful information. The db is now only 15 MB. Can you confirm that the following config will prevent the db from growing to 8 GB (like it was just now)? I'm quite sure I configured this before, but maybe I did something wrong…

Thanks in advance!

version: "3"

services:
  n8n:
      image: n8nio/n8n
      restart: always
      ports:
        - "127.0.0.1:5678:443"
      environment:
        - N8N_BASIC_AUTH_ACTIVE=true
        - N8N_BASIC_AUTH_HASH=true
        - N8N_BASIC_AUTH_USER
        - N8N_BASIC_AUTH_PASSWORD
        - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
        - N8N_PORT=443
        - N8N_PROTOCOL=https
        - NODE_ENV=production
        - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
        - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
        - NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash
        - EXECUTIONS_DATA_PRUNE=true
        - EXECUTIONS_DATA_PRUNE_TIMEOUT=604800
        - DB_SQLITE_VACUUM_ON_STARTUP=true
        - EXECUTIONS_DATA_MAX_AGE=90
      volumes:
        - /home/${SERVER_USERNAME}/.n8n:/home/node/.n8n
        - /home/${SERVER_USERNAME}/n8n-local-files:/files

Hey @Shirobachi,

Based on that config alone I can't promise anything about the database size; it all depends on what you are logging and how often your workflows run.

As an example: if you are saving all execution data over 90 days, a task that runs every minute is likely going to store far more data than a task that runs weekly or once a day. This of course also depends on how much data is being processed.
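
To put rough numbers on that, here is the count of executions retained over a 90-day window for a few schedule frequencies:

```shell
# Executions retained over 90 days, by schedule frequency
echo $(( 90 * 24 * 60 ))   # every minute -> 129600 retained executions
echo $(( 90 * 24 ))        # every hour   -> 2160 retained executions
echo 90                    # once a day   -> 90 retained executions
```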

What I do is turn off all logging apart from errors for all workflows as a default option…

 - EXECUTIONS_DATA_SAVE_ON_ERROR=all
 - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
 - EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
 - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false

I also now only use a max age of 14 as I will generally know within 2 weeks if something has failed and will have it fixed.

Annoyingly there is no magic answer; I arrived at my preferred options over the course of six months of playing with n8n, and I have found what works for me and the company I worked for previously.

Not sure if this answers your question but hopefully it will help.

It sure does, thanks, that makes sense. I'll change it like you did and see if it works for me as I expect. Thank you :slight_smile:

Is it okay to leave EXECUTIONS_DATA_PRUNE=true on permanently, or is this something you set once and then run again when needed?

You can leave it on permanently. The only negative effect it has is that the n8n startup time will be slightly longer.

Hi,
I'm also facing this every day now:
Problem running workflow SQLITE_FULL: database or disk is full

Here is my environment:
- N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
- N8N_PORT=5678
- N8N_PROTOCOL=https
- NODE_ENV=production
- WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
- GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
- EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
- EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
- EXECUTIONS_DATA_MAX_AGE=90

:grimacing: I still need to manually restart my VPS server every day.

This VPS server has the following configuration, so I doubt I'll be short of space:
CPU: 1 vCore
RAM: 0.5 GB
SSD: 10 GB

Any idea about a misconfiguration?

Hello, how do I clean up unwanted binary data, if any exists?

Hey @Comps_Cedric,

Welcome to the community :cake:

The settings look to be OK. Have you checked how big your database is and how much free space is available?
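
With a volume mapping like the ones used earlier in this thread (your host path may differ), something like this would show both:

```shell
# Size of the SQLite database on the host volume
ls -lh /home/${SERVER_USERNAME}/.n8n/database.sqlite

# Free space on the filesystem that volume lives on
df -h /home/${SERVER_USERNAME}/.n8n
```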

Hello @Jon

Yes, exactly. I have 1.5 GB of free space out of the 10 GB.

I decided to migrate to another VPS with 100 GB, to be on the safe side.

Hey @Comps_Cedric,

That sounds like it would do the job, although in theory you should be OK with just 20 GB if it is being managed.

OK, so the recommended configuration is 20 GB, good to know. Thank you.

Hey @Comps_Cedric,

It isn’t a recommendation as such :slightly_smiling_face: