SQLite cleanup (prune + vacuum)

Hey folks.

Describe the problem/error/question

My SQLite database grew to 17 GB, and I'm trying to fix it.
I added the following to docker-compose.yml:
- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
- EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
- EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_MAX_AGE=72
- DB_SQLITE_VACUUM_ON_STARTUP=true
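
(These go under the service's environment: block in docker-compose.yml. A quick way to confirm Compose actually picks them up, run from the directory containing the compose file, is something like:)

```
# Dump the resolved Compose config and check the new variables are present.
docker compose config | grep -E 'EXECUTIONS_DATA|DB_SQLITE'
```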

After this, nothing works: I cannot access the n8n interface.

Docker log:
Initializing n8n process
n8n ready on 0.0.0.0, port 5678
QueryFailedError: SQLITE_FULL: database or disk is full

Error: Exiting due to an error.
QueryFailedError: SQLITE_FULL: database or disk is full
User settings loaded from: /home/node/.n8n/config

Without Docker started, disk usage looks like this:

Filesystem Size Used Avail Use% Mounted on
/dev/root 29G 23G 6.7G 77% /

And after starting Docker, it looks like this:

Filesystem Size Used Avail Use% Mounted on
/dev/root 29G 28G 1.8G 95% /

I assumed this amount of available disk space would be enough to rebuild the DB, but it turned out it wasn't:
the VACUUM process easily eats up 5+ GB and then crashes.

Researching the topic, here is what I found: “This means that when VACUUMing a database, as much as twice the size of the original database file is required in free disk space.”
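
So the practical takeaway seems to be: only enable DB_SQLITE_VACUUM_ON_STARTUP when free space is at least roughly the size of database.sqlite, ideally twice that. A rough check (the path assumes the default /home/node/.n8n location from the log above; on the host, substitute whatever directory or volume you mount there):

```
# VACUUM can need up to ~2x the database size in extra free space.
DB=/home/node/.n8n/database.sqlite    # adjust to your bind mount / volume path
du -h "$DB"                           # current database size
df -h "$(dirname "$DB")"              # free space on the filesystem holding it
```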

When I turn off DB_SQLITE_VACUUM_ON_STARTUP, everything starts and works normally.

Question 1: if I export workflows/creds, remove the DB file, and then import the workflows/creds again, will the webhook URLs be deleted and replaced with new ones? I suppose they won't: I checked the exported workflows and see that the URLs are stored in the workflow JSON. But I want to double-check, because this is critical.

Question 2: if I switch to Postgres, will the webhook URLs be preserved?

Question 3: is downloading database.sqlite and trying to VACUUM it locally a good way to solve this? (It looks like it will take 5+ hours to download.)

Information on your n8n setup

  • n8n version: 1.49
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Compose
  • Operating system: Ubuntu 22.04 LTS

Hey @AlGryt,

Welcome to the community :tada:

The tricky bit here is that you need to make sure there is enough free space for it to work on the database as well, which you just don't have. If it were me, I would export all of the workflows and credentials, delete the database, then import them again (there is a rough CLI sketch after the answers below).

  1. URLs will be the same when you import the workflows again, or at least they should be, as you found in your own check. Just make sure you also make a note of your encryption key.

  2. Much like the first one: the data is in the JSON, and when you import that JSON the URLs should remain the same.

  3. You could solve it this way, but only if you know what you are doing, and there is a chance you could make things worse.
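
Roughly what that export / delete / import cycle could look like with the n8n CLI. The flags are from memory so double-check them with --help first, and the n8n service name plus the /home/node/.n8n paths are just the defaults from the docs, adjust them to your setup:

```
# 1) With the instance running (vacuum on startup off), export everything
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/workflows.json
docker compose exec n8n n8n export:credentials --all --decrypted --output=/home/node/.n8n/credentials.json

# 2) Copy the exports out, then stop n8n and move the old database away
#    (the encryption key lives in /home/node/.n8n/config, which stays if you only remove database.sqlite)
docker compose cp n8n:/home/node/.n8n/workflows.json .
docker compose cp n8n:/home/node/.n8n/credentials.json .
docker compose stop n8n
mv /path/to/your/n8n-data/database.sqlite /path/to/your/n8n-data/database.sqlite.bak

# 3) Start again with a fresh database and import everything back
docker compose up -d n8n
docker compose cp workflows.json n8n:/home/node/.n8n/workflows.json
docker compose cp credentials.json n8n:/home/node/.n8n/credentials.json
docker compose exec n8n n8n import:workflow --input=/home/node/.n8n/workflows.json
docker compose exec n8n n8n import:credentials --input=/home/node/.n8n/credentials.json
```

Keeping the .bak copy means you can always put the old database back if something turns out to be missing after the import.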

Hi Jon. Thanks for the reply, I will start with option 1 then. To move creds between instances I export them unencrypted, so usually that works OK.

I also downloaded the DB file, cleaned out the EXECUTION_DATA table, and ran VACUUM with sqlite3.exe; the DB file shrank to 4 MB (roughly as in the sketch below). I haven't tried to start a container with it yet; I have to wait till Sunday to stop the server and do this maintenance.
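
For reference, this is roughly what I ran against the local copy (table names are as I found them in my copy of the DB, so it is worth confirming them with .tables first):

```
sqlite3 database.sqlite ".tables"                      # confirm the table names first
sqlite3 database.sqlite "DELETE FROM execution_data;"  # the bulky execution payloads
sqlite3 database.sqlite "VACUUM;"                      # rebuild the file to reclaim the space
```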

I have a suggestion to update the guide here: Docker Compose | n8n Docs

1/ ‘version’ is deprecated, and ‘name’ is needed now.
2/ Add those DB maintenance rows to the proposed ‘docker-compose.yml’; this would probably decrease the number of similar questions.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.