How to shrink massive database.sqlite-wal file?

I’m running n8n self-hosted in Docker and am looking for help shrinking the database.sqlite-wal file.

Long story short, my n8n database grew to a few GB in size before I learned about the EXECUTIONS_DATA_PRUNE, EXECUTIONS_DATA_MAX_AGE, EXECUTIONS_DATA_PRUNE_MAX_COUNT, and DB_SQLITE_VACUUM_ON_STARTUP env vars. After a restart the database.sqlite file is down to only a few MB; however, more than a week later I am still stuck with a database.sqlite-wal file that is massive (5 GB+). How can I shrink this file? Thanks.
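For anyone finding this later, the pruning vars above can be set on the container like this. The env var names are from the n8n docs; the values here are just illustrative (EXECUTIONS_DATA_MAX_AGE is in hours):

```shell
# Example pruning config for a self-hosted n8n container (values illustrative).
docker run -d --name n8n \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -e EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000 \
  -e DB_SQLITE_VACUUM_ON_STARTUP=true \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```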

Hey, you can perform a manual checkpoint to move the contents of the WAL file back into the main DB file (the SQLite WAL documentation explains how checkpoints work in detail):

BACK UP THE SQLITE DIRECTORY FIRST!!!
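A quick way to take that backup, assuming the data directory path from below (stop the container first so the .sqlite, -wal, and -shm files are all consistent):

```shell
# Stop n8n first: docker stop n8n
# Archive the whole data dir, including the -wal and -shm sidecar files.
tar czf n8n-backup-$(date +%F).tar.gz -C /path/to/your/n8n data
# Restart when done: docker start n8n
```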

cd /path/to/your/n8n/data
sqlite3 database.sqlite

Inside the sqlite3 shell (note VACUUM is a plain SQL statement, not a pragma):

PRAGMA wal_checkpoint(TRUNCATE);
VACUUM;
.quit
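If you’d rather not open an interactive shell, the same two statements can be run in one go:

```shell
# Checkpoint the WAL back into the main DB file and truncate it,
# then VACUUM to reclaim free pages in the main file.
sqlite3 database.sqlite "PRAGMA wal_checkpoint(TRUNCATE); VACUUM;"
```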

Hope this helps, feel free to mark it as Solution if it did :slight_smile:

Thanks. What makes this a PITA is that the n8n docker image doesn’t contain the sqlite3 binary.

I had to stop the n8n container, then mount the data dir to another container with sqlite:

docker run --rm -it -v "./n8narr/data:/workspace" -w /workspace keinos/sqlite3 sqlite3 /workspace/database.sqlite

I was then able to execute the commands, and the database.sqlite-wal file is down to 0 bytes!
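For anyone checking their own instance, the before/after is easy to see, since the sidecar files live next to the main database file:

```shell
# Compare sizes before and after the checkpoint; the glob also catches
# the -wal and -shm sidecars if they exist.
ls -lh database.sqlite*
```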

The question, though, is why n8n doesn’t do this automatically (especially with DB_SQLITE_VACUUM_ON_STARTUP=true). Is this a bug, or just sloppy database management?

Good one with the docker workaround!

Looking through the code, I’m pretty sure n8n uses SQLite’s default SQLITE_DEFAULT_WAL_AUTOCHECKPOINT, which is 1000 pages (roughly 4 MB). The fact that your WAL grew far beyond this means the automatic checkpoints are failing to complete, probably because n8n has been up for a long time with active workflows and no idle window in which a checkpoint can run to completion. A checkpoint might succeed on its own if you stop all read/write activity, assuming nothing else is wrong.
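If you want to verify the threshold on your own database, the pragma reports it in pages when called without an argument (note it’s a per-connection setting, so changing it from a one-off sqlite3 call won’t persist for n8n’s own connection):

```shell
# Report the auto-checkpoint threshold: default is 1000 pages,
# which is about 4 MB at the default 4 KB page size.
sqlite3 database.sqlite "PRAGMA wal_autocheckpoint;"
```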

Anyway, for production it’s not really recommended to use SQLite: the risk of database corruption is too high, so most of the time switching to Postgres is the way to go.
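Switching is mostly a matter of env vars. The variable names are from the n8n docs; the host and credentials below are placeholders for whatever Postgres instance you point it at:

```shell
# Point n8n at an external Postgres instead of the bundled SQLite file.
docker run -d --name n8n \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres \
  -e DB_POSTGRESDB_PORT=5432 \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=n8n \
  -e DB_POSTGRESDB_PASSWORD=change-me \
  n8nio/n8n
```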

Btw, would appreciate if you mark my previous response as Solution if possible :slight_smile:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.