I recently discovered that my database.sqlite file had grown to something like 24 GB, and after looking it up I found that I had to set the environment variable
DB_SQLITE_VACUUM_ON_STARTUP to true. I did so, and unfortunately got this error:
n8n | n8n ready on 0.0.0.0, port 5678
n8n | Version: 0.112.0
n8n | (node:7) UnhandledPromiseRejectionWarning: QueryFailedError: SQLITE_FULL: database or disk is full
n8n | at new QueryFailedError (/usr/local/lib/node_modules/n8n/node_modules/typeorm/error/QueryFailedError.js:11:28)
n8n | at Statement.handler (/usr/local/lib/node_modules/n8n/node_modules/typeorm/driver/sqlite/SqliteQueryRunner.js:76:38)
n8n | at Statement.replacement (/usr/local/lib/node_modules/n8n/node_modules/sqlite3/lib/trace.js:25:27)
n8n | (Use `node --trace-warnings ...` to show where the warning was created)
n8n | (node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
n8n | (node:7) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
So I resolved this by temporarily resizing the volume space allocated to the VPS I am using. The error disappeared, but the file size didn’t change after a long start-up period, which I assume was due to VACUUM. If it’s important, I am using Docker and starting everything up with docker-compose.
I’m wondering what exactly I’m doing wrong, and whether there’s some manual way of removing the logs that have caused the database to balloon to this size. I’ve tried deleting workflows I no longer use and disabling the saving of successful executions (some of them were cron jobs that ran every 1 or 5 minutes), but this appears to have had no effect. I’m not exactly an expert on SQLite either, but I should be able to run a few basic commands given some hints.
You can follow the instructions in this topic: Hard disk full - where to delete logs?
As I’ve already mentioned, I’ve set the environment variable for vacuum to trigger automatically at start-up, and it appears to have done nothing.
It might be worth manually running the SQLite VACUUM command to see if it triggers any errors. Did you also see the prune and max-age environment variables, or just the vacuum option?
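For reference, the variables in question would look roughly like this in a docker-compose file (a sketch only: the service name `n8n` and the 336-hour value are just examples, not values from this thread):

```yaml
services:
  n8n:
    environment:
      - DB_SQLITE_VACUUM_ON_STARTUP=true
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336   # delete executions older than 336 hours
```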
If you delete something in SQLite (like an execution), the space on disk is not actually freed up directly. The file size of the SQLite file will never go down by itself. So if you delete 1k executions, the file size stays the same, but you can now run another 1k executions (assuming they are roughly the same size) before the file grows further, as SQLite will reuse that space.
What VACUUM does is “give back” the space of the already-deleted data.
I assume your problem is that you did not delete any data (you deactivated the saving of further successful runs, but you probably did not delete the ones already saved; at least you did not mention it). VACUUM by itself will not do anything unless you actually delete data first that it can then give back. So what you have to do is delete the executions first. You can do that manually, or, as @jon mentioned, it is probably best to use the prune functionality (the variables are listed here). That will delete all executions older than X hours. So best to set that and start n8n, which should run the initial prune. Then shut it down and restart, so that the vacuum can do its thing.
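This delete-then-vacuum behaviour can be verified with a throwaway database: deleting rows leaves the file size untouched, and only VACUUM shrinks the file. A minimal sketch using Python's stdlib sqlite3 module (nothing n8n-specific; the table name just mirrors this thread):

```python
# Demonstrates: DELETE does not shrink a SQLite file; VACUUM does.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE execution_entity (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany(
    "INSERT INTO execution_entity (data) VALUES (?)",
    [("x" * 1024,) for _ in range(5000)],  # ~5 MB of fake execution data
)
conn.commit()
size_full = os.path.getsize(path)

conn.execute("DELETE FROM execution_entity")
conn.commit()
size_after_delete = os.path.getsize(path)  # unchanged: pages only go to the freelist

conn.execute("VACUUM")                     # rebuilds the file, returning the space
size_after_vacuum = os.path.getsize(path)
conn.close()

print(size_after_delete == size_full)      # file did not shrink on DELETE
print(size_after_vacuum < size_full)       # file shrank after VACUUM
```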
Hope that helps!
So previously I only had DB_SQLITE_VACUUM_ON_STARTUP set to true. So I tried setting EXECUTIONS_DATA_MAX_AGE, and shut down and started up the container (twice, in fact), and my database.sqlite file still remained at 24 GB. There were no errors listed in the docker logs, so unfortunately no clues there.
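One quick way to check whether the prune actually deleted anything is to count the rows directly. A sketch using Python's stdlib sqlite3 module (run with n8n stopped; the path is whatever your docker volume maps database.sqlite to):

```python
import sqlite3

def count_executions(db_path: str) -> int:
    """Return the number of rows in n8n's execution_entity table."""
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM execution_entity").fetchone()
        return count
    finally:
        conn.close()

# Usage: count_executions("database.sqlite")
```

If this number does not drop after a restart with the prune variables set, the prune never ran and VACUUM has nothing to give back.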
Afterwards, as suggested, I tried manually running VACUUM. I did a little digging and found this issue: Missing VACUUM of SQLite3 database after deleting execution log · Issue #819 · n8n-io/n8n · GitHub
After installing sqlite3, I opened the database.sqlite file and ran the following commands listed in the issue:
sqlite> .tables
credentials_entity  execution_entity  migrations  workflow_entity
sqlite> select count(*) from credentials_entity;
sqlite> select count(*) from execution_entity;
sqlite> select count(*) from migrations;
sqlite> select count(*) from workflow_entity;
Now my database is much smaller, down to 1 GB thankfully.
-rw-r--r-- 1 thewiggles thewiggles 1065758720 Aug 17 12:01 database.sqlite
Not exactly sure what happened, but at least those entries were marked for deletion, and VACUUM did work when run manually. Hopefully, with the environment variables set and the saving of some executions disabled, the database won’t balloon to such an absurd size in the future. Thanks for the clues/support!
Thank you for sharing your insights. Interesting that the auto vacuum did not work properly.
I will investigate it further and see if something can be fixed.
Anyway, happy to know your issue was solved!
I have just had a similar problem and managed to resolve it.
Doing the above commands and a vacuum still did not reduce the size of the database for some reason, so I tried the following:
DELETE FROM execution_entity;
This reduced my database.sqlite from 16 GB to 108 KB! It feels like a nuclear option, but everything appears to be working.
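For anyone who would rather script this than type it into the sqlite3 shell, a hedged sketch of the same nuclear option using Python's stdlib sqlite3 module (stop n8n first; the path is an assumption, and a backup copy is made before anything is deleted):

```python
import shutil
import sqlite3

def wipe_executions(db_path: str) -> None:
    """Back up db_path, delete every saved execution, then VACUUM to shrink the file."""
    shutil.copy(db_path, db_path + ".bak")  # keep a backup, just in case
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("DELETE FROM execution_entity")
        conn.commit()
        conn.execute("VACUUM")  # without this the file size will not shrink
    finally:
        conn.close()

# Usage (with n8n stopped): wipe_executions("database.sqlite")
```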
@jppanchaud that should not be a problem for your database.
Recently @ivov created a PR that should have fixed the vacuum issues; it is included since version 0.135.3 (released on Aug. 27).
I hope this works for you all now.