Can't VACUUM database. Disk Full

Describe the issue/error/question

My disk (60 GB) is almost full. After a bit of research I found that the SQLite database is over 50 GB.
I added the prune executions variables, but reading this community I learned that to actually free the disk space I need to perform a VACUUM. I tried, but it couldn't be done because there isn't enough disk space left for it.
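For reference, the variables I set look something like this (example values from my docker environment, yours may differ):

    # prune old execution data automatically (example values)
    EXECUTIONS_DATA_PRUNE=true
    EXECUTIONS_DATA_MAX_AGE=168   # keep roughly 168 hours (7 days) of executions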

I’m kinda afraid that the disk may reach full capacity and block executions or something.

Is there a way to execute a VACUUM with the disk almost full? I’m not an expert and I'm afraid of breaking everything trying to solve it myself…

What is the error message (if any)?

Please share the workflow

(Select the nodes and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow respectively)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 0.213.0
  • Database you’re using (default: SQLite): SQLite
  • Running n8n with the execution process [own(default), main]: own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: docker

Hey @Fernando_Arata,

Sadly, the way VACUUM works is to make a copy of the data and then rebuild the database. You can find more information about this process here: VACUUM
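If you want a rough idea of how much space the rebuilt copy will need and how much a VACUUM could give back, you can check a few pragmas against the file first (assuming the sqlite3 CLI is available):

    sqlite3 database.sqlite "PRAGMA page_size; PRAGMA page_count; PRAGMA freelist_count;"
    # page_size * page_count                     ≈ current file size
    # page_size * freelist_count                 ≈ free space inside the file that VACUUM would reclaim
    # page_size * (page_count - freelist_count)  ≈ free disk space the rebuilt copy will roughly need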

One thing you could do, if you still have some space, would be to use the CLI to export your workflows and credentials. Then you can delete or rename the current database, start n8n up again, and import the workflows and credentials, which will put them into a new database.
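Roughly along these lines, assuming a container called n8n and the data directory bind-mounted to ~/.n8n on the host (both are just examples, adjust to your setup):

    # export everything while n8n is still running
    docker exec -u node n8n n8n export:workflow --all --output=/home/node/.n8n/workflows.json
    docker exec -u node n8n n8n export:credentials --all --output=/home/node/.n8n/credentials.json

    # stop n8n and move the old database out of the way so a fresh one is created on start
    docker stop n8n
    mv ~/.n8n/database.sqlite ~/.n8n/database-old.sqlite
    docker start n8n

    # import the workflows and credentials into the new database
    docker exec -u node n8n n8n import:workflow --input=/home/node/.n8n/workflows.json
    docker exec -u node n8n n8n import:credentials --input=/home/node/.n8n/credentials.json

Because the encryption key (the config file in the data directory) stays in place, the exported credentials can be imported as they are.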

Assuming all is good from that point, you can then delete the old database and you should be good to go. If you did want to vacuum the current database, you could look at downloading it, doing the vacuum locally, then uploading it again.

Doing an external VACUUM, is it ok to just replace the database file once it's done?
Regarding the filename, is there anything else I should look for?

Hey @Fernando_Arata,

Yeah, you can just replace it once finished, so I would stop n8n, do the vacuum, replace the file, then start it up again. The file will be called database.sqlite and will be somewhere in your data path, depending on what options you set for your container.
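Something like this as a sequence, again assuming the data directory is bind-mounted to ~/.n8n and the container is called n8n:

    docker stop n8n
    # copy the database to somewhere with enough free space and vacuum it there
    cp ~/.n8n/database.sqlite /path/with/space/database.sqlite
    sqlite3 /path/with/space/database.sqlite "VACUUM;"
    # keep the original around until you are sure the new file works
    mv ~/.n8n/database.sqlite ~/.n8n/database-old.sqlite
    cp /path/with/space/database.sqlite ~/.n8n/database.sqlite
    docker start n8n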

Great!
Will try tonight.

Much appreciated @Jon


Found out that the “executions_entity” table was corrupted.

My database grew to 45 GB after some huge manual data executions in a workflow. Some executions froze in my browser and probably in the application itself. I think one of them got stuck in a loop and just kept growing until the file size made it crash. That crash corrupted the table (n8n was still working fine).

TLDR: I duplicated the original database excluding the “executions_entity” table and exported the corrupted table without its data (just columns, indexes, etc.). The file size went from 45 GB to 4 MB (!!!). I renamed the old database to “database-old.sqlite” and uploaded the new one with the correct name. Worked like a charm. I lost the executions and some workflow modifications made yesterday, but nothing to worry about.
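For anyone finding this later, roughly what I did, sketched with the sqlite3 CLI (the table names are from my instance, list yours with .tables first):

    # see which tables exist
    sqlite3 database-old.sqlite ".tables"

    # dump the healthy tables (schema + data); extend the list with your own tables
    sqlite3 database-old.sqlite ".dump workflow_entity credentials_entity webhook_entity tag_entity settings migrations" > healthy.sql

    # only the structure of the corrupted table, no rows
    sqlite3 database-old.sqlite ".schema executions_entity" > executions_schema.sql

    # rebuild a fresh database from both files
    sqlite3 database.sqlite < healthy.sql
    sqlite3 database.sqlite < executions_schema.sql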

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.