Workflow & Database Error

Hello,
I have a problem understanding workflows. I get an error saying that my SQLite database is full. I have located it at n8n/database.sqlite.
I have a Raspberry Pi 3 and I run n8n on it. The database.sqlite file is somewhere around 24 GB. I have already found some similar issues on this forum, but my problem is that I don't understand how to resolve them. I outsourced a programmer to create these workflows for me, but I have no way of contacting that person anymore. I have now started to explore where the issue comes from and how to resolve it, but I need someone to help me step by step, if that's possible :slight_smile:

I want to clear the data in this database so the workflows can run in the background. I also have a VNC server installed, so someone can connect manually while I watch. I want to learn how to do this by myself as well.
Thanks!!

Hey @Leon,

This one comes up a bunch. What you need to do is set the environment options that clear out the old data and run the vacuum option. This post covers a lot of it: Why is my database so big?

To make life easier you can turn off the execution logging and only keep the logs for failed runs, or export the workflows you have and move to an external database like Postgres.
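If you do go the Postgres route, the switch is driven by environment variables along these lines (a rough sketch; the host, database name, and credentials below are placeholders you would replace with your own):

    export DB_TYPE=postgresdb
    export DB_POSTGRESDB_HOST=localhost      # placeholder: your Postgres host
    export DB_POSTGRESDB_PORT=5432
    export DB_POSTGRESDB_DATABASE=n8n        # the database must already exist
    export DB_POSTGRESDB_USER=n8n            # placeholder credentials
    export DB_POSTGRESDB_PASSWORD=change-me
    n8n start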


So in the same folder as the database.sqlite file I created a .env file and put in:
DB_SQLITE_VACUUM_ON_STARTUP=true
EXECUTIONS_DATA_PRUNE=true
Then I started the n8n process and it worked, but the database is still 23.1 GB in size.
Can you maybe help, because I am lost lmao. I have an email IMAP node which checks for new e-mails, and another workflow that runs every 5 minutes.
Thanks in advance :smiley:

Hey @Leon,

Try adding the options below; they are what I use on my test instance, which uses the SQLite option. It will only enable logging for errored workflows and workflows in progress, and it will only keep the last 30 days, as no one really needs much more than that :smiley:

- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
- EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
- EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_MAX_AGE=30
- DB_SQLITE_VACUUM_ON_STARTUP=true
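In case you do not run n8n through docker-compose, the same options can be set as plain shell exports before starting n8n (a quick sketch, assuming a bash shell):

    # set the variables in the shell, then start n8n from that same shell
    export EXECUTIONS_DATA_SAVE_ON_ERROR=all
    export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
    export EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
    export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
    export EXECUTIONS_DATA_PRUNE=true
    export EXECUTIONS_DATA_MAX_AGE=30
    export DB_SQLITE_VACUUM_ON_STARTUP=true
    n8n start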


Ok, so I did this like in the photo → the .env file is in the same folder as database.sqlite.
But the n8n process I start with a command from the terminal is not in that folder. So after I typed these options into the text file and restarted the workflow, the database still has the same size, hmmm.

LMK if you see where I failed :frowning:

The .env file will not help you if you do not run n8n with Docker. Also, in that case it should not be in the ~/.n8n folder, but rather in the folder that contains your “docker-compose.yml”, the one you run the “docker-compose up -d” command in.

How do you start n8n right now? Because depending on that, you will need to set the environment variables very differently.
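For reference, a minimal docker-compose.yml carrying those options would look roughly like this (a sketch only; the image tag, port, and volume path are assumptions you would adapt to your setup):

    version: "3"
    services:
      n8n:
        image: n8nio/n8n              # official n8n image
        restart: always               # restart n8n automatically if it crashes
        ports:
          - "5678:5678"               # n8n's default port
        environment:
          - EXECUTIONS_DATA_PRUNE=true
          - EXECUTIONS_DATA_MAX_AGE=30
          - DB_SQLITE_VACUUM_ON_STARTUP=true
        volumes:
          - ~/.n8n:/home/node/.n8n    # keep the database on the host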

I open a terminal, type in /npm-global/lib/node_modules/n8n/bin/n8n start, and it starts all workflows and I can open n8n in the browser. That's all I have to do.

Like I said, I got my RPi from that programmer I outsourced, and he told me that's all I have to do. But now I see he did a lousy job, so I am starting to figure it out by myself.
I will look for that Docker file now.


Managed to find the file. The path is shown in the browser section on the right side.
And the path for starting n8n is in my reply above.

Should I edit the Docker file → environment part with all the options that @Jon posted?
Thanks

Yes, that is for sure not the best way to run n8n. I would recommend using docker-compose. That will then also ensure that n8n automatically restarts if it crashes. Additionally, upgrading to a new version becomes very simple.

An alternative could also be using pm2, in case using Docker is not possible. A blog post about that can be found here.
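Roughly, the pm2 route comes down to something like this (a sketch; it assumes n8n is already installed globally via npm and available on your PATH):

    npm install pm2 -g    # install the pm2 process manager
    pm2 start n8n         # run n8n under pm2 so it is restarted if it crashes
    pm2 startup           # print the command that makes pm2 start on boot
    pm2 save              # save the current process list for restore on boot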

There is sadly much more work involved. You have to install Docker and docker-compose first, change that file according to your needs (for some reason the one you show uses MongoDB, which has not been supported for a long time), change the rest of the .env file, and then you can start n8n. But even then it will still not be perfect, as most trigger nodes would require their own domain with SSL, and so an actual web server.


Hey again. So I decided to install Docker and docker-compose with the help of internet tutorials. However, I cannot install them due to a lack of space on my RPi, because the database.sqlite file takes up too much room. I have installed DB Browser so I can access that file and manually delete all the data from the executions_entity table, but when I select it, DB Browser crashes. I tried deleting all executions via localhost in Chromium, where I can see my workflows, but that didn't do anything to my database size; now, though, DB Browser doesn't crash anymore when I browse the data in the app…
Can you lead me through?

Edit: I made it. After clearing the executions manually from localhost, I opened the database with DB Browser, went over to the Execute SQL tab, and typed VACUUM;
That cleared out all 23 GB.
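For anyone finding this thread later: the same cleanup can be done from a terminal with the sqlite3 command-line tool (a sketch; stop n8n first, and note I am assuming the executions table is named execution_entity, which may differ between n8n versions):

    # stop n8n before touching the database file directly
    sqlite3 ~/.n8n/database.sqlite "DELETE FROM execution_entity;"
    # VACUUM rewrites the file, which is what actually frees the disk space
    sqlite3 ~/.n8n/database.sqlite "VACUUM;"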

I understand this probably isn’t a permanent solution but I wanted to let you know :slight_smile:
Looking forward to learning more about n8n.


The options I sent over before should help keep the SQLite database fairly small for you going forward.

I am aware, but unfortunately I don't have the knowledge yet to implement all those settings. So I am sharing my solution for the moment, and I will learn how to actually apply the options and everything you told me. I am a newbie.

Thanks for the help, Jon. I will mark your answer as the solution as well, since I have found on many other topics that what you posted is the actual solution.