Hard disk full - where to delete logs?

Hi - great app, thank you all!

My hard drive is full and I can no longer save my workflow. I’m assuming it’s the buildup of n8n logs, since everything was working well for a while. Can you please point me to where the logs are saved, so I can delete some?

If it matters, I installed using npm, not docker.

Thank you so much!

Welcome to the community @Byron and great to hear that you enjoy n8n!

Yes, it sounds like it. There are no log files; the execution data gets saved in a database, so you would have to delete it there. The easiest way to do that, and to also make sure it does not get that far again in the future, is to set some environment variables so that n8n automatically deletes old executions:

# Activates automatic data pruning
export EXECUTIONS_DATA_PRUNE=true
# Number of hours after which executions get deleted (336 h = 14 days)
export EXECUTIONS_DATA_MAX_AGE=336

If you set them and then restart n8n, it should run the first prune on startup.
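
For an npm install, a minimal sketch of that restart, assuming n8n runs in the foreground of the same shell where the variables were exported (adjust accordingly if you use a process manager such as pm2):

# with EXECUTIONS_DATA_PRUNE and EXECUTIONS_DATA_MAX_AGE exported as above,
# starting n8n again triggers the first prune on startup
n8n start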

If you run n8n with SQLite, however, you will see that the disk space does not get freed up. The reason is that SQLite sadly does not do that automatically, so you have to VACUUM the database manually. An example of how to do that can be found here:

You will find the SQLite file at this location: ~/.n8n/database.sqlite
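
A minimal sketch of the manual VACUUM, assuming the sqlite3 command-line client is installed and n8n has been stopped first (vacuuming a database that is still being written to is not a good idea):

# stop n8n, then reclaim the free pages so the file actually shrinks
sqlite3 ~/.n8n/database.sqlite "VACUUM;"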


Thanks Jan for the response!

I couldn't get the SQLite VACUUM to work, but I was able to resolve it by increasing the size of my EC2 EBS volume.

Would you generally just recommend using Postgres or some other database that doesn’t have this issue then, if the VACUUM needs to be manual anyway? Perhaps an external database?

Running with SQLite is totally fine. If you make sure that old executions get cleaned up automatically (as described above), you should not have that problem again, as n8n then simply reuses the freed-up space in the file. The only real caveat is that the file does not shrink until you run VACUUM.
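
If you do decide to move to Postgres at some point, that switch is also done via environment variables. A rough sketch with placeholder values; the DB_* variable names should be double-checked against the configuration docs for your n8n version:

# tell n8n to use Postgres instead of SQLite (replace the placeholder values)
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=my-postgres-host
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=my-password

As far as I know, existing executions in the SQLite file do not get migrated over automatically.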

@jan if running n8n on Docker, how do I make old executions get cleaned up automatically?

If you work with MySQL, the execution data is stored in a table.

Perhaps this workflow would be useful to prune the execution history from a workflow instead of restarting n8n:


Thanks a lot @Miquel_Colomer!

You can also set up n8n to automatically delete old executions as documented here:

If you are using SQLite, there is also an interesting GitHub issue here:


Actually, I just saw that it was already mentioned above. So then I guess I do not really understand the question.

Thank you. I installed n8n with the default Docker setup following the instructions at https://docs.n8n.io/#/server-setup, and the database is SQLite. How can I change to MySQL?

@jan I've installed n8n with the default Docker setup following the instructions at https://docs.n8n.io/#/server-setup, but the pruning instructions at https://docs.n8n.io/reference/configuration.html#prune-data do not seem to work. I typed these two commands on the server:
export EXECUTIONS_DATA_PRUNE=true

export EXECUTIONS_DATA_MAX_AGE=336

Then I restarted the server, but my database grows day by day, and executions older than 14 days still exist.

If you run n8n inside of Docker and set the environment variables on the host instead, that will not work; Docker handles the container's environment independently. You have to add them to the environment section of the docker-compose.yml file. So something like:

...
    environment:
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
...

That will automatically delete executions older than 336 hours. But as mentioned in the GitHub issue above, the size of the database file will not shrink unless you VACUUM it manually.
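
To apply the change, a rough sketch (run from the directory that contains the docker-compose.yml; service and volume names may differ in your setup):

# recreate the container so the new environment variables take effect
docker-compose up -d

After the prune has run, you can stop the container and run the VACUUM shown earlier against the database file inside the mounted data directory to actually shrink it.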


@jan Thank you so much. Adding the variables to the environment section of the docker-compose.yml file was the solution, and I can VACUUM my SQLite database now.

Perfect! Now it should never grow that large again.

Have fun!
