I recently fixed the SQLite error `SQLITE_FULL: database or disk is full`. Btw, shoutout to @Jon and @AlGryt, because only this method worked for me: SQLite cleanup (prune + vacuum).
After the fix I’ve added the following values to my docker-compose.yml file to prevent similar situations in the future.
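Roughly, these are n8n's execution-pruning and SQLite vacuum variables in the service's environment block (the exact numbers below are just examples, tune them to how much execution history you need to keep):

```yaml
services:
  n8n:
    # ...existing image/volumes/ports config...
    environment:
      - EXECUTIONS_DATA_PRUNE=true             # automatically delete old execution data
      - EXECUTIONS_DATA_MAX_AGE=168            # keep executions for at most 168 hours (7 days)
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000  # hard cap on the number of stored executions
      - DB_SQLITE_VACUUM_ON_STARTUP=true       # reclaim freed pages in the SQLite file at startup
```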
Anyway, while going through my server I noticed two files/folders of about 5GB each, which does not seem normal, but I’m reluctant to just delete them.
Perhaps someone knows whether they are needed and, if they are not, can suggest how to delete them safely and how to prevent something like this from happening again?
After having an extensive conversation with Claude, here is what the AI told me.
"I found that while `df -h` showed both overlay mounts as 5.4GB each, this was misleading - it was showing the total filesystem size available to Docker, not what each container actually uses.
When I inspected the containers using:
```bash
docker inspect <container-name> | grep -A 3 MergedDir
```
I could identify exactly which overlay directory belonged to which container. Then using `du -h` on those directories, I discovered my n8n container actually uses 1.1GB (mostly Node.js modules) and Traefik only 184MB.
The `docker system df -v` command revealed more details - my total Docker usage is 4.9GB, with 2.677GB in the `n8n_data` volume and 2.4GB in `overlay2`.
So those large overlay directories are completely normal and just part of how Docker’s storage works - as you said, they’re what the containers see as their filesystems."
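In case it helps anyone else checking the same thing, here is roughly how those measurements can be reproduced. `<container-name>` and the overlay path are placeholders for your own container and for whatever directory the inspect command prints:

```bash
# Find the overlay2 "merged" directory backing a specific container (overlay2 storage driver)
docker inspect <container-name> --format '{{ .GraphDriver.Data.MergedDir }}'

# Measure the real on-disk size of that directory (use the path printed above)
sudo du -sh /var/lib/docker/overlay2/<layer-id>/merged

# Break down Docker's overall usage: images, containers, local volumes, build cache
docker system df -v
```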
What’s concerning me is that when I take a snapshot of the server, the total size is 10+GB, which seems a bit unusual to me, to be honest.
It’s not a huge issue right now, but I want to prevent it from becoming one in the future.
This depends on your workflows and how much data you are dealing with.
I think that `EXECUTIONS_DATA_SAVE_ON_PROGRESS` saves the data of each node of an executed workflow. Dealing with binary files or many items in a single workflow could cause this issue. If you don’t need it, just try removing it or setting it to false.
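For example, in a docker-compose setup that could look like this (as far as I know, false is also the default if you remove the variable entirely):

```yaml
services:
  n8n:
    environment:
      # don't persist per-node progress while a workflow is still running
      - EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
```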