I have an n8n cloud subscription, but I set up a self-hosted version so I could change the process mode and have it respond to Twitter's CRC check within 3 seconds.
Now, I'm not an expert on DigitalOcean and all that, but I'd like to free up some storage. It is currently using 7 GB, and I don't believe it needs to be that much. How can I clear this up?
What I've done so far:
- `docker system prune` (no result)
- changed the Docker file and added several parameters (keep executions for 72 hours, enable data pruning, and vacuum the database; a sketch of these settings is below this list)
- rebooted several times (the system itself as well as `docker compose down`/`up`)
- tried to delete the executions (56k of them), but it gives me a 502 error (too many to delete?)
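Roughly, the settings I added look like this in environment-variable form (a sketch; the variable names are the ones documented for n8n, and the exact values in my setup may differ slightly):

```
# Environment variables for the n8n container (e.g. in an .env file or in the
# "environment" section of the compose file). Values are illustrative.
EXECUTIONS_DATA_PRUNE=true          # automatically prune old execution data
EXECUTIONS_DATA_MAX_AGE=72          # keep execution data for 72 hours
DB_SQLITE_VACUUM_ON_STARTUP=true    # compact the SQLite database when n8n starts
```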
There seems to be no change in disk usage. Is there anything else I can do? Can I just pick a folder and delete it?
Thanks!
Information on your n8n setup
- **n8n version:** 0.194.0
- **Database you're using (default: SQLite):** no idea, I use Docker
- **Running n8n with the execution process [own(default), main]:** main
- **Running n8n via [Docker, npm, n8n.cloud, desktop app]:** Docker
I would maybe start with updating your n8n version. Have you also checked where the data actually is, just to confirm it's the database and not some older Docker images hanging around?
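A few commands that should show where the space is going (a sketch; assumes a standard Docker setup on the droplet, and the container name `n8n` is a placeholder for whatever `docker ps` shows):

```
docker system df                              # space used by images, containers, and volumes
sudo du -sh /var/lib/docker                   # total size of Docker's data directory
docker exec -it n8n du -sh /home/node/.n8n    # size of the n8n data directory (the SQLite database lives here)
```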
@bees8 as Jon suggested, please update your n8n version to the latest, and then follow these steps to clear out the old executions:
1. Create a simple workflow with just a Manual Trigger.
2. Execute this workflow. If you are on the latest version of n8n, older executions will be pruned at the end of this execution. (We are working on moving execution pruning to a scheduled task, so this should be automatic in the future.)
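To check whether this actually freed space in the database, you can look at the size of the SQLite file inside the container (a sketch; assumes the default SQLite database, the official image's `/home/node/.n8n` data path, and a container named `n8n`):

```
# Show the current size of the n8n SQLite database file.
# Note: SQLite only returns disk space to the OS after a VACUUM, so with
# DB_SQLITE_VACUUM_ON_STARTUP enabled the file shrinks on the next restart,
# not immediately after the executions are deleted.
docker exec -it n8n ls -lh /home/node/.n8n/database.sqlite
```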
Thanks @Jon @netroy
I have updated n8n and followed the steps from netroy - this worked!
When logging in over SSH, it still tells me I'm using the same amount of disk space. I'll keep an eye on whether it updates later on, or maybe something else is going on.
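In case it stays the same, checking whether the remaining space is the database or leftover Docker images (as Jon suggested) could look roughly like this (a sketch; note that `docker image prune -a` removes all unused images, not just dangling ones, so double-check before running it):

```
df -h                   # overall disk usage on the droplet
docker system df        # how much of it Docker is holding
docker image prune -a   # remove all unused images (more aggressive than `docker system prune`)
```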
Thanks!