n8n creates ever-growing files. How to delete unnecessary data via a cronjob?

Is there a way to reclaim as much space as possible via a command like docker system prune --all, i.e. something that deletes everything from the n8n logs/database that is not needed?

Currently I have an n8n instance running in Docker with 10 graphs and 4 API connections, with lots of executions (some graphs execute every minute). The total storage needed is currently 1 GB and growing by 50 MB daily.
I already deleted the execution history, but that doesn't shrink the file size.

Yes, that is sadly how SQLite works. You can find some information about it here, where people have reported the same issue:

So the file should now not grow anymore, as the free space simply gets reused. If it is important to you that the file gets smaller, you can run a VACUUM as mentioned in the posts.
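
For illustration, a minimal sketch of what that could look like, assuming your container is named n8n, the data volume is mounted at ~/.n8n on the host, and the sqlite3 CLI is installed there (all of these are assumptions about your setup):

```
# Stop n8n first so nothing writes to the database while it gets rewritten
docker stop n8n

# VACUUM rebuilds the database file, dropping the free pages left
# behind by the deleted execution data, so the file actually shrinks
sqlite3 ~/.n8n/database.sqlite "VACUUM;"

docker start n8n
```

You could also schedule exactly this via a cronjob, though with the settings below it should rarely be necessary.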

Apart from that, I would recommend using Postgres instead of SQLite for production use. It is also best to only save executions that failed instead of always saving all of them. That can be set as the default, as described here:

or on a per-workflow basis in each workflow's settings.
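
For illustration, a sketch of how both suggestions could look when starting the container (the DB_* and EXECUTIONS_DATA_* variables are documented n8n environment variables; the container name, volume path, Postgres host, and credentials are placeholders you would replace):

```
# Switch the backend to Postgres and keep only failed executions;
# additionally let n8n prune execution data older than 336 hours (14 days)
docker run -d --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=n8n \
  -e DB_POSTGRESDB_PASSWORD=changeme \
  -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \
  -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=336 \
  n8nio/n8n
```

With EXECUTIONS_DATA_PRUNE enabled, n8n should delete executions older than EXECUTIONS_DATA_MAX_AGE (in hours) on its own, so a separate cronjob for that part would not even be needed.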

Hope that helps!