Hi all, I am new to n8n. I have self-hosted n8n via Docker on my server, and I have built many webhooks and tool-calling agents that give me specific results via the chat API. I have many data tables for user-data caching, and my n8n environment is running in production.
For backups, I have been pushing the entire image to my GitLab registry. Then I run a script on my server that makes a backup .tar of my n8n volume data and posts it to my Slack channel.
Day by day my volume data is growing very large, even though I have cleared all the execution logs.
Can anyone help me set up a proper backup structure for n8n in production? Am I doing it correctly, or do you have any suggestions?
Hi @seldhos, welcome! I recommend going with a Hostinger VPS: they back up everything for you for around $6 a month, which would save you a lot of headache with server-side scripts. It is cheap, friendly to use, and has this backup feature built in.
You can develop an n8n workflow that would:
Run on a schedule (e.g., every 6 hours)
Fetch all the workflows using the n8n API node
Compare them with the current JSON files in your GitLab repo
Only commit when changes are detected (this avoids noisy commits)
Save each workflow as a versioned JSON file (see the sketch after this list)
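If you want to prototype that logic outside n8n first, here is a minimal shell sketch of the same idea. It assumes a hypothetical setup: the public API enabled on an instance at localhost:5678, an API key in N8N_API_KEY, and a local clone of your GitLab repo at ./n8n-workflows. The /api/v1/workflows endpoint and X-N8N-API-KEY header are n8n's public REST API.

#!/usr/bin/env bash
# Minimal sketch: dump each workflow to its own JSON file, commit only on changes.
# Requires curl and jq; pagination via nextCursor is omitted for brevity.
set -euo pipefail

API="http://localhost:5678/api/v1"   # your n8n instance (assumption)
REPO_DIR="./n8n-workflows"           # local clone of the GitLab repo (assumption)

cd "$REPO_DIR"

# Fetch all workflows via the public API and write one pretty-printed file each
curl -sf -H "X-N8N-API-KEY: $N8N_API_KEY" "$API/workflows" \
  | jq -c '.data[]' \
  | while read -r wf; do
      id=$(jq -r '.id' <<<"$wf")
      jq '.' <<<"$wf" > "workflow_${id}.json"
    done

# Stage and commit only if something actually changed (avoids noisy commits)
git add -- '*.json'
if ! git diff --cached --quiet; then
  git commit -m "n8n workflow backup $(date +%F_%H%M)"
  git push
fi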
Add these environment variables to your docker-compose.yml:
environment:
# Save only what matters
- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
- EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
# Auto-prune old executions
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_MAX_AGE=168 # 7 days
# CRITICAL for SQLite: Actually shrink the DB file
- DB_SQLITE_VACUUM_ON_STARTUP=true
With your workflows in Git and the DB pruned, your volume backups will be much smaller:
# Stop the container first, then back up only the mounted volume
tar -czf n8n_backup_$(date +%Y%m%d).tar.gz /path/to/n8n-data/
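For completeness, restoring is the reverse. This sketch assumes GNU tar (which strips the leading / when creating the archive, so extracting with -C / puts files back at the original path) and a container named n8n, both of which you should adjust to your setup:

# Restore: stop the container, unpack over the volume path, start again
docker stop n8n
tar -xzf n8n_backup_20250101.tar.gz -C /   # use your archive's actual date
docker start n8n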
Hey, welcome! The growing volume is almost certainly SQLite not freeing space even after you delete executions; add DB_SQLITE_VACUUM_ON_STARTUP=true to your Docker env and that should help a ton. Also, instead of tarring the whole volume, I would use n8n export:workflow --all and n8n export:credentials --all from the CLI, which is way lighter than a full volume backup. Long term, I'd honestly move to PostgreSQL for production.
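If n8n runs in Docker, here is a sketch of those exports; the container name n8n is an assumption, so adjust it to yours:

# Run n8n's built-in export commands inside the running container
docker exec n8n n8n export:workflow --all --output=/tmp/workflows.json
# Credentials are exported encrypted with your instance's encryption key
# unless you add --decrypted (keep decrypted exports out of Git!)
docker exec n8n n8n export:credentials --all --output=/tmp/credentials.json
# Copy the exports out of the container
docker cp n8n:/tmp/workflows.json .
docker cp n8n:/tmp/credentials.json .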