Schedule Node Crashed SQL3/VPS - need help preventing total nuke

Describe the problem/error/question

:: I had a Schedule Trigger node firing every 5 seconds, and the execution data bloated my SQLite database until the disk had zero free space, no room to move. Through a string of workflow/SSH errors I managed to salvage the one workflow I had spent hours stacking together, but all my credentials and history are gone.

And there was no warning at all. I woke up from a nap and even my VPS logins were suffering (no room left to store IP pins). If this had happened, say, two months into development, I'd be crushed. I might as well have had a virus eat all my work.

Is there a way to prevent and/or recover from major issues like this? I hate having to nuke everything.

Thanks for any suggestions.

What is the error message (if any)?

:: N/A

Please share your workflow

:: I can’t at the moment, everything was erased.

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi, Soupking.

  1. Reduce Trigger Frequency:

Action: Adjust the Schedule Trigger interval to at least 1 minute (go shorter only if truly critical).

// Example Schedule Trigger settings (interval in minutes instead of seconds)
"schedule": { "interval": { "minutes": 5 }, "timezone": ".../..." }
  2. Enable Automatic Data Pruning:
    Add these variables to docker-compose.yml
environment:
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=72  # Data kept for 72h (adjust as needed)
  - DB_SQLITE_VACUUM_ON_STARTUP=true  # Reduces SQLite file size

Restart the container

docker-compose down && docker-compose up -d
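The same settings work outside Docker, since n8n reads them from the process environment. A minimal sketch for an npm-based install (the values mirror the compose file above; export them in the shell, or in a systemd unit, before launching n8n):

```shell
# Sketch for a non-Docker (npm) install: n8n reads the same settings
# from the environment. Export them before running `n8n start`.
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=72           # keep execution data for 72 hours
export DB_SQLITE_VACUUM_ON_STARTUP=true     # compact the SQLite file at startup
```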
  3. Monitor Disk Space:
    Add a monitoring script to your server via cron
# Check free space hourly
0 * * * * df -h /path/to/docker/volume | mail -s "Disk Monitoring" [email protected]
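Mailing the df output every hour means hourly noise whether or not anything is wrong. A threshold-based variant (a sketch; the 85% cutoff is an assumption, adjust for your setup) only alerts when the disk is actually filling:

```shell
#!/bin/bash
# check_disk.sh - alert only when root filesystem usage crosses a threshold.
# THRESHOLD and the recipient address are assumptions; adjust as needed.
THRESHOLD=85
USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  # `|| true` keeps cron quiet if mail is not installed
  echo "Disk usage at ${USAGE}% on $(hostname)" \
    | mail -s "Disk alert" [email protected] 2>/dev/null || true
fi
```

Scheduled via cron it replaces the hourly mail: `0 * * * * /path/to/check_disk.sh`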
  4. Automate SQLite Backups:
    Create a script (backup_n8n.sh)
#!/bin/bash
docker exec -u root YOUR_CONTAINER_ID sqlite3 /home/node/.n8n/database.sqlite ".backup /backups/n8n_$(date +\%Y\%m\%d).sqlite"

Schedule via cron (daily)

0 2 * * * /path/to/backup_n8n.sh
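Daily dumps accumulate forever and can themselves fill the disk. A small rotation script (a sketch; the 7-day window is an assumption, and the /backups path matches the backup script above) deletes old copies:

```shell
#!/bin/bash
# rotate_backups.sh - delete SQLite backups older than 7 days.
# BACKUP_DIR defaults to the /backups path used by backup_n8n.sh above.
BACKUP_DIR="${BACKUP_DIR:-/backups}"
if [ -d "$BACKUP_DIR" ]; then
  find "$BACKUP_DIR" -name 'n8n_*.sqlite' -mtime +7 -print -delete
fi
```

Scheduled shortly after the backup job, e.g. `30 2 * * * /path/to/rotate_backups.sh`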

Restoration (Emergency Steps)

  1. Free Disk Space
# Identify the largest directories (sort by human-readable size)
sudo du -h / 2>/dev/null | sort -rh | head -20
# Delete temp files/logs (caution advised!)
sudo find /var/log -type f -name "*.log*" -exec rm -f {} \;
  2. Restore Database (if backup exists)
docker cp /backups/n8n_20250423.sqlite YOUR_CONTAINER_ID:/home/node/.n8n/database.sqlite
docker restart YOUR_CONTAINER_ID
  3. Rebuild Environment (no backup):

Start fresh: delete database.sqlite and restart n8n.
Re-enter credentials manually (there is no automatic recovery without a backup).

SQLite-Specific Fixes
Manual Database Compression

docker exec -u root YOUR_CONTAINER_ID sqlite3 /home/node/.n8n/database.sqlite "VACUUM;"
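To see what VACUUM actually buys you, here is a self-contained illustration on a scratch database (not your live one): deleting rows frees pages inside the file, but only VACUUM shrinks the file itself.

```shell
# Scratch-database demo: DELETE leaves the file large; VACUUM rewrites it.
DB=$(mktemp /tmp/vacuum_demo_XXXXXX.sqlite)
sqlite3 "$DB" "CREATE TABLE t(x);
  WITH RECURSIVE c(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM c WHERE i<5000)
  INSERT INTO t SELECT randomblob(100) FROM c;"
BEFORE=$(stat -c%s "$DB")
sqlite3 "$DB" "DELETE FROM t;"   # pages are freed, but file size is unchanged
sqlite3 "$DB" "VACUUM;"          # file is rewritten without the dead pages
AFTER=$(stat -c%s "$DB")
echo "before=${BEFORE} bytes, after=${AFTER} bytes"
rm -f "$DB"
```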

Switch to PostgreSQL for Scalability

# docker-compose.yml (minimal example)
environment:
  - DB_TYPE=postgresdb
  - DB_POSTGRESDB_HOST=postgres
  - DB_POSTGRESDB_USER=n8n
  - DB_POSTGRESDB_PASSWORD=your_password
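n8n does not migrate data between databases automatically, but its CLI can export workflows and credentials from the SQLite instance and import them once DB_TYPE points at Postgres. A sketch that writes such a two-phase helper script (the script name and file paths are illustrative assumptions):

```shell
# Writes a hypothetical two-phase migration helper using the n8n CLI.
# Phase 1 runs against the old SQLite instance; phase 2 runs after
# switching DB_TYPE=postgresdb. Paths and the script name are assumptions.
cat > migrate_n8n.sh <<'EOF'
#!/bin/bash
# Phase 1: export from the SQLite-backed instance
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --decrypted --output=credentials.json
# Phase 2: import into the Postgres-backed instance
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json
EOF
chmod +x migrate_n8n.sh
```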

Post-Restoration Flow
1. Reduce triggers → 2. Enable pruning/vacuum → 3. Monitor disk → 4. Daily backups.
Impact: removes most of the risk of disk saturation.

I hope I have helped in some way.

Big hug.


Ah! This is excellent. Thank you so much!

I’m not currently running Docker and I don’t know what PostgreSQL is. But that’s okay, I’m brand new to SQLite3 too.

I’m small potatoes (PSDTOPNG, Google Drive, etc.), so I’ve been doing all my server/services setup manually without Docker. It stemmed from doing things locally at first: Docker just wouldn’t run for some reason, and debugging everything ChatGPT-assisted was becoming too…redundant (sparing the vulgarity lol).

I’ll most definitely look into these preventative measures and updating to PostgreSQL since I’m brand new and don’t have to worry about any real migration issues.

I’m also going to have to arrange some kind of timestamp mirroring of sorts as well just to make sure that jumping ship isn’t a tragedy.

Thanks again, Interss!! :smile:

Hello, I kindly ask you to mark my previous post as the solution (the blue box with a check mark) so that this ongoing discussion doesn’t distract others who want to find the answer to the original question. Thanks!

You bet, thanks!


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.