I kept seeing the same pattern across servers and homelabs:
Backups “succeeded”… until the day you actually needed them.
Cron ran.
Emails were sent.
The storage was reachable.
But the data was incomplete, interrupted, or silently corrupted.
So I stopped trying to monitor jobs and started monitoring evidence.
n8n Templates - Monitor backup and sync logs with Google Cloud Storage, GitHub, Gmail, OpenAI, and GLPI
The core idea
Instead of asking:
Did the backup run?
the workflow asks:
Is there verifiable proof that the backup completed correctly?
The workflow never connects to servers, never runs rsync/rclone, and never trusts schedulers.
It only validates deterministic execution logs.
High level flow:
- Backup jobs run externally (rsync or rclone)
- Each job produces exactly one structured log
- Logs are uploaded to object storage
- n8n checks presence + completion markers
- Alerts only when evidence is missing or invalid
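A compatible structured log might look like this (timestamps and key=value fields are illustrative assumptions; only the markers themselves matter):

```
2025-01-15T02:00:01Z START job=web01-etc host=web01
2025-01-15T02:41:12Z TRANSFER_END files=10432 errors=0
2025-01-15T02:41:12Z SUMMARY bytes=5368709120 duration=2471s
2025-01-15T02:41:13Z END status=OK
```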
This turns backup monitoring into an audit problem, not an availability problem.
Why cron/email monitoring fails
Typical monitoring checks:
- process exit code
- cron execution
- mail output
- SSH reachability
All of these confirm the scheduler worked — not the backup.
A killed rsync after 40 minutes?
A network stall?
A partial transfer?
Cron still reports success.
So the workflow enforces a strict contract:
START → TRANSFER_END → SUMMARY → END
If END is missing → the backup is considered failed.
No guessing. No heuristics. No false green.
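The contract check itself is a few lines of logic. A minimal sketch in Python (the marker names come from the contract above; the line format and function name are assumptions, not the template's actual implementation):

```python
# The four markers every valid log must contain, in this exact order.
REQUIRED = ("START", "TRANSFER_END", "SUMMARY", "END")

def backup_completed(log_text: str) -> bool:
    """True only if every marker appears exactly once, in contract order.

    Markers are matched as whole whitespace-separated tokens, so a
    TRANSFER_END line does not accidentally satisfy the END check.
    """
    found = []
    for line in log_text.splitlines():
        tokens = line.split()
        for marker in REQUIRED:
            if marker in tokens:
                found.append(marker)
    return found == list(REQUIRED)
```

A truncated log (killed rsync, network stall) never writes END, so it fails the check deterministically; there is no "probably fine" state.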
Architecture (decoupled by design)
The monitoring layer is intentionally blind to infrastructure.
Servers → generate logs
Storage → source of truth
n8n → validator
n8n does not:
- SSH into machines
- execute backups
- require network reachability to hosts
This means you can monitor:
- air-gapped environments
- customer servers
- untrusted machines
- NAT-only networks
The workflow becomes a centralized verification engine.
What makes it interesting in n8n
n8n is not used as a scheduler here — it acts as a state validator.
The workflow:
- builds the expected job list
- calculates the daily prefix
- validates log existence
- optionally parses failures
- opens a ticket / sends alert
So n8n becomes closer to a compliance system than an automation tool.
Use cases where this actually helped
- storage box mounted but stuck
- rclone running but retrying forever
- rsync killed by OOM
- jobs running on the wrong host
- partial backups due to quota exhaustion
All reported “success” by cron.
All detected by log validation.
Why not just parse exit codes remotely?
Because monitoring should not depend on:
- SSH connectivity
- machine availability
- firewall rules
- credential lifetimes
Monitoring must survive infrastructure failure.
Logs in object storage become the neutral ground truth.
What you get
- workflow.json ready to import
- rsync/rclone templates generating compatible logs
- setup documentation
Designed to be boringly deterministic.
If you’ve ever discovered a broken backup weeks later, you already know why this exists.