5 days of work lost even with backups?

Describe the problem/error/question

While editing a workflow in n8n, I received an error: “You can’t save, because you don’t own this workflow. Ask the author for permission.” Shortly after, I discovered all workflows had reverted to their state from August 31. Execution history and credentials created since then were also missing.

Steps Taken

  1. Verified the issue in a new browser tab.

  2. Restored ~/.n8n from a 9/2/2025 backup on my TrueNAS server (I run hourly backups).

  3. Restarted n8n (roughly the sequence I ran is sketched below).
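
For reference, roughly the sequence I used for steps 2 and 3 (a sketch only; the backup mount path is illustrative, and I’m assuming n8n is stopped and started via systemd here, so adjust for however you launch it):

```python
import pathlib, shutil, subprocess

home = pathlib.Path.home()
backup = pathlib.Path("/mnt/truenas/2025-09-02/.n8n")   # illustrative path to the restored snapshot

subprocess.run(["systemctl", "stop", "n8n"], check=True)   # nothing should write to the DB mid-restore

# Put the whole directory back in one piece: database.sqlite, any database.sqlite-journal
# or -wal/-shm files, and the config file holding the credential encryption key belong together.
shutil.rmtree(home / ".n8n")
shutil.copytree(backup, home / ".n8n")

subprocess.run(["systemctl", "start", "n8n"], check=True)
```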

Actual Result

  • Workflows, executions, and credentials are still missing/reverted to older versions.

  • Data between Aug 31 and Sept 6 is unrecoverable, even after restore.

Expected Result

Restoring from backup should have returned the workflows, execution history, and credentials as of 9/2/2025.

Request

  • Am I missing steps in the restore process?

  • Is there a specific directory or database that must be restored in addition to ~/.n8n?

  • Since I’m preparing to deploy n8n at work for multiple groups, I need to understand the correct backup/restore procedure to avoid data loss in production.

What is the error message (if any)?

“You can’t save, because you don’t own this workflow. Ask the author for permission.”

Information on your n8n setup

  • n8n version: 1.109.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): NPM 11.6.0
  • Operating system: Debian 12 Bookworm

I pulled the SQLite file and there was a journal file with it, so I suspect this is an SQLite issue. I tried the normal recovery methods, but was unsuccessful.
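
For what it’s worth, this is roughly how I inspected the recovered file (a sketch, assuming the default ~/.n8n/database.sqlite name and n8n’s workflow_entity table; the database and its journal are copied together so SQLite can roll the interrupted transaction back on open):

```python
import pathlib, shutil, sqlite3

src = pathlib.Path.home() / ".n8n"
work = pathlib.Path("/tmp/n8n-db-check")
work.mkdir(exist_ok=True)

# The -journal (or -wal/-shm) files must stay next to database.sqlite;
# without them SQLite cannot finish or roll back the interrupted transaction.
for name in ("database.sqlite", "database.sqlite-journal", "database.sqlite-wal", "database.sqlite-shm"):
    if (src / name).exists():
        shutil.copy(src / name, work / name)

con = sqlite3.connect(work / "database.sqlite")   # a normal open lets SQLite run journal recovery
print(con.execute("PRAGMA integrity_check;").fetchone())                # expect ('ok',)
print(con.execute("SELECT COUNT(*) FROM workflow_entity;").fetchone())  # workflow count; table name per my reading of the schema
con.close()
```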

I had issues with workflows reverting to default, which I also think was an SQLite issue; I moved to Postgres and back up my workflows to GitHub daily.
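
The daily GitHub backup is just n8n’s CLI export dropped into a local clone and pushed; a rough sketch (the repo path is a placeholder, and I’m assuming the export:workflow --backup flag, which writes one pretty-printed JSON file per workflow):

```python
import datetime, pathlib, subprocess

repo = pathlib.Path.home() / "n8n-backup"   # placeholder: local clone of the GitHub repo
out = repo / "workflows"
out.mkdir(parents=True, exist_ok=True)

# Dump every workflow to its own JSON file so git diffs stay readable.
subprocess.run(["n8n", "export:workflow", "--backup", f"--output={out}"], check=True)

stamp = datetime.date.today().isoformat()
subprocess.run(["git", "-C", str(repo), "add", "-A"], check=True)
subprocess.run(["git", "-C", str(repo), "commit", "-m", f"n8n workflow backup {stamp}"])  # no check: commit is a no-op when nothing changed
subprocess.run(["git", "-C", str(repo), "push"], check=True)
```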

I’ve also seen a few recommendations to use Postgres in production for its robustness.
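
For context, pointing n8n at Postgres is done through environment variables rather than code; a minimal sketch of starting it that way (host, database name, and credentials below are placeholders):

```python
import os, subprocess

env = dict(os.environ)
env.update({
    "DB_TYPE": "postgresdb",                # switch n8n from SQLite to Postgres
    "DB_POSTGRESDB_HOST": "localhost",      # placeholder connection details
    "DB_POSTGRESDB_PORT": "5432",
    "DB_POSTGRESDB_DATABASE": "n8n",
    "DB_POSTGRESDB_USER": "n8n",
    "DB_POSTGRESDB_PASSWORD": "change-me",
})
subprocess.run(["n8n", "start"], env=env, check=True)
```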

Thanks. I moved to Postgres and created a workflow to write the workflows to disk every hour, right before my OS backup starts. I am hoping that will prevent these issues in the future. However, I’m now a bit worried. My team finally got approval to start using n8n to see if it could resolve some business needs, and I am installing it on-prem right now. I hope it proves more reliable with Postgres, as I don’t think the project would survive if n8n just decides to jump off a cliff at work.

You can also check this to back up credentials:
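
In case it helps, a rough sketch of a CLI-based credentials export (assuming n8n’s export:credentials command; the files stay encrypted with the key from ~/.n8n/config unless --decrypted is passed, so back that key up as well):

```python
import pathlib, subprocess

out = pathlib.Path.home() / "n8n-backup" / "credentials"   # placeholder destination
out.mkdir(parents=True, exist_ok=True)

# One JSON file per credential; secrets remain encrypted with the instance’s encryption key.
subprocess.run(["n8n", "export:credentials", "--backup", f"--output={out}"], check=True)
```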

It only happened to me twice, and both times it was during an upgrade or a backup. This time around I believe the GitHub backup was the issue: I saw a spike in CPU and something went wrong, so I now have a loop that staggers the node backups rather than running them all at once. If it happens to you, review carefully what happened during that window, including the executions and whatever else was running on the server.