{"code":503,"message":"Database is not ready!"}

Hey, can someone please explain why I get this error: {"code":503,"message":"Database is not ready!"}

I get it a lot, around 2-3 times a day, mostly when running multiple workflows at the same time.

I upgraded my server, but I'm still getting the issue. I thought the server might be overloaded, but apparently it isn't.

Would be great to get some insights. Thank you!

Your database connection is getting overwhelmed when multiple workflows run simultaneously.

Quick fixes:

  1. If using SQLite → Switch to PostgreSQL. SQLite allows only one writer at a time, so it can't handle concurrent writes properly.

  2. If using PostgreSQL → Increase your connection pool:

     DB_POSTGRESDB_POOL_SIZE=20

  3. In either case, enable execution pruning to prevent database bloat:

     EXECUTIONS_DATA_PRUNE=true
     EXECUTIONS_DATA_MAX_AGE=168
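
For reference, a sketch of how the combined settings might look in one `.env` file for a docker-compose setup (assuming your compose file loads it; the host, user, and password values below are placeholders, not anything from your instance):

```shell
# n8n database settings (sketch; adjust placeholders to your environment)
DB_TYPE=postgresdb                # switch n8n from SQLite to Postgres
DB_POSTGRESDB_HOST=postgres       # placeholder service name
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=change-me  # placeholder
DB_POSTGRESDB_POOL_SIZE=20        # larger pool for concurrent workflows

# keep the executions table from growing unbounded
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168       # hours (7 days)
```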

Upgrading server resources alone won’t fix this — it’s a database connection/concurrency issue, not a CPU/RAM problem.
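
The single-writer limit is easy to reproduce outside n8n. Here's a minimal sketch with the `sqlite3` CLI (`demo.db` is a throwaway file, nothing n8n-specific): one process holds the write lock while a second one tries to write.

```shell
# Create a throwaway database with one table.
sqlite3 demo.db "CREATE TABLE IF NOT EXISTS t (x INTEGER);"

# Writer 1: open a write transaction and hold the lock for ~2 seconds.
printf 'BEGIN IMMEDIATE;\nINSERT INTO t VALUES (1);\n.shell sleep 2\nCOMMIT;\n' \
  | sqlite3 demo.db &

sleep 1  # let writer 1 acquire the lock

# Writer 2: rejected while the lock is held ("database is locked").
sqlite3 demo.db "INSERT INTO t VALUES (2);" 2>&1 | grep -i locked

wait  # writer 1 finishes and commits its single row
```

With a busy timeout of 0 (the CLI default), the second writer fails instead of waiting, which is essentially what n8n runs into under concurrent executions.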

Hey, thanks for the explanation – makes sense in theory :+1:

Just to give some context on what we actually tried and where things went wrong:

We are currently using SQLite. Based on similar advice, we first tried to optimize SQLite via environment variables (execution pruning, reducing stored execution data, etc.) rather than switching to Postgres.

After adding the SQLite-related env vars and restarting via docker compose down / up, n8n became completely inaccessible and only returned
{"code":503,"message":"Database is not ready!"}.

Logs showed repeated:

Database connection timed out

At that point, even removing the env vars again didn’t help – n8n still wouldn’t start properly.

What finally worked was:

  • stopping n8n

  • temporarily renaming the SQLite files (database.sqlite, -wal, -shm)

  • starting n8n (which created a fresh DB)

  • then stopping it again and restoring the original SQLite files

After that, n8n booted normally again and all workflows/credentials were back.
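
For anyone else hitting this, the recovery steps above roughly translate to the following sketch (the service name `n8n` and the data directory path are assumptions about your compose setup; adjust both, and back everything up first):

```shell
# 1. Stop n8n.
docker compose stop n8n

# 2. Move the SQLite files aside (path is an assumption; use your mounted volume).
cd /path/to/your/n8n-data
for f in database.sqlite database.sqlite-wal database.sqlite-shm; do
  [ -f "$f" ] && mv "$f" "$f.bak"
done

# 3. Start n8n so it creates a fresh database, then stop it again.
docker compose up -d n8n
docker compose stop n8n

# 4. Remove the fresh files and restore the originals.
rm -f database.sqlite database.sqlite-wal database.sqlite-shm
for f in database.sqlite database.sqlite-wal database.sqlite-shm; do
  [ -f "$f.bak" ] && mv "$f.bak" "$f"
done

# 5. Start n8n again with the original database.
docker compose up -d n8n
```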

So two questions where I’d really appreciate your input:

  1. Is this a known behavior that SQLite can get into a “locked / unrecoverable” state just from config changes + concurrent executions?

  2. Is there a safe, documented way to apply SQLite-related env changes (pruning, pool size, etc.) without risking a full DB lock like this?

I’m aware Postgres is the long-term solution, but for now I’d like to make SQLite as stable as possible without breaking the instance again.

Thanks a lot for any insights :folded_hands: