Hey, thanks for the explanation – makes sense in theory
Just to give some context on what we actually tried and where things went wrong:
We're currently using SQLite. Based on similar advice, we first tried to optimize SQLite via environment variables (execution pruning, reducing stored execution data, etc.) rather than switching to Postgres.
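For context, this is roughly the shape of what we added (variable names are from the n8n docs; the values here are illustrative, not exactly what we used):

```yaml
# excerpt from docker-compose.yml -- example values only
services:
  n8n:
    environment:
      - EXECUTIONS_DATA_PRUNE=true            # enable execution pruning
      - EXECUTIONS_DATA_MAX_AGE=168           # keep ~7 days of executions
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none  # store less data per run
      - DB_SQLITE_VACUUM_ON_STARTUP=true      # compact the DB file at boot
```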
After adding the SQLite-related env vars and restarting via `docker compose down` / `docker compose up`, n8n became completely inaccessible and only returned `{"code":503,"message":"Database is not ready!"}`.
Logs showed this repeating:

```
Database connection timed out
```
At that point, even removing the env vars again didn’t help – n8n still wouldn’t start properly.
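In case it helps anyone hitting the same 503: one thing we could have done at this point is sanity-check the DB file outside n8n. A minimal sketch using Python's stdlib `sqlite3` (the path is an assumption; in the default Docker image the file lives in the `.n8n` data volume):

```python
import sqlite3

# Assumed path -- adjust to wherever your volume mounts the n8n data dir
DB_PATH = "database.sqlite"

con = sqlite3.connect(DB_PATH, timeout=5)
try:
    # integrity_check walks the whole database and returns "ok" if the
    # file itself is sound (i.e. the problem is locking, not corruption)
    status = con.execute("PRAGMA integrity_check;").fetchone()[0]
    print("integrity:", status)

    # Force the -wal sidecar to be merged back into the main file and
    # truncated; a stuck/huge WAL is a common cause of lock-type errors
    busy, wal_pages, ckpt_pages = con.execute(
        "PRAGMA wal_checkpoint(TRUNCATE);"
    ).fetchone()
    print("checkpoint blocked:", bool(busy))
finally:
    con.close()
```

If `integrity_check` says "ok" but the checkpoint reports busy, some other process (e.g. a second n8n container) still holds the file.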
What finally worked was:
1. stopping n8n
2. temporarily renaming the SQLite files (`database.sqlite`, `-wal`, `-shm`)
3. starting n8n (which created a fresh DB)
4. then stopping it again and restoring the original SQLite files
After that, n8n booted normally again and all workflows/credentials were back.
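For anyone who needs to repeat this, the rename-aside steps can be sketched like so. The filenames match what n8n/SQLite write; the directory, the `.bak` suffix, and the helper names are our own, not anything from n8n:

```python
from pathlib import Path

# WAL-mode SQLite keeps two sidecar files next to the main database
SQLITE_FILES = ("database.sqlite", "database.sqlite-wal", "database.sqlite-shm")

def set_aside(data_dir: Path, suffix: str = ".bak") -> list[Path]:
    """Rename the DB plus its -wal/-shm sidecars out of n8n's way."""
    moved = []
    for name in SQLITE_FILES:
        p = data_dir / name
        if p.exists():  # -wal/-shm may be absent after a clean shutdown
            p = p.rename(p.with_name(p.name + suffix))
            moved.append(p)
    return moved

def restore(moved: list[Path], suffix: str = ".bak") -> None:
    """Delete the fresh DB n8n created, then put the originals back."""
    for p in moved:
        original = p.with_name(p.name.removesuffix(suffix))
        if original.exists():
            original.unlink()
        p.rename(original)
```

Run `set_aside()` only while n8n is stopped; renaming the files under a live process is exactly the kind of thing that leaves the WAL inconsistent.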
So two questions where I’d really appreciate your input:
1. Is this a known behavior, that SQLite can get into a “locked / unrecoverable” state just from config changes + concurrent executions?
2. Is there a safe, documented way to apply SQLite-related env changes (pruning, pool size, etc.) without risking a full DB lock like this?
I’m aware Postgres is the long-term solution, but for now I’d like to make SQLite as stable as possible without breaking the instance again.