Server Crash when viewing Execution History

We’ve deployed n8n against an Aurora Postgres database. Whenever we view the running executions in the n8n interface, the query times out and the application crashes.

Application tracing shows no error message. I suspect the result of the query that returns executions is too large. Is there a way to limit the number of returned rows?

  • **n8n version:** 0.151.0
  • **Database you’re using (default: SQLite):** Aurora Serverless (Postgres)
  • **Running n8n with the execution process [own(default), main]:** main
  • **Running n8n via [Docker, npm, n8n.cloud, desktop app]:** Kubernetes

Welcome to the community @ajbot!

The number of returned rows is already limited, so the returned data being too large should not be the problem. I have never heard of this issue before, so I suspect Aurora Serverless (Postgres) itself may be the cause. I would try replacing it with a regular Postgres database or SQLite.

Thanks, I will try that.

After doing more digging, the execution_entity table was 151 GB and the “count” query was timing out. It’s similar to the issue reported here: Executions list doesn't scale · Issue #1578 · n8n-io/n8n · GitHub

The easiest route for us was to truncate the table until we can get around to migrating to a newer Postgres version. Would it be possible to request a feature that limits the number of days execution history is stored?
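For reference, a minimal sketch of that cleanup, assuming direct `psql` access to the n8n database (the `DATABASE_URL` connection string is a placeholder; `execution_entity` is the table name from the n8n Postgres schema):

```shell
# Stop n8n first so no new executions are written during the cleanup.
# TRUNCATE reclaims the table's space immediately, unlike DELETE, but
# it drops ALL execution history -- take a backup if you need any of it.
psql "$DATABASE_URL" -c 'TRUNCATE TABLE execution_entity;'
```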

That is already possible with the environment variables `EXECUTIONS_DATA_PRUNE` and `EXECUTIONS_DATA_MAX_AGE`.
You can find more information in the documentation here:
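For example, a configuration sketch that enables pruning; note that `EXECUTIONS_DATA_MAX_AGE` is specified in hours, so 336 keeps roughly 14 days of history (check the documentation for the default in your version):

```shell
# Enable automatic pruning of old execution data.
export EXECUTIONS_DATA_PRUNE=true
# Maximum age of finished executions, in hours (336 h = 14 days).
export EXECUTIONS_DATA_MAX_AGE=336
```

In a Kubernetes deployment these would go into the container's environment (e.g. the pod spec's `env` section) rather than a shell profile.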
