n8n going crazy


We have been using n8n in a Docker container for 8-9 months. I reserved 2 CPUs and 4 GB of RAM just for n8n.

For several weeks, we have been encountering a problem where the n8n container “goes crazy”: both CPUs pegged at 100% and 98% of the RAM in use.

The only fix is to restart the container, and even after a restart it sometimes climbs right back to 100% of its allocated resources.
Obviously, all systems relying on n8n webhooks are blocked in the meantime.

After 8-9 months, the n8n PostgreSQL database has grown to 1.3 GB, with more than 356,000 rows.

Have you seen this problem before?

I was wondering if this is related to the large number of rows in the “execution_entity” table.
How can we purge this table?
Can we automatically delete all rows older than, for example, 3 months?


Look at https://docs.n8n.io/reference/configuration.html#prune-data to auto-prune data.
You can also disable saving this data when a workflow runs successfully: https://docs.n8n.io/reference/configuration.html#execution-data-error-success
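For reference, those options can be passed to the container as environment variables. A sketch of a `docker run` invocation, assuming the official `n8nio/n8n` image; the values here are examples (2160h ≈ 3 months), not recommendations:

```shell
# Sketch: enable pruning and skip storing successful execution data.
# Adjust the retention (in hours) to your own needs.
docker run -d --name n8n \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=2160 \
  -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \
  -p 5678:5678 \
  n8nio/n8n
```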


Don't forget to run the SQLite VACUUM command afterwards, otherwise the file size will not actually shrink.

Thanks a lot @Damian_K, but that does not apply here, as @vanitom is using Postgres and not SQLite as the database.

Ah yes, I completely read over that part.

Thank you all for your help.
Thanks Lublak, I had not seen this documentation page. I will make the modifications to test it all.

Reading the documentation, I don’t really see the difference between the EXECUTIONS_DATA_MAX_AGE and EXECUTIONS_DATA_PRUNE_TIMEOUT settings.

Do you have more information?

Thank you

Yes, we probably really should improve the documentation.

EXECUTIONS_DATA_MAX_AGE defines after how many hours past executions get deleted. The default is 672h (so 28 days).
EXECUTIONS_DATA_PRUNE_TIMEOUT defines the maximum interval between prune runs. The default is 3600 seconds (so 1 hour).

Meaning if you set EXECUTIONS_DATA_PRUNE=true, it will by default check roughly every hour whether there are any executions older than 14 days and delete them.


Thanks for those details. It is clearer :slight_smile:

I activated these options last night, but my database is still just as large.
It seems the purge was not applied to the data that existed before the option change.

Is it possible to manually delete rows from the “execution_entity” table that are older than 15 days? If so, how?
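For a one-off cleanup, something like the following should work. This is a sketch: it assumes the `execution_entity` table stores the start time in a `"startedAt"` timestamp column, so verify the column name against your actual schema, and back up the database before running it:

```sql
-- One-off purge: delete executions started more than 15 days ago.
-- Column name "startedAt" is an assumption; check your schema first.
DELETE FROM execution_entity
WHERE "startedAt" < NOW() - INTERVAL '15 days';
```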

When you say your database is still large, do you mean the actual file size, or does it still have the over 350k executions saved?

The “execution_entity” table still contains 350,000 rows.

However, since the configuration change, I only see the most recent errors.
It seems the new configuration works going forward, but the purge does not handle the old records the way I expected.

I have checked that the env vars were passed to the n8n Docker container.

The row count does seem to be decreasing: we are down to 110k rows. It’s better :slight_smile:
On the other hand, the size of the database has not changed. That may be a PostgreSQL matter.
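(To check whether the remaining size is table bloat rather than rows, Postgres can report the table's total on-disk footprint. A sketch, using the standard admin functions:)

```sql
-- Total on-disk size of the executions table, including indexes and TOAST.
SELECT pg_size_pretty(pg_total_relation_size('execution_entity'));
```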

The oldest rows are from 25/01/2021, and I used the default value of EXECUTIONS_DATA_MAX_AGE = 672.

Can I do an SQL query to remove the extra rows?

Sorry, my calculation was wrong: 672h is actually 28 days, not 14. So if the oldest rows are from the 25th of January, that sounds correct (without doing the exact math now).

About the size: it seems Postgres may also need a manual VACUUM to actually release the disk space.

You would have to do that manually; there is nothing built into n8n to do it for you.
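A minimal sketch of that manual step. Note that VACUUM FULL rewrites the table and holds an exclusive lock on it while it runs, so it is best done during a quiet period:

```sql
-- Rewrite the table to return unused space to the operating system.
-- VACUUM FULL locks the table exclusively for the duration.
VACUUM FULL execution_entity;
```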

Thank you for your help.
I ran the VACUUM FULL command, and the database shrank from 1.3 GB to 470 MB.


Great to hear that it helped. Have fun!

No problem :slight_smile:
If I can help, I will help.
