PostgreSQL RAM usage analysis

Describe the problem/error/question

Hello, I would like to share my n8n setup's resource usage from Oct 13 to Nov 1, share some insights, and ask for tips or recommended configurations.

What is the error message (if any)?

I have a queue-mode setup running on Railway with a Redis service, a PostgreSQL database, the main n8n instance, and 10 workers.

This has been the usage of the whole setup:

As you can see, PostgreSQL is by far the service using the most RAM, roughly 3x more than the 10 workers combined. I know PostgreSQL is a database with high memory usage, but how efficient is it when running with n8n?

At the beginning, I had these variables:
EXECUTIONS_DATA_PRUNE_MAX_COUNT=200000
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=672

But the RAM usage kept growing and growing, so I had to rethink the numbers and finally ended up with these values (EXECUTIONS_DATA_MAX_AGE is measured in hours, so this went from 28 days down to 7):
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168

It seems the biggest tables are “execution_data” and “execution_entity”, each of which currently has 9k+ rows.
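
For anyone who wants to check this themselves, a generic Postgres catalog query (nothing n8n-specific) lists the largest tables:

  -- List user tables ordered by total size (table + indexes + TOAST)
  SELECT relname AS table_name,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;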

I still have tons of questions about how n8n manages this data.

  • I thought the size of the tables would occupy disk, which it does, but where is the RAM usage coming from? Stored executions? Or is the usage directly proportional to database size?
  • Does n8n automatically “free” the RAM when pruning happens?
  • A week ago I deleted 50k executions from my n8n executions list using a date range of executions I no longer need. It had zero impact on the RAM usage. Why?
  • Should I run a command every few weeks or once a month to “vacuum” the database somehow? Does it only free disk space, or RAM as well?
  • Does n8n vacuum the database automatically when pruning happens?
  • What happens when the count reaches 50k: does it overwrite the oldest entry or just delete one? How does it work?
  • Is each entry in “execution_data” and “execution_entity” an execution from the dashboard? I have 50k executions, so why do I only see 9k rows in the database?

Last 7 days of PostgreSQL RAM usage:

Also, what triggers these logs? The pruning variables?

How often does this happen? Can I trigger it manually by running a command in the n8n CLI, or is there a workflow that triggers it?

Thanks!

Information on your n8n setup

  • n8n version: 1.11.2
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Railway
  • Operating system: Railway + Docker image

Hi @yukyo :wave: Maybe @krynble can help you out with this one? :slight_smile:

Hey @yukyo,

Postgres should be pretty good for running n8n with. I think your questions are more suited to the Postgres community, as they are about how Postgres itself works rather than how we actually interact with it. For example, n8n has no control over the memory usage of Postgres itself; we just connect and run standard queries.

To quickly go over your questions and add some n8n context, though (I have marked things out of scope for us as N/A)…

  1. N/A. The data will be on disk, but it could be that Postgres caches it in memory.
  2. N/A. When we prune data, it is just a standard DELETE query (see the sketch after this list).
  3. N/A. I am actually not sure why it didn’t have an impact; maybe the data is still cached or sitting in a log.
  4. N/A. A vacuum would only free disk space; a restart may help free up RAM though.
  5. No, n8n doesn’t run a vacuum on Postgres, but from what I have seen it is rarely needed.
  6. When the count hits 50k, we delete the oldest entries as new ones are added.
  7. This one is a bit odd. If you find some older ones in the n8n interface and try to load them, do they display any data? I know we introduced soft deletes a while back, so it could be related to that.
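
To give point 2 some shape: a count-based prune boils down to an ordinary DELETE. Roughly something like this (a simplified sketch rather than our exact query, and the “startedAt” column name is an assumption):

  -- Simplified sketch of a count-based prune (not the exact query).
  -- Keeps the newest 50000 executions; "startedAt" is an assumed column name.
  DELETE FROM execution_entity
  WHERE id NOT IN (
      SELECT id
      FROM execution_entity
      ORDER BY "startedAt" DESC
      LIMIT 50000
  );

Note that in Postgres the pages freed by a DELETE stay allocated to the table until a vacuum runs, which is why a large prune does not immediately shrink anything on disk.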

Do you know how Postgres is configured on Railway, and do you have any options you can tweak there? This is one of those issues where it will come down to your knowledge of the systems you are using, but I am sure we can help you find what is needed.
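
If you can connect with psql, the memory-related settings are easy to inspect regardless of how Railway provisions the instance:

  -- Show the memory-related settings Postgres is actually running with
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem',
                 'maintenance_work_mem', 'effective_cache_size');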

Hey @Jon, thanks for the help and for answering all my questions.

My dashboard says I have 105k executions, and my PostgreSQL database shows the same:

Can I safely delete them from my Postgres DB? I really don’t like manually deleting rows…

I have manually deleted all the rows from “execution_data” and “execution_entity” using DELETE FROM statements, and now the dashboard looks OK:
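
For reference, the statements were simply the following (deleting “execution_data” first, on the assumption that it holds a foreign key to “execution_entity”):

  -- Delete child rows first in case execution_data references execution_entity
  DELETE FROM execution_data;
  DELETE FROM execution_entity;

A plain DELETE leaves the disk space allocated to the tables; a later VACUUM (or VACUUM FULL, which returns the space to the OS) is what actually reclaims it.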

And RAM usage also dropped dramatically:


Hello @yukyo

RAM usage is not necessarily an indication of a fault; in fact, it’s common for software to appear to be using a lot of memory because of page caching.

That’s why it will probably keep growing steadily as long as memory is available; this is usually fine behavior, and the operating system should handle it without any problems.

Unless you start seeing errors such as transaction failures, executions crashing, or the system running out of memory, this should not be a big deal.
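
If you want to see this from the Postgres side, and assuming the pg_buffercache extension is available on your Railway instance, you can check how much of shared_buffers is actually populated:

  -- Requires the pg_buffercache extension (availability on managed
  -- Postgres varies, so treat this as optional)
  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- Buffers holding data vs. total buffers allocated
  SELECT count(*) FILTER (WHERE relfilenode IS NOT NULL) AS used_buffers,
         count(*) AS total_buffers
  FROM pg_buffercache;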

I hope this helps.
