Postgres RAM usage and max_connections configuration

Information on your n8n setup

  • n8n version: 1.51.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker (Railway.app)
  • Operating system: Ubuntu?

First of all, I want to thank the n8n team, the community, and everyone who has helped with this amazing project. I have been having a blast using it and hope to keep using it for decades.

So my question is really simple. I have deployed n8n on railway.app with 1 main instance, 1 Postgres, 1 Redis and 1 worker. I don't have any huge workflows in it yet; they are pretty basic things like creating tickets in Zendesk, calling a Lambda function, uploading files to Slack, etc., spread across a few workflows that are triggered a few times a day. However, I was building a workflow to download around 500 images from Google Drive and noticed how quickly the RAM usage in Postgres rose; it is still sitting at around 1GB (Railway charges per minute), whereas normally it sits at around 150MB. So I looked around for ways to optimize this and found some related questions on Stack Overflow about tuning Postgres settings like shared_buffers, temp_buffers, work_mem, maintenance_work_mem, max_connections, etc., using the formula they gave, which was:

Actual max RAM = shared_buffers + (temp_buffers + work_mem) * max_connections + maintenance_work_mem * autovacuum_max_workers
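For example, with purely illustrative values — shared_buffers = 256MB, work_mem = 32MB, max_connections = 10, maintenance_work_mem = 128MB, plus the PostgreSQL defaults temp_buffers = 8MB and autovacuum_max_workers = 3 — the formula gives roughly 256 + (8 + 32) * 10 + 128 * 3 ≈ 1040MB, i.e. about 1GB of worst-case RAM.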

I figured that tuning max_connections to whatever n8n actually requires would be the best way to handle this. I only want Postgres to be able to handle the occasional large job, which could temporarily take around 2GB of RAM, and then release that RAM when it is done. I don't need caching for a speed boost, since it does not affect workflow execution efficiency. So what should I set max_connections (or the other settings) to in order to make this as efficient as possible? Is setting it to 2 a good idea, since I only have 2 nodes (1 main and 1 worker) accessing it?

This question relates to @yukyo's earlier question, Postgresql ram usage analysis. Rather than deleting the execution data, isn't it better to just adjust the caching settings to improve RAM usage?
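For context, this is roughly how the pruning alternative from that thread is configured in n8n (variable names as I understand them from the docs; please double-check against your version):

    EXECUTIONS_DATA_PRUNE=true       # delete old execution data automatically
    EXECUTIONS_DATA_MAX_AGE=168      # keep execution data for at most 168 hours (7 days)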

Link to the Postgres question on Stack Overflow:

Hey @Ahmad_Rajput,

This doesn't really sound like an n8n issue; it is more of a database management question, so it could be worth asking in a Postgres community. But looking at what you have posted and what you linked to, I would expect the memory usage to grow as you use n8n.

n8n will typically open as many connections as it needs, so you could try limiting it to 5 to start with, see if you get any errors, and tweak from there. There won't be any magic number to work it out; it will come down to testing your specific tasks.
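As a rough sketch of what that could look like (assuming your n8n version supports the DB_POSTGRESDB_POOL_SIZE environment variable, so check the docs), you would cap the pool on the n8n side and keep a little headroom in Postgres:

    # n8n main and worker: cap the Postgres connection pool per instance
    DB_POSTGRESDB_POOL_SIZE=5

    # postgresql.conf: leave headroom for both instances plus maintenance sessions
    max_connections = 20

The exact numbers are just placeholders; the point is that Postgres should allow at least the sum of the pools plus a few spare connections.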

I would be interested to see what your results are. On my home instance running Postgres, I can see the Postgres memory is 280MB of cache and 42MB of memory. This is with 3 workers, 1 main instance, 300+ workflows, and 60 active workflows with around 3 running every minute.


Hi @Jon,

Thank you so much for your reply. I did think of asking in the Postgres community, but then realized it is really a matter of how n8n manages the data in Postgres and how many connections it requires to the Postgres DB.

Note 1: I am a pretty average developer with simple know-how and NOT a super nerdy database person, so my knowledge here is limited.

So far, the points below are where the RAM spikes the most. I have tested this on a dev server (I keep dev and production separate to handle breaking changes and to try out new things):

  1. A large execution runs with a lot of execution data (e.g. downloading from / uploading to Drive).
  2. The All Executions list, or the data of a single large execution, is opened in the logs.
  3. Execution data is fetched using the n8n node → Get Many Executions → with Include Execution Details set to true.

What I believe is that whenever large data is read from or written to Postgres, the DB caches it in RAM. I know that is to speed up reading and writing. I was just wondering whether it is possible to reduce the RAM usage, for example by lowering max_connections, since we don't really need the DB operations to be THAT fast.

My setup is only 1 main instance and 1 worker, so for people with a larger number of workers the usage would presumably be much higher. The test itself was simple: copy 500 images (around 350MB) from one Drive folder to another. Here is what I tried and what I got out of it (cumulative usage):

  1. Default settings: during high load (high ingress and egress), it would usually climb to somewhere around ~1.2GB, gradually drop back to ~400MB within 24 hours, and stay there for the rest of the week.

  2. maintenance_work_mem = 128MB, max_connections = 10, shared_buffers = 256MB, work_mem = 32MB: would peak at about 270MB and go back down to 150MB within 24 hours.

  3. Last night I changed to the same settings as above but with max_connections = 4: it would peak at about 105MB and then gradually drop back to 60MB (see the config sketch after this list for the full set of values).
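For anyone who wants to reproduce setting 3: these are plain Postgres configuration parameters, so they can go in postgresql.conf or be applied with ALTER SYSTEM followed by a restart. The values are simply what I tried, not recommendations:

    # postgresql.conf (or: ALTER SYSTEM SET shared_buffers = '256MB'; etc., then restart)
    shared_buffers = 256MB
    work_mem = 32MB
    maintenance_work_mem = 128MB
    max_connections = 4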

Note 2: the results make very little sense to me, since the execution data was only ~350MB worth of images.

No efficiency issues were encountered in this process, and RAM usage went down by a lot. I think it is safe to assume that tightening the DB settings decreases RAM usage and reduces the costs of running on Railway. I also think it will help with the OOM errors and server crashes (the instance would become unreachable) that I experienced on an AWS t3.micro instance (1GB RAM, 2 vCPU) before switching to Railway.

That is all the research I did, and I am pretty happy with the results. I might try deploying on AWS again, tweak the variables, and watch CloudWatch for any changes. Thank you again for reading this; I hope it helps someone who is having the same issues.

Hey @Ahmad_Rajput,

That sounds about right. To be able to write the data, it has to pass through memory at some point, although I would have expected the reduction to happen fairly quickly. That is beyond my knowledge of databases, though; I know just enough to be dangerous.

One thing that may help is reducing the amount of logging, and as you are working with binary data, switching n8n's binary data storage from memory to the filesystem to see if that helps.
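Roughly, the settings I have in mind are these (check the docs for your n8n version, as variable names can change between releases). Keep in mind that with a separate worker I believe the main instance and the worker need access to the same data volume for filesystem mode to work:

    # store binary data on disk instead of keeping it in memory
    N8N_DEFAULT_BINARY_DATA_MODE=filesystem

    # reduce logging noise
    N8N_LOG_LEVEL=warn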

I would also recommend taking a look at pgtune; it is a very handy tool for generating config files for specific database server builds. It is still very much a case of testing different options, as one size doesn't fit all, but it will get you started.


Hello @Jon,

I will mark your answer as the solution, since it completely slipped my mind that I could simply change the binary data config to use filesystem storage rather than memory. Most of the data stored for an execution is binary data, so that should save memory in the end. You saved me from the rabbit hole I was going down :sweat_smile:

Also, I will look at pgtune once I get the chance. You are truly a godsend. Thanks!
