Postgres Connection Error - "sorry, too many clients already"

Description

I have a multi-agent workflow. I am using PostgreSQL both as a vector DB and for storing chat history. In my sample workflow, I am also using the same Postgres DB to generate SQL.
In the credentials section I have created a Postgres account and have been able to connect with it, but there is no parameter there where I can set the connection-pool configuration.

What is the error message (if any)?

It runs fine for a few rounds, but then I sometimes see the above error.
If I restart the DB, the error goes away for a while.

Researching further, I suspect that n8n is opening a new connection every time it accesses the DB.

In my opinion, in a multi-agent workflow where the same DB is accessed across different steps, n8n opens a new connection each time rather than reusing an existing one or drawing from a connection pool.
Ideally it should create a connection pool for each item in the credentials section and reuse it wherever possible.
NOTE: This is my assumption, and I may be wrong

Please share your workflow

This is the master workflow

This is the child workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.67.1
  • Running n8n via: Docker (local)
  • Operating system: Windows

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

EXECUTIONS_PROCESS settings: default
Database: PGSQL

Do you have a lot of executions on your n8n instance that are talking to this same postgres server?

Yes, it is a multi-agent workflow, so depending on the input request there may be multiple queries against the chat table.
The question is why the workflow initiates multiple connections. It should first create a connection pool and then get connections from that pool.
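One way to check whether connections really are piling up is to query the standard `pg_stat_activity` view on the Postgres server while the workflow runs (this is plain Postgres, independent of n8n):

```sql
-- Count open connections grouped by client application and state.
-- A steadily growing "idle" count while the workflow runs would
-- support the theory that connections are opened but never reused.
SELECT application_name, state, count(*) AS connections
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY connections DESC;
```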

I’ve been having the same problem ever since I started to use the multi-agent workflow.


I’m encountering the same error. There does indeed seem to be a bug in the Postgres node: it opens a new connection each time.

Evidence:


@Cristopher_Moreno I found it as well. As soon as I reboot the Postgres container, everything starts working again, but that’s not a solution.

We need to fix connection pooling in the postgres node to support reusing the pool consistently across all executions that need that same postgres connection.
I’ve created an internal ticket, and we’ll try to prioritize this soon.


Great to hear you are addressing it. Any ETA, @netroy? This is a real blocker, unless I’m missing something. I have an “ingestion” workflow that uses the PGVector Vector Store node in a loop to insert documents, and I hit this error after a few dozen operations.

Unfortunately I can’t offer any ETA on this, especially around the holiday season, as many folks are unavailable.
If this is urgent for you, a workaround could be to set up more workers with lower concurrency on the n8n side, and perhaps put PgBouncer in front of Postgres.
Hopefully we’ll have an ETA in the first week of January.
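For anyone trying the PgBouncer workaround mentioned above, a minimal `pgbouncer.ini` might look like this. The hostname, database name, and pool sizes below are assumptions for illustration only; adjust them to your setup:

```ini
[databases]
; "postgres" is the assumed hostname of the Postgres container
n8n = host=postgres port=5432 dbname=n8n

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; session pooling is the safest default; transaction pooling
; recycles connections more aggressively but does not support
; session-level features such as prepared statements
pool_mode = session
max_client_conn = 200
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

The n8n Postgres credential would then point at PgBouncer on port 6432 instead of Postgres on 5432.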

Hello
I also ran into the same issue and understand how frustrating it can be. I have shared a PR that resolves the connection-loss issue in the PostgreSQL pool, specifically in the VectorStorePGVector node. The fix ensures that connections are released correctly, avoiding blocked connections and workflow failures.
I hope this solution can be useful


We fixed this here last week, and it’ll be included in the next release, scheduled for Jan 8th.


@netroy I am on 1.75.2 self-hosted and I still see this error from the VectorStorePGVector node.

  • n8n version: 1.75.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu

The PGVector node uses a bit of external code that I had not tested when we fixed this. It will be fixed in #12723 and hopefully included in 1.77.0.

Is it possible I’m facing the same issue, but on the regular Postgres node rather than the PGVector node?

Even on 1.77.0? Can you share more details, like the error message you are seeing? Did you configure the number of connections in the Postgres credential?
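When debugging “sorry, too many clients already”, it also helps to compare the server-side connection limit with actual usage. These are standard Postgres commands; the limit of 100 mentioned in the comment is just the usual default, not something verified for this setup:

```sql
-- Show the server-side connection limit (default is typically 100).
SHOW max_connections;

-- How close are we to that limit right now?
SELECT count(*) AS in_use FROM pg_stat_activity;

-- Raising the limit requires a server restart, e.g.:
-- ALTER SYSTEM SET max_connections = 200;
```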

Checked my n8n cloud version and it was still on 1.70.x, so I updated to the latest 1.77.x version just now. Checked the changelog on GitHub and tons of changes have been made. <3

After the upgrade, Postgres is much faster!


I ended up placing an Execute SQL node directly after the PGVector store node with the following:

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid();

This kills all idle connections before the next insertion runs. It’s a temporary fix until corrected in n8n.

I highly recommend against this workaround for anyone using n8n in production.
It will likely cause more issues when you have concurrent executions, since a connection can go idle during the lifecycle of an execution, and if you force-terminate it, that execution might fail on its next Postgres node.

If you are seeing any issues like these on versions above 1.77, please let us know, and we’ll try our best to fix them.
If you are on versions below 1.77, please upgrade :pray: