POSTGRES ERROR: Connection pool of the database object has been destroyed

ERROR: Connection pool of the database object has been destroyed.

I'm on n8n version 1.1.1, and this error has been appearing since versions prior to 1.0.
Previously I managed to work around it by placing a ‘Set’ node before every Postgres node, but the error is still showing up intermittently.

I run it in Docker, and this is my stack:

version: "3.7"

services:
  postgres:
    image: postgres:14
    networks:
      - network_public
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          # - node.role == manager
          - node.hostname == bdsender
      resources:
        limits:
          cpus: "2"
          memory: 2048M

volumes:
  postgres_data:
    external: true
    name: postgres_data

networks:
  network_public:
    external: true
    name: network_public

I saw that the problem is believed to have been solved, as shown in the topic below, but that same topic also has complaints that the error continues.

Hey @admdiegolima,

That is interesting, this should have been resolved in the thread you have seen. The comment at the bottom about database tuning is a good one and should help resolve the issue.

Can you share more information on the database you are connecting to (is it supabase or something else)? Are you also able to share a workflow that reproduces the issue?
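For context, the "database tuning" mentioned above usually means raising Postgres's connection ceiling. A minimal, hedged sketch of how that could be set in a compose stack like the one posted earlier — the value 200 is purely illustrative and should be sized to the database host's RAM:

```yaml
services:
  postgres:
    image: postgres:14
    # Illustrative only: raise the connection limit from the default (100).
    # Each connection costs memory, so pick a value suited to your host.
    command: ["postgres", "-c", "max_connections=200"]
```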

It is not possible to share a workflow that would be useful, since it runs against a database node, but I made a screen recording that I hope can help.

Note in the video that the error is intermittent; so far I have not found a pattern.

I can confirm the same happens with my n8n instance. I noticed it happens only in the case of parallel queries (inserts) to the Postgres DB.

I don’t think the connection pool limit itself is the issue, based on n8n’s behavior. Look at the screenshot: the first request in a batch succeeds, while all the others fail with the connection pool error.

My number of DB connections at that time is below the configured limit.
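The "first succeeds, rest fail" pattern is consistent with parallel queries sharing one pool object that gets torn down when the first query finishes. A hedged mock of that race — this is not n8n's actual code, just a self-contained illustration of the failure shape:

```javascript
// MockPool stands in for a pg-promise-style pool: queries fail once
// something has called end() on the shared pool object.
class MockPool {
  constructor() { this.destroyed = false; }
  async query(sql) {
    // Defer so all parallel callers start before any of them finishes.
    await new Promise((resolve) => setImmediate(resolve));
    if (this.destroyed) {
      throw new Error('Connection pool of the database object has been destroyed');
    }
    return `ran: ${sql}`;
  }
  end() { this.destroyed = true; }
}

// Run a batch in parallel against one shared pool; the first caller to
// finish tears the pool down, standing in for whatever closes it mid-batch.
async function runBatch(pool, statements) {
  let first = true;
  return Promise.all(statements.map(async (sql) => {
    try {
      await pool.query(sql);
      if (first) { first = false; pool.end(); }
      return 'ok';
    } catch (err) {
      return 'failed';
    }
  }));
}

runBatch(new MockPool(), ['INSERT 1', 'INSERT 2', 'INSERT 3'])
  .then((results) => console.log(results)); // → [ 'ok', 'failed', 'failed' ]
```

Only the first query in the batch completes; the rest hit the already-destroyed pool, matching the observation above even when the connection count is well below `max_connections`.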

@belyaevsa Which version of n8n are you running at the moment?

1.1.1, the latest available.

I am also on version 1.1.1, but this error predates the 1.0 update.

I tried it with a simplified flow and it keeps happening.

any solution? :confused:

@Jorgelig not yet. Do you have a workflow that can reproduce it? Could you also share the version of n8n you are using and how it is deployed?

I have had a workflow running all weekend that queries the same database twice and I have not yet hit this.

I upgraded last week to version 1.1.1. I have the same problem when I use the Postgres Insert node in n8n.
My case is the following:
I have an external daemon that checks every 2 seconds whether there is a record in the table.
If I quickly add 2 records to this table with n8n while the daemon is reading the table at the same time, I receive this message: “Error: Connection pool of the database object has been destroyed.
Stack : NodeOperationError: Connection pool of the database object has been destroyed.
at parsePostgresError (/usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Postgres/v2/helpers/utils.ts:95:9)
at /usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Postgres/v2/helpers/utils.ts:226:19
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.router (/usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Postgres/v2/actions/router.ts:40:18)
at Workflow.runNode (/usr/lib/node_modules/n8n/node_modules/n8n-workflow/src/Workflow.ts:1253:8)
at /usr/lib/node_modules/n8n/node_modules/n8n-core/src/WorkflowExecute.ts:1024:29”

It worked before version 1.0.
If the daemon reads the table every 3 seconds instead, I have no problem.
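Since the error is intermittent, one generic mitigation is to retry only this specific failure. A hedged sketch of a retry wrapper, e.g. for use from an n8n Code node — the names (`withRetry`, `delayMs`) are illustrative, not an n8n API:

```javascript
// Retry fn up to `attempts` times, but only for the intermittent
// "pool has been destroyed" error; any other error is rethrown as-is.
async function withRetry(fn, { attempts = 3, delayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (!/pool of the database object has been destroyed/i.test(String(err))) {
        throw err; // not the flaky pool error: fail fast
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Example: a fake query that fails twice with the pool error, then succeeds.
let calls = 0;
const flakyQuery = async () => {
  calls += 1;
  if (calls < 3) {
    throw new Error('Connection pool of the database object has been destroyed');
  }
  return 'inserted';
};

withRetry(flakyQuery).then((result) => console.log(result)); // prints "inserted"
```

This does not fix the underlying race, but it can keep a workflow like the daemon scenario above running until the root cause is resolved.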

Sharing my flow:

I’ve tried batching items, even 50 at a time, and it keeps failing.

Facing the same issue with my workflows as well; it started in 1.1.1.

Hey @Roney_Dsilva,

Can you share a workflow that reproduces this issue? So far I am not able to reproduce it. I am aware that another user fixed this by tweaking the database settings for their usage, so it would be worth looking at that as well if it is an option.

Hi @Jon

It doesn’t happen every time though; it happens on roughly alternate executions.
The current flow where I’m getting this is quite complex; I will try to replicate it with a simpler flow and share it with you.

When it happens is it usually with single items like that?

Which version of n8n are you running and is your Postgres instance from a cloud provider or is it a docker image?

HI @Jon
It happens for a single item.
n8n version is 1.1.1.
Postgres is a Docker image deployed locally, on the same network as n8n,
the same postgres:14 image.

Hi @Jon
I noticed this happening quite frequently when performing multiple parallel DB actions.

E.g., if you have an active workflow running in the background and try to do something else in your current workflow, you get this error.

It sounds like some keep-alive issue that is overloading the connections.

Error: Connection pool of the database object has been destroyed.
    at /usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:24:25
    at new Promise (<anonymous>)
    at Object.promise (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/promise-parser.js:30:20)
    at poolConnect (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:20:19)
    at Object.pool (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:176:24)
    at Database.query (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/database.js:330:36)
    at Database.obj.any (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/database.js:772:30)
    at getTableSchema (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/helpers/utils.js:234:30)
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/database/update.operation.js:229:62)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.router (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/router.js:49:30)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:649:19)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:631:53

Maybe this will help.

Also see this,

We had a total of 6 executions happening simultaneously; 2 failed, the rest succeeded.

Hey @MayurVirkar,

Interesting. We don’t generally keep the connection alive, and each item query will create a new one, but the idea of running the same workflow at the same time might be the key. I will give that a bash on Monday.

Hi @Jon
Were you able to replicate it?
We are facing it quite frequently.

Also, if you need any assistance in debugging the issue, kindly let us know. We can give you any data that’s required, such as extended logs.

We are facing it in 2 out of every 3 runs.