POSTGRES ERROR: Connection pool of the database object has been destroyed

Sharing my flow:

https://gist.githubusercontent.com/Jorgelig/5ec460d8704200bf892bbc3589d59376/raw/41376ccc5dedf630a4adb8bd83962671dda6c0c9/postgresql_n8n.json

I’ve tried batching items, even 50 at a time, and it keeps failing.
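To be clear on what I mean by batching: below is a simplified TypeScript stand-in for how the items get split before the Postgres step. It is not the actual workflow (that is in the gist above), just an illustration of the idea.

    // Simplified illustration of splitting items into batches of 50 before the DB step.
    // This is not n8n code, just a sketch of the batching idea.
    function chunk<T>(items: T[], size: number): T[][] {
      const batches: T[][] = [];
      for (let i = 0; i < items.length; i += size) {
        batches.push(items.slice(i, i + size));
      }
      return batches;
    }

    // e.g. 500 items -> 10 batches of 50, each sent to the Postgres step in turn,
    // yet the "Connection pool ... has been destroyed" error still shows up.
    const batches = chunk(Array.from({ length: 500 }, (_, i) => ({ id: i })), 50);
    console.log(batches.length); // 10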

Facing the same issue with my workflows as well; the issue started in 1.1.1.

Hey @Roney_Dsilva,

Can you share a workflow that reproduces this issue? So far I have not been able to reproduce it. I am aware that another user fixed this by tweaking their database settings for their usage, so it would be worth looking at that as well if it is an option.
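I don’t have the exact settings that user changed to hand, so treat the snippet below as a rough sketch of the sort of tuning I mean rather than a known fix. pg-promise passes its connection object through to node-postgres, so the pool options shown are real knobs, but the values and connection details are placeholders (on the server side the equivalent knob would be max_connections in postgresql.conf).

    // Sketch only: the kind of connection/pool tuning meant above.
    // Connection details and values are placeholders, not recommendations.
    import pgPromise from 'pg-promise';

    const pgp = pgPromise();
    const db = pgp({
      host: 'localhost',
      port: 5432,
      database: 'postgres',
      user: 'postgres',
      password: 'postgres',
      max: 20,                  // pool size (node-postgres default is 10)
      idleTimeoutMillis: 30000, // how long an idle client is kept before being closed
    });

    // Later db.any()/db.query() calls draw connections from this pool.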

Hi @Jon

It doesn’t happen every time though; it happens roughly on alternate executions.
The current flow where I am getting this is quite complex. I will try to replicate it with a simpler flow and share it with you.

When it happens, is it usually with single items like that?

Which version of n8n are you running, and is your Postgres instance from a cloud provider or is it a Docker image?

Hi @Jon
It happens for a single item.
n8n version is 1.1.1.
Postgres is a Docker image deployed locally in the same network as n8n:
postgres:12.4
Same with postgres:14.

Hi @Jon
I noticed this happening quite frequently when you try to perform multiple parallel DB actions.

e.g. if you have an active workflow running in the background and try to do something else in your current workflow, you get this error.

It sounds like some keep-alive issue that is overloading the connections.

Error: Connection pool of the database object has been destroyed.
    at /usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:24:25
    at new Promise (<anonymous>)
    at Object.promise (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/promise-parser.js:30:20)
    at poolConnect (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:20:19)
    at Object.pool (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/connect.js:176:24)
    at Database.query (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/database.js:330:36)
    at Database.obj.any (/usr/local/lib/node_modules/n8n/node_modules/pg-promise/lib/database.js:772:30)
    at getTableSchema (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/helpers/utils.js:234:30)
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/database/update.operation.js:229:62)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.router (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/router.js:49:30)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:649:19)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:631:53

Maybe this will help.
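For reference, the same error message is easy to reproduce with pg-promise on its own once a query is issued after the pool has been shut down. This is a minimal standalone sketch outside n8n with a placeholder connection string, not the Postgres node’s actual code:

    // Minimal repro of the error message, outside n8n.
    // Assumes a local Postgres reachable via the placeholder connection string.
    import pgPromise from 'pg-promise';

    const pgp = pgPromise();
    const db = pgp('postgres://postgres:postgres@localhost:5432/postgres');

    async function main() {
      await db.any('SELECT 1'); // first query works; the pool is created lazily

      await db.$pool.end();     // something shuts the pool down...

      await db.any('SELECT 1'); // ...and the next query on the same db object throws:
                                // "Connection pool of the database object has been destroyed."
    }

    main().catch(err => console.error(err.message));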

Also see this:
[screenshot of the executions list]

We had a total of 6 executions happening simultaneously; 2 failed, the rest succeeded.

Hey @MayurVirkar,

Interesting. We don’t generally keep the connection alive, and each item’s query will create a new one, but the idea of running the same workflow at the same time might be the key. I will give that a bash on Monday.
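To make that hypothesis concrete: if two overlapping executions did end up sharing one db object, the failure would look something like the sketch below. This is purely a hypothetical sketch with made-up labels and a placeholder connection string, not the Postgres node’s actual code.

    // Hypothesis sketch only: two overlapping "executions" sharing one db object,
    // where the first to finish tears down the shared pool under the other.
    import pgPromise from 'pg-promise';

    const pgp = pgPromise();
    const db = pgp('postgres://postgres:postgres@localhost:5432/postgres'); // placeholder

    async function execution(label: string, rows: number) {
      for (let i = 0; i < rows; i++) {
        await db.any('SELECT pg_sleep(0.01)');
      }
      await db.$pool.end(); // this execution cleans up "its" pool when done...
      console.log(`${label} finished`);
    }

    // ...but the slower execution is still using the same object, so its next query
    // fails with "Connection pool of the database object has been destroyed."
    Promise.allSettled([execution('fast', 5), execution('slow', 50)]).then(results => {
      for (const r of results) {
        if (r.status === 'rejected') console.error(r.reason.message);
      }
    });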

Hi @Jon
Were you able to replicate it?
We are facing it quite frequently.

Also, if you need any assistance in debugging the issue, kindly let us know. We can give you any data that’s required, such as extended logs, etc.

We are facing it in 2 out of every 3 runs.

He managed to replicate it today; check it out here.

Hey @MayurVirkar,

I did indeed, as mentioned above. Are you only seeing the issue when running multiple queries, or just one?

One query, but when it’s run in parallel with some other background workflow.

Meaning, when I run the workflow (single item execution) while nothing is running in the background, everything is fine.

But when I run the same workflow while something is running in the background, I get the above error.

I couldn’t get it to fail when I tried one query with background queries running as well; for me it took a lot of data.

Are you using a standard Postgres docker image or something else?

@Jon

Standard Docker image.

By reducing the number of parallel executions, we could reduce the failure rate.
At first it was 2/3, now it’s 1/3.

@MayurVirkar for your setup it could be worth tweaking the config of Postgres, but we will know more once someone picks up the dev ticket.

@Jon How can I track the progress of this issue resolution?

Hey @alexandre.iramos,

Welcome to the community :tada:

You can keep an eye on the GitHub PRs or this thread. Once we have popped the PR in and released a fix, there will be a message posted here.


The fix can be found in the PR below, which will be released soon.


The new version [email protected] has been released, which includes GitHub PR 7074.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.