Postgres node - Called end on pool more than once

Hey @tool_dmvf_ltd,

Having the workflow JSON tells us which node settings you are using and which node version it is, which may help us reproduce the issue. There are 5 versions of the Postgres node, so knowing which one is in use would be useful. It would also be handy to know how many records there were in the table you are clearing.

{
  "meta": {
    "instanceId": "aaaae7d799aa296738b10d36cabcdd0ad3478d6f62c8e3a4f9db5495a20dfe90"
  },
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "hours",
              "triggerAtMinute": 5
            }
          ]
        }
      },
      "id": "3205dfae-3b4a-47cd-b443-d34615e748a3",
      "name": "Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1.1,
      "position": [
        620,
        280
      ]
    },
    {
      "parameters": {
        "operation": "deleteTable",
        "schema": {
          "__rl": true,
          "mode": "list",
          "value": "public"
        },
        "table": {
          "__rl": true,
          "value": "rinoads_raw_data_daily",
          "mode": "list",
          "cachedResultName": "rinoads_raw_data_daily"
        },
        "options": {}
      },
      "id": "0fe7f6a0-9cbf-46eb-8e18-a6c83e43bf6a",
      "name": "Postgres",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 2.3,
      "position": [
        840,
        280
      ],
      "alwaysOutputData": true,
      "credentials": {
        "postgres": {
          "id": "2",
          "name": "Postgres account"
        }
      }
    },
    {
      "parameters": {
        "values": {
          "string": [
            {
              "name": "DateFrom",
              "value": "={{ $today.minus({days: 0}).toFormat('yyyy-MM-dd') }}"
            },
            {
              "name": "DateTo",
              "value": "={{ $today.minus({days: 0}).toFormat('yyyy-MM-dd') }}"
            }
          ]
        },
        "options": {}
      },
      "id": "babac746-1660-420a-8d05-4b1c80941996",
      "name": "Set",
      "type": "n8n-nodes-base.set",
      "typeVersion": 2,
      "position": [
        1060,
        280
      ]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://mydomain.com/api/xxxxx",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={\n \"api_key\": \"\",\n \"date_from\": \"{{ $json[\"DateFrom\"] }}\",\n \"date_to\": \"{{ $json[\"DateTo\"] }}\"\n}",
        "options": {}
      },
      "id": "a48d4250-efcb-4f91-b9cb-e717f9dda64a",
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.1,
      "position": [
        1280,
        280
      ]
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO \"public\".rinoads_raw_data_daily\n\t( lead_id, lead_date, publisher_id, offer_id, offer_name, offer_category, status, leads, qty, payout)\nVALUES ('{{ $json.lead_id }}',\n'{{ $json.lead_date }}',\n{{ $json.publisher_id }},\n{{ $json.offer_id }},\n'{{ $json.offer_name }}',\n'{{ $json.offer_category }}',\n'{{ $json.status }}',\n{{ $json.leads }},\n{{ $json.qty }},\n{{ $json.payout }})",
        "options": {}
      },
      "id": "0d088c30-bba8-489a-a01a-a72cd1cb26af",
      "name": "Postgres1",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 2.3,
      "position": [
        1500,
        280
      ],
      "alwaysOutputData": true,
      "credentials": {
        "postgres": {
          "id": "2",
          "name": "Postgres account"
        }
      }
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO public.offer ( offer_id, offer_name, offer_payout, fk_user_network_id, fk_off_cat_id)\nSELECT \ntext(a.offer_id)||coalesce(b.offer_name_suffix,'') as campaign_id\n,a.offer_name\n,max(payout) as Payout\n,b.user_network_id\n,case WHEN lower(offer_category) LIKE '%nutra%' then '09708c8b-b75d-4373-b51a-a35b36ed30d8'::uuid else 'd159cb70-79ff-47a4-8bd4-d8f973ec16bb'::uuid end\nFROM public.rinoads_raw_data_daily a\ninner join public.user_network b on (b.user_aff_id::integer = publisher_id)\nwhere fk_network_id = '00da187d-b1a1-49ce-baa5-09d637faf12c'\ngroup by a.offer_name, a.offer_id, b.user_network_id, b.offer_name_suffix, offer_category\nON CONFLICT(offer_id,fk_user_network_id) DO nothing;",
        "options": {}
      },
      "id": "ed4b597d-fb55-4e34-bc34-0826ecb396cd",
      "name": "Postgres2",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 2.3,
      "position": [
        1720,
        280
      ],
      "alwaysOutputData": true,
      "credentials": {
        "postgres": {
          "id": "2",
          "name": "Postgres account"
        }
      }
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO \"public\".user_financial_daily\n\t( fk_offer_id, financial_date, amount_spent, amount_spent_plus_fee, amount_revenue, amount_revenue_approved, amount_revenue_pending, amount_revenue_rejected, amount_conversion\n, conversions_approved, conversions_pending, conversions_rejected, fk_user_network_id, fk_user_id,spent_rate_fees) \nselect \n text(offer_id)\n,to_date(to_char(lead_date, 'YYYY-MM-DD'), 'YYYY-MM-DD')\n,0\n,0\n,sum(case when status = 'Approved' then payout when status = 'Pending' then payout else 0 end) amount_tot\n,sum(case when status = 'Approved' then payout else 0 end) amount_approved\n,sum(case when status = 'Pending' then payout else 0 end) amount_pending\n,sum(case when status = 'Canceled' then payout else 0 end) amount_rejected\n,sum(case when status = 'Approved' then leads when status = 'Pending' then leads else 0 end) conv_tot\n,sum(case when status = 'Approved' then leads else 0 end) conv_approved\n,sum(case when status = 'Pending' then leads else 0 end) conv_pending\n,sum(case when status = 'Canceled' then leads else 0 end) conv_rejected\n,b.user_network_id\n,b.fk_user_id\n,0\nfrom public.rinoads_raw_data_daily\ninner join public.user_network b on (b.user_aff_id::integer = publisher_id)\nwhere fk_network_id = '00da187d-b1a1-49ce-baa5-09d637faf12c'\ngroup by text(offer_id), to_date(to_char(lead_date, 'YYYY-MM-DD'), 'YYYY-MM-DD'), b.user_network_id, b.fk_user_id\nON conflict (fk_offer_id,\nfinancial_date,\nfk_user_network_id,\nfk_user_id)\nDO \n UPDATE SET amount_revenue = excluded.amount_revenue,\n amount_revenue_approved = excluded.amount_revenue_approved,\n amount_revenue_rejected = excluded.amount_revenue_rejected,\n amount_conversion = excluded.amount_conversion,\n conversions_approved = excluded.conversions_approved,\n conversions_rejected = excluded.conversions_rejected",
        "options": {}
      },
      "id": "5802ca35-6aee-4d94-a317-fd7225a0818a",
      "name": "Postgres3",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 2.3,
      "position": [
        1940,
        280
      ],
      "credentials": {
        "postgres": {
          "id": "2",
          "name": "Postgres account"
        }
      }
    }
  ],
  "connections": {
    "Schedule Trigger": {
      "main": [
        [
          {
            "node": "Postgres",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Postgres": {
      "main": [
        [
          {
            "node": "Set",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Set": {
      "main": [
        [
          {
            "node": "HTTP Request",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "HTTP Request": {
      "main": [
        [
          {
            "node": "Postgres1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Postgres1": {
      "main": [
        [
          {
            "node": "Postgres2",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Postgres2": {
      "main": [
        [
          {
            "node": "Postgres3",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}

Hey guys,

I was testing our setup (together with Emils) and I have a theory here:
end on pool can only end up being called more than once if ANOTHER workflow fully executed an SQL query while the query in question was still running. Imagine this:

Workflow A > Long SQL query (30 secs)
Workflow B > Quick SQL query (1 sec)
Workflow B > SQL query finishes and calls pool.end()
Workflow A > SQL query finishes and TRIES to call pool.end()
Workflow A > ERROR - end on pool called more than once

I am not sure about the backend, but could it be that different workflows share the same Postgres pool?
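
To illustrate what I mean, here is a minimal standalone sketch using node-postgres directly. It assumes the two executions really do hold the same Pool instance, which is exactly the open question; the connection string and queries are just placeholders:

```ts
// Minimal sketch of the failure mode, using node-postgres directly.
// Assumption: both executions hold the same Pool instance (the open question above).
import { Pool } from 'pg';

async function main() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });

  await pool.query('SELECT 1');

  await pool.end();     // first teardown succeeds
  try {
    await pool.end();   // second teardown rejects
  } catch (err) {
    console.error(err); // Error: Called end on pool more than once
  }
}

main().catch(console.error);
```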

Hey @Daniil_K,

Welcome to the community :raised_hands:

We don’t reuse connections, so every call will create a new connection to the database, but you could be onto something there. I wonder if Postgres tries to do something clever on the connection.

I am away until Monday now, but I will do some testing when I return to see if I can reproduce this. Once we can reproduce it we should be able to fix it fairly quickly.

Hi @Jon
Maybe this could be an opportunity to work on reusable connections.

@Roney_Dsilva maybe, but that is a more complex feature to add.

Hey @Jon ! Any news on this error?

Hey @Emils_Bisenieks,

Not yet, still trying to find a way to reproduce it. Do you have a reliable way to trigger the error?

Facing this issue even with Retry.

Saw this after our security scanner bombarded the API with requests, so a JMeter load test might help replicate it.

Maybe this might help

Hey Jon,

I was trying to replicate my theory on n8n cloud, with no luck either.

However, what I did manage to see is that on our setup the SQL request SOMETIMES works. This got me thinking that the issue might be a combination of using v1 and v2 nodes, where the v1 nodes run quick queries and the v2 nodes process heavier requests.

I did some digging in your repo and it seems that in v2 your code awaits pg.pool.end(), while v1 just calls pgp.end() and exits.

I am not sure this is related, but it could be a direction to test. In the meantime I will try putting a v1 node into my cloud sandbox and see what happens.
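
To make the difference concrete, here is a rough sketch of the two teardown styles as I understand them. This is not the actual n8n source, just an illustration; the connection settings and queries are placeholders:

```ts
// Rough illustration of the two teardown styles described above - not the actual
// n8n code. Connection settings and queries are placeholders.
import pgPromise from 'pg-promise';
import { Pool } from 'pg';

async function v1Style() {
  // v1-style: pg-promise manages the pools; pgp.end() shuts down every pool
  // created by this library instance and the caller simply moves on.
  const pgp = pgPromise();
  const db = pgp({ connectionString: process.env.DATABASE_URL });
  await db.any('SELECT 1');
  pgp.end();
}

async function v2Style() {
  // v2-style: a node-postgres Pool is created and its shutdown is awaited.
  // A second end() on the same Pool rejects with
  // "Called end on pool more than once".
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  await pool.query('SELECT 1');
  await pool.end();
}

v1Style().then(v2Style).catch(console.error);
```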

QUICK UPDATE: I managed to replicate the behaviour on n8n cloud by combining v1 nodes and v2 nodes. Jon, please get in touch with me - I will give you access to the account to play around :slight_smile:

To replicate the problem pretty reliably, you need the following:
A workflow that has a very slow request with a v2 node, like this:

and another workflow that uses a v1 node and does something very quickly (maybe even does updates):

Now you need to run the MAIN query; it should execute for at least 30 seconds due to the pg_sleep() function. WHILE it is doing that, run the quick workflow a few times. When the main query ends, it is guaranteed to throw the Pool End error.
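
For reference (the screenshots are not included here), a standalone approximation of that race could look like the sketch below. It only reproduces the error if the two executions share one Pool instance, which is the assumption being tested; the connection string is a placeholder:

```ts
// Standalone approximation of the race, assuming a shared Pool instance.
// The slow/quick queries mirror the two workflows described above.
import { Pool } from 'pg';

const sharedPool = new Pool({ connectionString: process.env.DATABASE_URL });

async function run(label: string, sql: string) {
  try {
    await sharedPool.query(sql);
    console.log(`${label}: query finished`);
  } finally {
    // Each execution tears the pool down once it is done.
    await sharedPool.end();
  }
}

async function main() {
  const results = await Promise.allSettled([
    run('MAIN (slow, v2-style)', 'SELECT pg_sleep(30)'), // blocks for ~30 seconds
    run('quick (v1-style)', 'SELECT 1'),                 // finishes and calls end() first
  ]);
  // The slow execution's end() is the one that rejects with
  // "Called end on pool more than once".
  console.log(results);
}

main();
```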

Hey @Daniil_K,

I have just tried a v1 Postgres to a v2 then back to a v1 and was not able to reproduce it. Can you share the workflow JSON here? The example workflow further up the thread uses only 2.3 as the version and has the issue reported as well, so it could be specific actions that trigger it.

Edit… You have shared it already.

@Daniil_K that has worked, I will get the dev ticket created now.

Edit… Interesting that it only happens with the mix, which may not fit all cases, but this is a start.

It took me quite a while to understand that the issue might relate to other workflows that run on a schedule or are triggered by webhooks in the background; it might be that the people reporting this didn’t think in this direction either.

Also, when browsing the n8n repo, I found that the pg-promise lib (which effectively creates and ends the pools) is being used not only in the PG SQL nodes, but in some others too.

That is part of why we ask for workflows: they allow us to see more than just the names of the nodes; we can also see the versions, which helps.

The internal dev ticket for this is NODE-845. It is set as high priority, so I suspect it will be fixed soon.

Good news: we have a PR in that has just been reviewed, and the test case provided is fixed with it, so hopefully that will reduce the number of these errors and get everything working again.

@Jon that’s good news! When will this be released? We just updated from 0.236 to 1.9.3 and we are running into these errors in production workloads.

Hey @sebasortiz.dev,

Our main releases are weekly, so unless we do a 1.11.1 today or early next week, it will likely end up in 1.12.0 next week.

The new version [email protected] has been released, which includes GitHub PR 7417.
