Postgres Node Connection refused error

Describe the problem/error/question

We have multiple workflows that use a Postgres node to read data from or write data to a database. The nodes worked fine before we upgraded n8n to version 1.78.1; after upgrading, the node throws a Connection refused or Connection Terminated Unexpectedly error. The workflow linked here uses a Loop: the Postgres node works fine for a couple hundred iterations but then errors out unexpectedly partway through.

What is the error message (if any)?

Connection refused
127.0.0.1:34827

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.78.1
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system:

This is just a wild guess, but there’s a chance this is related to a recent change in the Postgres Credentials item that sets maxConnections (presumably to control a pooled connection count).

Open your Postgres credentials and, if the Maximum Number of Connections parameter is there (defaulted to 100), try increasing or decreasing it to see whether that affects how your workflow executions go.
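If it helps to compare that pool limit against what the server actually sees, here is a rough way to check both from outside n8n. This is only a minimal sketch using the node-postgres client; the host, port, and credentials are placeholders, so point it at the same database your workflows use (e.g. through your tunnel):

```typescript
// Compare the server-side connection cap with the connections currently open,
// to judge whether the credential's pool limit or the server limit is the bottleneck.
import { Client } from "pg";

async function main(): Promise<void> {
  // Placeholder connection details; use the same endpoint your workflows use.
  const client = new Client({
    host: "127.0.0.1",
    port: 5432,
    user: "postgres",
    password: "secret",
    database: "postgres",
  });
  await client.connect();

  // Server-side cap on concurrent connections.
  const max = await client.query("SHOW max_connections;");
  // Connections currently open, grouped by the client application that opened them.
  const current = await client.query(
    "SELECT application_name, count(*) AS connections FROM pg_stat_activity GROUP BY application_name;"
  );

  console.log("max_connections:", max.rows[0].max_connections);
  console.table(current.rows);

  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```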

Thank you. We tried both increasing and decreasing the maximum number of connections and still receive the same errors. It seems it may be related to the following issue: Postgres node with ssh tunnel - random Connection refused · Issue #12807 · n8n-io/n8n · GitHub

We do connect to the DB via SSH tunneling. Any other recommendations? This has affected several of our workflows that interact with Postgres. And the errors did begin immediately after upgrading the n8n cloud version. Thank you.

@hubschrauber - I also verified that max_connections on Postgres is > 100.

If a fix/update is not coming soon, please let us know whether it is possible to downgrade the instance version. Thanks for your help.

I don’t work for n8n, so I don’t have the “inside track” on what your options would be for downgrading or what the development schedule would be. Tagging @Jon to be sure it gets on someone’s radar though.


At the moment there is no workaround that I can think of, other than maybe trying a retry. The good news is that this issue is in our “To pick up” queue, so it should be fixed soon.

For now you could try emailing [email protected] to see if we can restore the old version of n8n for you.
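For reference, “trying a retry” amounts to something like the sketch below if you were doing it in code rather than via the node’s retry settings. This is only an illustration with node-postgres and placeholder connection details, not anything specific to the n8n node:

```typescript
// Retry a query a few times with a growing back-off before giving up,
// so a transient "Connection refused" from the tunnel doesn't kill the run.
import { Pool } from "pg";

// Placeholder connection string; point it at the same database as the workflow.
const pool = new Pool({ connectionString: "postgres://user:pass@127.0.0.1:5432/db" });

async function queryWithRetry(sql: string, attempts = 3): Promise<any[]> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const result = await pool.query(sql);
      return result.rows;
    } catch (err) {
      lastError = err;
      // Wait a little longer after each failure before retrying.
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
  throw lastError;
}

// Usage: retry a simple read up to three times.
queryWithRetry("SELECT now();")
  .then((rows) => console.log(rows))
  .catch((err) => console.error("All retries failed:", err));
```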


Hi @Jon - do you know if this fix is in the near-term pipeline? Any workflow with Postgres that runs over a certain length of time requires us to chunk it apart and run it manually multiple times (each time, it seems, we have to restart n8n or modify the Postgres connection to release the connection).

I don’t want to roll back our environment version because we have updated several workflows to use the Postgres v2.5 node, and I am not sure what would happen if we rolled back, since I believe the prior node version we were using was 2.3. Thanks.
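For context, the manual chunking described above is essentially keyset pagination: read the table in small batches so each run only holds a connection briefly. A minimal sketch, assuming a hypothetical events table with id and payload columns:

```typescript
// Read a large table in keyset-paginated batches instead of one long-running query.
// The "events" table and "id"/"payload" columns are hypothetical placeholders.
import { Client } from "pg";

async function readInChunks(batchSize = 500): Promise<void> {
  // Placeholder connection string.
  const client = new Client({ connectionString: "postgres://user:pass@127.0.0.1:5432/db" });
  await client.connect();

  let lastId = 0;
  for (;;) {
    const { rows } = await client.query(
      "SELECT id, payload FROM events WHERE id > $1 ORDER BY id LIMIT $2;",
      [lastId, batchSize]
    );
    if (rows.length === 0) break;

    // ...process the batch here...

    lastId = rows[rows.length - 1].id;
  }

  await client.end();
}

readInChunks().catch((err) => {
  console.error(err);
  process.exit(1);
});
```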

Hi @dahmadi
This issue is still under our “to pick up” label and hopefully it won’t take too long for us to have a fix for it, but we’re not quite there yet.

If you change your mind and want to be rolled back, let me know!

Hi @mariana-na, are there any updates from your end? This is blocking us badly. We are also happy to contribute or spend some money on it to have it fixed ASAP… please let me know how to proceed here.

Hi @Jannik_Z ,

I’m sorry to hear that this is blocking you so much!
We’re working to get to it ASAP alongside our other ongoing priorities.
As it stands, we aim to get to this issue before the end of the quarter, hopefully this month, although it’s not a promise.

I’ll keep you updated!

We did release a small option to allow setting the timeout to something longer, hoping to alleviate early crashes: feat(core): Add a new option to customize SSH tunnel idle timeout by netroy · Pull Request #14522 · n8n-io/n8n · GitHub


Hi everyone!

A fix has recently been released for this in v1.99.0 :tada: fix: Postgres node with ssh tunnel getting into a broken state and not being recreated by despairblue · Pull Request #16054 · n8n-io/n8n · GitHub

This is still a beta version, but it will be moved to stable next week, if you’d rather wait.

Do let us know if you see any issues/errors here. And thanks a lot for the wait! :n8n: