PostgreSQL Credentials using SSH tunnel

Describe the problem/error/question

We have recently been testing the cloud offering of n8n. We have three PostgreSQL RDS instances in AWS that reside in private subnets and are reachable only through a bastion host. When setting up the credentials, selecting SSH Tunnel, and adding the authentication information, we keep getting an error. The same credentials work when testing an SSH-only connection.

What is the error message (if any)?

Invalid username

Please share your workflow

NA


Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 0.233.1 (Latest Stable)
  • **Database (default: SQLite):** Cloud offering
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** none
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** cloud
  • **Operating system:**

Hey @jespinoza,

Welcome to the community :raised_hands:

Does it just say "invalid username" or is there more to it? Normally we return the error we get back from the service, but having the full error / stack will help to work out whether this is an issue with the tunnel or with connecting to Postgres once the tunnel is established.

Hi @Jon. Thanks a lot for your reply. I'd love to give you logs or something more verbose, but that is literally all I am getting. I even tried with the self-hosted n8n instance we are running, but I get the same result.

This is the only output I get when testing the connection:
[screenshot: credential test showing only "Invalid username"]

Regards,
JC

Hey @jespinoza,

What happens if you ignore the credential test and try to use the node?

This is the output I get from a SELECT statement using those credentials:

```
Error: Invalid username
    at Client.connect (/usr/local/lib/node_modules/n8n/node_modules/ssh2/lib/client.js:203:13)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/transport/index.js:99:23
    at new Promise (<anonymous>)
    at configurePostgres (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/transport/index.js:84:26)
    at Object.router (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/router.js:39:36)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:652:28)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:596:53
```
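
For what it's worth, the trace points at the ssh2 package itself rather than at Postgres, so the tunnel can be reproduced outside n8n with the same package. A minimal sketch, assuming placeholder values for the bastion host, RDS endpoint, user, and key path:

```typescript
// Minimal tunnel reproduction using the same ssh2 package the stack trace
// points at. The bastion host, RDS endpoint, user, and key path below are
// placeholders for this setup, not values from the thread.
import { readFileSync } from 'fs';
import { Client } from 'ssh2';

const ssh = new Client();

ssh
  .on('ready', () => {
    // Forward a connection through the bastion to the private RDS endpoint.
    ssh.forwardOut(
      '127.0.0.1', 0,
      'mydb.xxxxxx.us-east-1.rds.amazonaws.com', 5432,
      (err, stream) => {
        if (err) {
          console.error('forwardOut failed:', err.message);
          ssh.end();
          return;
        }
        console.log('Tunnel established; the Postgres port is reachable');
        stream.end();
        ssh.end();
      },
    );
  })
  .on('error', (err) => console.error('SSH error:', err.message))
  .connect({
    host: 'bastion.example.com',
    port: 22,
    username: 'ec2-user',
    privateKey: readFileSync('/path/to/key.pem'),
  });
```

If a script like this connects with the same values, the network path and credentials are fine, and the problem is in how n8n hands them to ssh2.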

Hey @jespinoza,

So that looks like it is coming back from the tunnel connection itself; I would have expected the same details to fail outside of n8n as well.

Are you using a key or password for authentication? Is there anything in the logs on the bastion host?

Interestingly enough, I don't see any traffic reaching the subnet or the bastion host when trying to connect to Postgres; however, I do see traffic when retrying the connection from the SSH credentials. The traffic comes from 20.113.47.122.

AWS flow logs: [screenshot]

SSH log from the bastion host:

```
Jul 7 14:04:10 ip-172-31-102-75 sshd[10498]: Accepted publickey for ec2-user from 20.113.47.122 port 5376 ssh2: RSA SHA256:49aMzQ51uvL6hkX7NbeV52CO6bInlisPN4zfi2RyHlE
Jul 7 14:04:11 ip-172-31-102-75 sshd[10498]: pam_unix(sshd:session): session opened for user ec2-user by (uid=0)
Jul 7 14:04:11 ip-172-31-102-75 sshd[10532]: Received disconnect from 20.113.47.122 port 5376:11:
Jul 7 14:04:11 ip-172-31-102-75 sshd[10532]: Disconnected from 20.113.47.122 port 5376
Jul 7 14:04:11 ip-172-31-102-75 sshd[10498]: pam_unix(sshd:session): session closed for user ec2-user
```

Just to confirm, I added the IPs below:

20.79.227.226
20.79.72.36
20.113.47.122
20.218.202.73
20.79.232.36

Are those the only IPs I should be allowing?
Regards,
JC

Hey @jespinoza,

Yeah, those are currently the only IPs we are using for outbound traffic on Cloud. I guess if the SSH node is connecting correctly, then the IPs are all good.
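
For reference, allowlisting those addresses on the bastion's security group could look roughly like this in CDK; the VPC name, account, and region here are placeholder assumptions:

```typescript
// Sketch: allow SSH from the n8n Cloud egress IPs listed earlier on the
// bastion's security group. The VPC name, account, and region are placeholders.
import { App, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const N8N_CLOUD_EGRESS_IPS = [
  '20.79.227.226',
  '20.79.72.36',
  '20.113.47.122',
  '20.218.202.73',
  '20.79.232.36',
];

const app = new App();
const stack = new Stack(app, 'BastionAllowlist', {
  env: { account: '123456789012', region: 'us-east-1' }, // placeholders
});

// Look up the existing VPC by name (assumed) so the SG lands in the right place.
const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { vpcName: 'main-vpc' });
const sg = new ec2.SecurityGroup(stack, 'BastionSg', {
  vpc,
  description: 'SSH access for n8n Cloud',
});

for (const ip of N8N_CLOUD_EGRESS_IPS) {
  sg.addIngressRule(ec2.Peer.ipv4(`${ip}/32`), ec2.Port.tcp(22), 'n8n Cloud egress IP');
}
```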

What do you see when you try it from your local instance as well is it the same thing?

Yes sir. If I try connecting to the database using SSH tunneling through the bastion from the self-hosted Docker instance we are running on EC2 (so going EC2 to EC2 to RDS), it gives me the same output. And same thing: if I try the SSH credentials, it does connect. I'm starting to believe that "invalid username" is not really the problem, but that is just an assumption. We decided to test the cloud flavor because we were having some issues inserting data from one Postgres DB into another (both RDS); it seemed it wasn't inserting NULL values but empty values instead, but that is another topic. Any help is much appreciated.

Regards,
JC

Hey @jespinoza,

I did take a look at the code, and we are not setting "invalid username" anywhere ourselves, so it could be coming back from the package somewhere.
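
If I'm reading the ssh2 source right, the message does come from its connect() validation, which the stack trace points at. Roughly this check (a paraphrase, not the exact source):

```typescript
// Paraphrase (not the exact source) of the validation in ssh2's
// lib/client.js connect() that the stack trace points at: it throws
// "Invalid username" when the username it receives is missing or not a string.
function validateConnectUsername(cfg: { username?: unknown; user?: unknown }): string {
  const username = cfg.username ?? cfg.user;
  if (typeof username !== 'string') {
    throw new Error('Invalid username');
  }
  return username;
}
```

If that is the check being hit, it would suggest that when the bug triggers, n8n hands ssh2 something other than a plain string for the username, even though the credential itself is filled in.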

I will see if I can free up a bit of time early next week to give it a test locally, rather than setting up an RDS instance, just to check whether it generally works.

Thanks a lot @Jon, really appreciate the help here. Have an excellent weekend ahead.

Regards,
JC


Hey @Jon, hope everything is well on your end. Just wondering if you have any updates on this topic.

Best Regards,
JC

Hey @jespinoza,

To be honest, I have not had a chance to look at this properly yet; I have been caught up with some other issues. I will test this now.

Hey @jespinoza,

I have just given it a go, and it looks like this issue is related to a problem we introduced when we started to mask private keys. It only happens if you are using private keys; a quick test with a password instead was working, but that is not ideal.

I believe we did fix the private key issue in the SSH node, but we must have missed a couple of nodes. I will see what is involved in getting this one fixed.

Hi @Jon, happy Friday. Thanks a lot for the information provided. Looking forward to hearing from you.

Regards,
JC

Hi @Jon - thanks for looking into this. Is this fix still on the n8n roadmap? We will continue to use the self-hosted community edition until we are able to connect to our DB externally through SSH.

Thanks again,
David

Hey @dahmadi,

From 1.2.1 the tunnel will work with SSH keys.


Excellent, thanks so much @Jon!


Hi @Jon, thanks a lot for the follow-up on this issue. We upgraded today to n8n v1.3.1 in the cloud workspace, and it seems the issue is partially resolved. What I mean by that is that an SSH connection can now be established, but it seems the process that opens the connection never gets closed, giving the below error message:
[screenshot of the error]
If I restart the workspace and try the SSH tunneling, it will work, because the restart kills the open process; but as soon as I test the connection again, the same error appears. Let me know if you require any further detail.

Regards,
JC

Hey @jespinoza,

Interesting I didn’t run into that issue locally when testing or on my main n8n instance.

I will check the Postgres tunnel again. Does this always happen, or does it sometimes work? And how many input items are going into the node?
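
In the meantime, the symptom sounds like the tunnel is not being torn down after the credential test. A sketch of the cleanup pattern that avoids a lingering connection, with placeholder hosts and paths rather than the actual n8n code:

```typescript
// Sketch of the cleanup the credential test may be missing: whatever happens
// to the forwarded stream, the SSH client must always be end()ed, otherwise
// the tunnel lingers until the process (or workspace) restarts.
// Hosts and paths are placeholders.
import { readFileSync } from 'fs';
import { Client } from 'ssh2';

function testThroughTunnel(): Promise<void> {
  const ssh = new Client();
  return new Promise<void>((resolve, reject) => {
    ssh
      .on('ready', () => {
        ssh.forwardOut(
          '127.0.0.1', 0,
          'mydb.xxxxxx.us-east-1.rds.amazonaws.com', 5432,
          (err, stream) => {
            if (err) return reject(err);
            // ...run the Postgres connection check over `stream` here...
            stream.end();
            resolve();
          },
        );
      })
      .on('error', reject)
      .connect({
        host: 'bastion.example.com',
        username: 'ec2-user',
        privateKey: readFileSync('/path/to/key.pem'),
      });
  }).finally(() => ssh.end()); // always tear the tunnel down, success or failure
}
```

If the test path skips that final end(), each retry would pile up another open tunnel until the workspace restarts, which would match what you are seeing.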