We have recently been testing the cloud offering of n8n. We have 3 Postgres RDS instances in AWS that reside in private subnets accessible only through a bastion host. When setting up the credentials, selecting SSH tunnel, and adding the authentication information, we keep getting an error. The same credentials work when testing SSH only.
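For context, what we are trying to achieve is the equivalent of the following (hosts, usernames, key paths and database names are placeholders, and this is only a rough sketch of the idea, not what n8n does internally): an SSH connection to the bastion, a forwarded port through it, and a Postgres connection over that forward.

```typescript
import { readFileSync } from 'fs';
import { createServer } from 'net';
import { Client as SshClient } from 'ssh2';
import { Client as PgClient } from 'pg';

const ssh = new SshClient();

ssh
  .on('ready', () => {
    // Local proxy: anything connecting to 127.0.0.1:5433 is forwarded
    // through the bastion to the private RDS endpoint.
    const proxy = createServer((socket) => {
      ssh.forwardOut(
        socket.remoteAddress ?? '127.0.0.1',
        socket.remotePort ?? 0,
        'mydb.xxxxxx.eu-west-1.rds.amazonaws.com', // placeholder RDS endpoint
        5432,
        (err, stream) => {
          if (err) return socket.destroy();
          socket.pipe(stream).pipe(socket);
        },
      );
    });

    proxy.listen(5433, '127.0.0.1', async () => {
      const pg = new PgClient({
        host: '127.0.0.1',
        port: 5433,
        user: 'dbuser',     // placeholder
        password: 'dbpass', // placeholder
        database: 'mydb',   // placeholder
      });
      await pg.connect();
      console.log(await pg.query('SELECT 1 AS ok'));
      await pg.end();
      proxy.close();
      ssh.end();
    });
  })
  .on('error', (err) => console.error('SSH error:', err.message))
  .connect({
    host: 'bastion.example.com',                  // placeholder bastion
    port: 22,
    username: 'ec2-user',
    privateKey: readFileSync('/path/to/key.pem'), // placeholder key path
  });
```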
What is the error message (if any)?
Invalid username
Please share your workflow
NA
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
Does it just say invalid username or is there more to it? Normally we would return the error we get back from the service, but having the full error / stack will help to work out whether this is an issue with the tunnel or with connecting to Postgres once the tunnel is established.
Hi @Jon. Thanks a lot for your reply. I'd love to give you logs or something more verbose, but that is literally all I am getting. I even tried with the self-hosted n8n instance we are running, but I get the same result.
This is the only output I get when testing the connection:
This is the output I get from a SELECT statement using those credentials:
Error: Invalid username
    at Client.connect (/usr/local/lib/node_modules/n8n/node_modules/ssh2/lib/client.js:203:13)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/transport/index.js:99:23
    at new Promise (<anonymous>)
    at configurePostgres (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/transport/index.js:84:26)
    at Object.router (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/router.js:39:36)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:652:28)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:596:53
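Reading the trace, the error seems to be thrown synchronously inside ssh2's Client.connect(), i.e. before anything is actually sent to the bastion, which would also line up with the traffic observation below. For comparison, the "SSH only" test I mentioned is essentially the equivalent of this (host, username and key path are placeholders), and that part works fine:

```typescript
import { readFileSync } from 'fs';
import { Client } from 'ssh2';

// Bare ssh2 connection test -- the same library the stack trace points at.
const conn = new Client();

conn
  .on('ready', () => {
    console.log('SSH handshake OK');
    conn.end();
  })
  .on('error', (err) => console.error('SSH error:', err.message));

try {
  conn.connect({
    host: 'bastion.example.com',
    port: 22,
    username: 'ec2-user',
    privateKey: readFileSync('/path/to/key.pem'),
  });
} catch (err) {
  // A synchronous throw here (like the 'Invalid username' above) would mean
  // the failure happens before any packet reaches the bastion.
  console.error('Thrown before connecting:', (err as Error).message);
}
```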
Interestingly enough, I don’t see any traffic coming to the subnet or the bastion host when trying to connect to Postgres; however, I do see traffic arriving when testing the connection from the SSH credentials. The traffic comes from 20.113.47.122.
ssh log from the bastion host:
Jul 7 14:04:10 ip-172-31-102-75 sshd[10498]: Accepted publickey for ec2-user from 20.113.47.122 port 5376 ssh2: RSA SHA256:49aMzQ51uvL6hkX7NbeV52CO6bInlisPN4zfi2RyHlE
Jul 7 14:04:11 ip-172-31-102-75 sshd[10498]: pam_unix(sshd:session): session opened for user ec2-user by (uid=0)
Jul 7 14:04:11 ip-172-31-102-75 sshd[10532]: Received disconnect from 20.113.47.122 port 5376:11:
Jul 7 14:04:11 ip-172-31-102-75 sshd[10532]: Disconnected from 20.113.47.122 port 5376
Jul 7 14:04:11 ip-172-31-102-75 sshd[10498]: pam_unix(sshd:session): session closed for user ec2-user
Yeah those are currently the only IPs we are using for outbound traffic on Cloud. I guess if the SSH node is connecting correctly then the IPs are all good.
What do you see when you try it from your local instance as well? Is it the same thing?
Yes sir. If I try connecting to the database using SSH tunneling through the bastion from the self-hosted Docker instance we are running on EC2 (so going EC2 to EC2 to RDS), it gives me the same output. And same as before: if I test the SSH credentials on their own, they do connect. I’m starting to believe that "invalid username" is not really the problem, but that is just an assumption. We decided to test the cloud flavor because we were having some issues inserting data from one Postgres DB to another (both RDS): it seems it wasn’t inserting NULL values; instead, it was inserting empty values. But that is another topic. Any help is much appreciated.
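On that side topic, the distinction we are seeing is the one below (placeholder connection details, illustration only): the column either ends up with a real NULL or with an empty string, depending on what gets bound as the parameter.

```typescript
import { Client } from 'pg';

// Side-topic illustration only: '' and NULL are different values in Postgres,
// and whichever one is bound as the query parameter is what lands in the column.
async function main() {
  const pg = new Client({ host: '127.0.0.1', user: 'dbuser', password: 'dbpass', database: 'mydb' });
  await pg.connect();
  await pg.query('CREATE TEMP TABLE t (v text)');
  await pg.query('INSERT INTO t (v) VALUES ($1), ($2)', [null, '']);
  const { rows } = await pg.query('SELECT v, v IS NULL AS is_null FROM t');
  console.log(rows); // [ { v: null, is_null: true }, { v: '', is_null: false } ]
  await pg.end();
}

main().catch(console.error);
```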
I did take a look at the code and we are not setting "Invalid username" as an error message ourselves, so it could be coming back from the package somewhere.
I will see if I can free up a bit of time early next week to give it a test locally rather than setting up an RDS instance, just to check if it generally works.
I have just given it a go and it looks like this issue is related to a problem we introduced when we started to mask the private keys. The issue only happens if you are using private keys; a quick test with a password instead was working, but that is not ideal.
I believe we fixed the private key issue in the SSH node, but we must have missed a couple of nodes. I will see what is involved in getting this one fixed.
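For reference, the only difference between the two paths I tested is which auth option ends up being handed to ssh2; a rough sketch with placeholder values below. The masking issue presumably mangles the privateKey value before it gets that far, but that part is an assumption until the fix is confirmed.

```typescript
import { readFileSync } from 'fs';
import { Client } from 'ssh2';

// Placeholder values throughout; this only shows which option differs
// between the two credential types.
const base = { host: 'bastion.example.com', port: 22, username: 'ec2-user' };

// Password-based SSH auth -- the quick test that worked.
const withPassword = new Client();
withPassword
  .on('ready', () => withPassword.end())
  .connect({ ...base, password: 'secret' });

// Private-key-based SSH auth -- the path that fails on Cloud. If the stored
// key were replaced by a masked placeholder before reaching connect(), only
// this variant would break (assumption, not confirmed internals).
const withKey = new Client();
withKey
  .on('ready', () => withKey.end())
  .connect({ ...base, privateKey: readFileSync('/path/to/key.pem') });
```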
Hi @Jon - thanks for looking into this. Is this fix still on the n8n roadmap? We will continue to use the self-hosted community edition until we are able to connect to our DB externally through SSH.
Hi @Jon, thanks a lot for the follow-up on this issue. We upgraded our cloud workspace to n8n v1.3.1 today and it seems the issue is partially resolved. What I mean by that is that an SSH connection can now be established, but it seems the process that opens the connection never gets closed, giving the error message below:
If I restart the workspace and then test the SSH tunneling it will work, because the restart kills the open process, but as soon as I test the connection again the same error appears. Let me know if you require any further detail.
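My guess (purely an assumption from the behaviour, not from n8n's code) is that whatever opens the SSH connection for the credential test never tears it down afterwards, i.e. something along these lines is what seems to be missing:

```typescript
import { readFileSync } from 'fs';
import { Client as SshClient } from 'ssh2';

// Illustration only, not n8n's actual code: whatever the credential test does
// over the tunnel, the SSH connection has to be ended once it finishes,
// otherwise the next test run collides with the still-open one.
function runOverTunnel(test: (ssh: SshClient) => Promise<void>): Promise<void> {
  return new Promise((resolve, reject) => {
    const ssh = new SshClient();
    ssh
      .on('ready', () => {
        test(ssh)
          .then(resolve, reject)
          .finally(() => ssh.end()); // <- the teardown that appears to be skipped
      })
      .on('error', reject)
      .connect({
        host: 'bastion.example.com',                  // placeholder
        port: 22,
        username: 'ec2-user',                         // placeholder
        privateKey: readFileSync('/path/to/key.pem'), // placeholder
      });
  });
}
```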