Kubernetes deployment can't make network requests

Hi! I’ve created a deployment file with a load balancer IP on my local network. I also set up a PVC for the deployment, which is working great for persisting data.

I made a workflow with a Webhook trigger (from a template) and tested it with Postman against my load balancer IP; I just had to swap localhost for the load balancer IP, and everything worked great.

However, when I make any outbound HTTP request, I get the error below.

Describe the issue/error/question

I was originally trying to add my GitHub PAT for auth. It saved the credential as expected, but then gave a pretty nondescript error.

I then played with just an HTTP Request node, trying a GET against the Star Wars API (my favorite for testing) at https://swapi.dev/api/. That gives me a DNS resolution error.

What is the error message (if any)?

{"status":"rejected","reason":{"message":"getaddrinfo EAI_AGAIN swapi.dev","name":"Error","stack":"Error: getaddrinfo EAI_AGAIN swapi.dev\n    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)","code":"EAI_AGAIN"}}
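`EAI_AGAIN` is a resolver error: the pod could not reach a DNS server at all. A quick way to confirm that from inside the container (a sketch, assuming `kubectl` access and the `n8n-deployment` name from the manifest shared below):

```shell
# Open a shell in the running n8n pod.
kubectl exec -it deploy/n8n-deployment -- sh

# Inside the container:
cat /etc/resolv.conf   # which nameserver is the pod configured to use?
nslookup swapi.dev     # does that nameserver actually answer?
```

If `/etc/resolv.conf` points at a cluster IP that nothing is answering on, the lookup times out with exactly this kind of error.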


Information on your n8n setup

  • n8n version: 0.183.0
  • Database you’re using (default: SQLite): SQLite (in a PVC)
  • Running n8n with the execution process [own(default), main]: own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker image, running on a local microk8s cluster.

Deployment/Service file:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-deployment
  labels:
    app: n8n
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 5678
          volumeMounts:
            - name: n8n-data
              mountPath: /home/node/.n8n
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: pvc-n8n
      nodeSelector:
        server: mini

---
kind: Service
apiVersion: v1
metadata:
  name: n8n-service
spec:
  selector:
    app: n8n
  ports:
  - name: web
    protocol: TCP
    port: 5678
    targetPort: 5678
  type: LoadBalancer
  loadBalancerIP: 10.0.0.70
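One note on the service: `loadBalancerIP` is only honored if something in the cluster actually implements LoadBalancer services (on bare metal that is typically MetalLB, e.g. the microk8s `metallb` addon). A quick sanity check:

```shell
# EXTERNAL-IP should show 10.0.0.70; if it stays <pending>,
# nothing is fulfilling the LoadBalancer request.
kubectl get svc n8n-service
```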

For good measure, screenshots of the webhook workflow


Woke up and wanted to do some more troubleshooting.

I decided to hit another node on my local network. Got through without any issue.

Decided then to try out a public IP address. Used Google’s DNS server address, https://8.8.8.8

{"status":"rejected","reason":{"message":"getaddrinfo EAI_AGAIN dns.google","name":"Error","stack":"Error: getaddrinfo EAI_AGAIN dns.google\n    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)","code":"EAI_AGAIN"}}

So this is wild to me? The request to 8.8.8.8 itself must have gotten through (8.8.8.8 redirects HTTPS traffic to https://dns.google), but the follow-up lookup of dns.google fails the same way.


I also ran an nslookup for google.com from another machine and tried the request against the returned IP address directly, and got the following:

{"status":"rejected","reason":{"message":"self signed certificate","name":"Error","stack":"Error: self signed certificate\n    at TLSSocket.onConnectSecure (node:_tls_wrap:1532:34)\n    at TLSSocket.emit (node:events:527:28)\n    at TLSSocket._finishInit (node:_tls_wrap:946:8)\n    at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:727:12)","code":"DEPTH_ZERO_SELF_SIGNED_CERT"}}
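A self-signed certificate coming back from what should be a public IP usually means something else on the path answered the TLS handshake (a captive portal, a proxy, or a blocking page such as Pihole's). One way to see who is actually responding (a sketch; substitute the IP your nslookup returned for the placeholder):

```shell
# Print the subject/issuer of whatever certificate the IP presents.
openssl s_client -connect <resolved-ip>:443 -servername google.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```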

Also, in case this may help, I use a separate deployment for Pihole, with an Unbound deployment for its upstream server.

Whole network uses my Pihole for DNS.

I checked my logs in Pihole and don’t see any DNS requests for these IPs or the swapi address.

Makes me think the network requests aren’t getting out of the n8n pod at all
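That hypothesis is easy to verify from outside the pod: if the cluster has no DNS service at all, lookups never leave the node. A couple of checks (assuming the standard labels the CoreDNS addon uses):

```shell
# On microk8s without the dns addon enabled, both of these come back empty.
kubectl get svc -n kube-system kube-dns
kubectl get pods -n kube-system -l k8s-app=kube-dns
```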

Welp, my buddy who got me into Kubernetes and microk8s looked at the post and had it figured out in about two seconds.

I needed to enable the DNS addon for microk8s and specify my local IP address for that DNS.

microk8s enable dns:10.0.0.80
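To confirm the addon took effect, something like this works (the busybox pod is just a throwaway for testing resolution from inside the cluster):

```shell
# CoreDNS should be Running after enabling the addon.
microk8s kubectl get pods -n kube-system -l k8s-app=kube-dns

# And lookups from a fresh pod should now succeed.
microk8s kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup swapi.dev
```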

Easy peasy. I’m up with n8n and off to do some fun stuff with it 🙂


I am a bit late to the game, but I have been trying to get n8n running on a VPS cluster with no luck.

When you say “my local IP address”, do you mean I should use the public IP address of the VPS server, or (naive question, perhaps) a local address inside the cluster, for example 127.0.0.1?

root@blablabla:~# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
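For context: the address passed to `microk8s enable dns:<ip>` is the *upstream* resolver that CoreDNS forwards non-cluster names to, not a cluster-internal address, so 127.0.0.1 would just make CoreDNS forward to itself. On a VPS that argument would typically be a public resolver (e.g. 8.8.8.8) or the provider's resolver. You can inspect what CoreDNS is currently forwarding to:

```shell
# The forward directive in the CoreDNS Corefile names the current upstream.
kubectl -n kube-system get configmap coredns -o yaml | grep -A3 forward
```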