GKE Ingress

Describe the problem/error/question

We are trying to deploy an instance of n8n on one of our GKE clusters. Everything was fine until we decided to switch from a plain LoadBalancer to an Ingress. The very same deployment that works like a charm behind the LoadBalancer does not work when we put a GKE Ingress in front of it: the backend never becomes HEALTHY. But again, it is the same deployment; we can reach the NodePorts, the webserver is running, and we have confirmed everything is OK. The backend just refuses to move to HEALTHY, and we are very frustrated!

What is the error message (if any)?

The Ingress backend reports UNHEALTHY.

Please share your workflow

No workflow yet; it doesn't even start!

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @pablo.pinargote :wave: Welcome to the community :tada:

Can you go into a little bit more detail about what you mean by ‘backend config’? Is it the GKE BackendConfig object? If so, there are some limitations that are mentioned here: Ingress configuration on Google Cloud  |  Google Kubernetes Engine (GKE)

If you could give us some more details on exactly how you’ve configured n8n (and possibly some config settings you may have, of course with any sensitive information like keys redacted), that would be extremely helpful in diagnosing what happened here :bowing_man:

We also have a guide here on Kubernetes that contains some caveats: Google Cloud | n8n Docs

If you do eventually get n8n up and running but it’s starting slower than expected, you may need to adjust the timeout: Ingress configuration on Google Cloud  |  Google Kubernetes Engine (GKE)
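In case it helps, here is a rough, untested sketch (resource names are illustrative, not from the original setup) of an explicit BackendConfig that pins the load balancer health check path and raises the backend timeout, instead of relying on whatever GKE infers from the pod spec:

```hcl
# Sketch only: an explicit BackendConfig so the GCLB health check does not
# depend on what GKE derives from the container spec. Names are illustrative.
resource "kubernetes_manifest" "n8n_backend_config" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "n8n-backend-config"
      namespace = "default"
    }
    spec = {
      healthCheck = {
        type        = "HTTP"
        requestPath = "/healthz"
        port        = 5678
      }
      timeoutSec = 60 # raise if n8n starts slowly
    }
  }
}
```

The Service then opts into it via an annotation in its metadata, e.g. `"cloud.google.com/backend-config" = "{\"default\": \"n8n-backend-config\"}"`.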

What we are doing is pretty simple: we followed your guide and ended up with these Terraform definitions:

resource "google_compute_global_address" "default" {
  name = "n8n-playground-ip"
}

resource "google_dns_record_set" "n8n" {
  name = "n8n.my-domain.dev."
  type = "A"
  ttl  = 300

  managed_zone = "my-domain-dev" # This exists and is OK

  rrdatas = [google_compute_global_address.default.address]
}

resource "kubernetes_service_v1" "node_port" {
  metadata {
    labels = {
      name = "n8n-webserver-np"
    }
    name = "n8n-webserver-np"
  }

  spec {
    selector = {
      app = "n8n-webserver"
    }
    port {
      port        = 5678
      target_port = 5678
      name        = "web"
    }
    type = "NodePort"
  }
}

resource "kubernetes_deployment" "n8n_webserver" {
  metadata {
    name = "n8n-webserver"
    labels = {
      app = "n8n-webserver"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "n8n-webserver"
      }
    }

    template {
      metadata {
        labels = {
          app = "n8n-webserver"
        }
      }

      spec {
        container {
          image = "n8nio/n8n:1.12.0"
          name  = "n8n-webserver"
          port {
            container_port = 5678
          }
          env {
            name  = "N8N_PORT"
            value = "5678"
          }
          env {
            name  = "N8N_PROTOCOL"
            value = "http"
          }
          env {
            name  = "NODE_ENV"
            value = "production"
          }
          env {
            name  = "N8N_ENCRYPTION_KEY"
            value = "---"
          }
          env {
            name  = "DB_TYPE"
            value = "postgresdb"
          }
          env {
            name  = "DB_POSTGRESDB_HOST"
            value = "emapper-db-np.default.svc.cluster.local" # This exists and is OK
          }
          env {
            name  = "DB_POSTGRESDB_PORT"
            value = "5432"
          }
          env {
            name  = "DB_POSTGRESDB_USER"
            value = "---"
          }
          env {
            name  = "DB_POSTGRESDB_PASSWORD"
            value = "---"
          }
          env {
            name  = "DB_POSTGRESDB_SCHEMA"
            value = "n8n"
          }

          readiness_probe {
            http_get {
              path = "/"
              port = 5678
            }
            initial_delay_seconds = 10
            period_seconds        = 5
          }
        }
      }
    }
  }
}

Up to this point everything is OK, and if we expose the deployment through a default LoadBalancer, everything runs just fine.
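For reference, the working LoadBalancer variant isn't shown in the post; reconstructed from the NodePort definition above, it would look roughly like this (a sketch only):

```hcl
# Sketch only: the LoadBalancer Service the post says worked is not included
# above, so this reconstructs it from the NodePort service definition.
resource "kubernetes_service_v1" "load_balancer" {
  metadata {
    name = "n8n-webserver-lb"
  }

  spec {
    selector = {
      app = "n8n-webserver"
    }
    port {
      port        = 80
      target_port = 5678
    }
    type = "LoadBalancer"
  }
}
```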

But enter the Infamous INGRESS:

resource "kubernetes_ingress_v1" "n8n_ingress" {
  depends_on = [kubernetes_service_v1.node_port, google_compute_global_address.default]

  metadata {
    name = "n8n-ingress"

    annotations = {
      "kubernetes.io/ingress.global-static-ip-name" = google_compute_global_address.default.name
      "kubernetes.io/ingress.allow-http"            = "true"
      "kubernetes.io/ingress.class"                 = "gce"
    }
  }

  spec {
    default_backend {
      service {
        name = kubernetes_service_v1.node_port.metadata[0].name

        port {
          number = 5678
        }
      }
    }
  }
}

And it does not work:

The instance group is OK: I can reach the webserver using curl from other pods, and from outside through the plain load balancer, etc. But through the Ingress, nothing; the backend never comes up.

Hope you can help us.
Thanks.

OMG, the problem turned out to be this:

readiness_probe {
  http_get {
    path = "/"
    port = 5678 # Adjust the port as needed
  }
  initial_delay_seconds = 10
  period_seconds        = 5
}

We ended up changing it to:

readiness_probe {
  http_get {
    path = "/healthz"
    port = 5678
  }
  initial_delay_seconds = 10
  period_seconds        = 5
}

By the way, your samples do not mention the need for a readiness_probe section. It would be worth adding that to the docs for the case where you want to use an Ingress instead of a LoadBalancer.


Thanks for posting your solution, and for the heads up!