Connection lost error in n8n built on Kubernetes

Hi all, I know many threads have been opened on this subject, but believe me, I’ve tried everything and still can’t figure it out. My setup is as follows: I first installed PostgreSQL and Redis on Kubernetes.

helm show values bitnami/redis > redis.yaml
helm upgrade --install n8n-redis bitnami/redis --namespace n8n-ns --create-namespace --values redis.yaml
helm show values bitnami/postgresql > postgresql.yaml
helm upgrade --install n8n-postgresql bitnami/postgresql --namespace n8n-ns --create-namespace --values postgresql.yaml

Then I installed RedisInsight and pgAdmin to verify access to both services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-ui
  labels:
    app: redis-ui
spec:
  selector:
    matchLabels:
      app: redis-ui
  template:
    metadata:
      labels:
        app: redis-ui
    spec:
      hostAliases:
      - ip: "1.1.1.1"
        hostnames:
        - "redisui.kurremkarmerruk.local"
      containers:
      - name:  redis-ui
        image: redislabs/redisinsight:1.13.0
        imagePullPolicy: IfNotPresent
        env:
        - name: RITRUSTEDORIGINS
          value: http://redisui.kurremkarmerruk.local
        volumeMounts:
        - name: db
          mountPath: /db
        ports:
        - containerPort: 8001
          protocol: TCP
      volumes:
      - name: db
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: redis-ui
spec:
  type: ClusterIP
  ports:
    - name: default
      port: 8001
      targetPort: 8001
  selector:
    app: redis-ui
--- 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ui
spec:
  ingressClassName: nginx-ingress
  rules:
  - host: redisui.kurremkarmerruk.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: redis-ui
            port:
              number: 8001

helm show values runix/pgadmin4 > pgadmin.yaml
helm upgrade --install n8n-pgadmin runix/pgadmin4 --namespace n8n-ns --create-namespace --values pgadmin.yaml

Once I was sure that Redis and PostgreSQL were working, I created the database and proceeded with the n8n installation.

The passwords don’t matter here; this is a test environment.

CREATE DATABASE n8ndatabase OWNER postgres;
helm pull oci://8gears.container-registry.com/library/n8n --untar
# README
# High level values structure, overview and explanation of the values.yaml file.
# 1. Global and chart wide values, like the image repository, image tag, etc.
# 2. Ingress, (default is nginx, but you can change it to your own ingress controller)
# 3. Main n8n app configuration + kubernetes specific settings
# 4. Worker related settings + kubernetes specific settings
# 5. Webhook related settings + kubernetes specific settings
# 6. Raw Resources to pass through your own manifests like GatewayAPI, ServiceMonitor etc.
# 7. Valkey/Redis related settings and kubernetes specific settings

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: "1.111.0"
imagePullSecrets: []

nameOverride:
fullnameOverride:

hostAliases:
  # - ip: 1.1.1.1
  #   hostnames:
  #     - n8n.kurremkarmerruk.local

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-body-size: "512m"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
    # nginx.ingress.kubernetes.io/websocket-services: "system-n8n"
    # nginx.ingress.kubernetes.io/enable-websockets: "true"
    # nginx.ingress.kubernetes.io/upstream-vhost: "n8n.kurremkarmerruk.local"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Origin "http://n8n.kurremkarmerruk.local";
  className: "nginx-ingress"
  hosts:
    - host: n8n.kurremkarmerruk.local
      paths:
        - /
  tls: []
    # - hosts:
    #    - workflow.example.com
    #  secretName: host-domain-cert

main:
  # See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values.
  config:
    EXECUTIONS_MODE: "regular"
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "n8n-postgresql.n8n-ns.svc.cluster.local"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_DATABASE: "n8ndatabase"
    DB_POSTGRESDB_USER: "postgres"
    DB_POSTGRESDB_SCHEMA: "public"
    QUEUE_BULL_REDIS_HOST: "n8n-redis-master.n8n-ns.svc.cluster.local"
    QUEUE_BULL_REDIS_PORT: "6379"
    N8N_ENCRYPTION_KEY: "gfQ2hNAdduNQwaqX2yDsr2AX247mgAtVQC2fFUthNFYWVnN3f"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: "true"
    N8N_PORT: "5678"
    N8N_PROTOCOL: "http"
    N8N_SECURE_COOKIE: "false"
    N8N_HOST: "n8n.kurremkarmerruk.local"
    N8N_EDITOR_BASE_URL: "http://n8n.kurremkarmerruk.local/"
    WEBHOOK_URL: "http://n8n.kurremkarmerruk.local/"
    N8N_CORS_ALLOWED_ORIGINS: "http://n8n.kurremkarmerruk.local"
    N8N_TRUST_PROXY: "true"
    N8N_PROXY_HOPS: "1"
    N8N_PUSH_BACKEND: "websocket"
    # N8N_PUSH_ENDPOINT: "/push"
    # N8N_LOG_LEVEL: "debug"
    # DEBUG: "*"
  secret:
    DB_POSTGRESDB_PASSWORD: "2fFUthNFYWVnN3f"
    QUEUE_BULL_REDIS_PASSWORD: "NHxRUfa3gZyKppgscFETqAQAN"

  extraEnv:

  persistence:
    enabled: false 
    type: emptyDir # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    storageClass: "default"
    accessModes:
      - ReadWriteOnce
    size: 1Gi

  extraVolumes: []
  extraVolumeMounts: []
  replicaCount: 1
  deploymentStrategy:
    type: "Recreate"

  serviceAccount:
    create: true
    annotations: {}
    name: ""

  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
  lifecycle: {}
  command: []

  livenessProbe:
    httpGet:
      path: /healthz
      port: http

  readinessProbe:
    httpGet:
      path: /healthz
      port: http

  initContainers: []

  service:
    enabled: true
    annotations: {}
    type: ClusterIP
    port: 80

  resources: {}

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80

  nodeSelector: {}
  tolerations: []
  affinity: {}

worker:
  enabled: true

  # additional (to main) config for worker
  config:
    EXECUTIONS_MODE: "queue"
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "n8n-postgresql.n8n-ns.svc.cluster.local"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_DATABASE: "n8ndatabase"
    DB_POSTGRESDB_USER: "postgres"
    DB_POSTGRESDB_SCHEMA: "public"
    QUEUE_BULL_REDIS_HOST: "n8n-redis-master.n8n-ns.svc.cluster.local"
    QUEUE_BULL_REDIS_PORT: "6379"
    QUEUE_BULL_CONCURRENCY: 1
    # N8N_LOG_LEVEL: "debug"
    # DEBUG: "*"

  secret:
    DB_POSTGRESDB_PASSWORD: "2fFUthNFYWVnN3f"
    QUEUE_BULL_REDIS_PASSWORD: "NHxRUfa3gZyKppgscFETqAQAN"

  extraEnv: {}
  concurrency: 10

  persistence:
    enabled: false
    type: emptyDir # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    storageClass: "-"
    accessModes:
      - ReadWriteOnce
    size: 1Gi

  replicaCount: 1
  deploymentStrategy:
    type: "Recreate"

  serviceAccount:
    create: true
    annotations: {}
    name: ""

  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
  lifecycle: {}
  command: []
  commandArgs: []

  livenessProbe:
    httpGet:
      path: /healthz
      port: http

  readinessProbe:
    httpGet:
      path: /healthz
      port: http
  initContainers: []

  service:
    annotations: {}
    type: ClusterIP
    port: 80

  resources: {}

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80

  nodeSelector: {}
  tolerations: []
  affinity: {}


webhook:
  enabled: true
  # additional (to main) config for webhook
  config:
    EXECUTIONS_MODE: "queue"
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "n8n-postgresql.n8n-ns.svc.cluster.local"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_DATABASE: "n8ndatabase"
    DB_POSTGRESDB_USER: "postgres"
    DB_POSTGRESDB_SCHEMA: "public"
    QUEUE_BULL_REDIS_HOST: "n8n-redis-master.n8n-ns.svc.cluster.local"
    QUEUE_BULL_REDIS_PORT: "6379"

  # additional (to main) config for worker
  secret:
    DB_POSTGRESDB_PASSWORD: "2fFUthNFYWVnN3f"
    QUEUE_BULL_REDIS_PASSWORD: "NHxRUfa3gZyKppgscFETqAQAN"

  extraEnv: {}
  persistence:
    enabled: false
    type: emptyDir    # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
    storageClass: "-"
    accessModes:
      - ReadWriteOnce
    size: 1Gi

  replicaCount: 1

  deploymentStrategy:
    type: "Recreate"

  nameOverride: ""
  fullnameOverride: ""

  serviceAccount:
    create: true
    annotations: {}
    name: ""

  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

  securityContext: {}
  lifecycle: {}
  command: []
  commandArgs: []

  livenessProbe:
    httpGet:
      path: /healthz
      port: http

  readinessProbe:
    httpGet:
      path: /healthz
      port: http

  initContainers: []

  service:
    annotations: {}
    type: ClusterIP
    port: 80

  resources: {}
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}

extraManifests: []
extraTemplateManifests: []
valkey:
  enabled: false

helm upgrade --install system-n8n . --namespace n8n-ns --create-namespace 

The n8n UI now loads and I can start creating a workflow, but I get a ‘Connection lost’ error in the top-right corner.

You have a connection issue or the server is down.
n8n should reconnect automatically once the issue is resolved.

I can’t see anything about this issue in the logs. When I enable debug mode, a lot of logs are generated; I can share them if anyone wants, but I didn’t post them directly because I suspect there’s just one small setting I need to change.
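One workaround worth trying while debugging: n8n’s push channel supports a server-sent-events backend as well as WebSocket, and SSE is a plain HTTP response stream, so it survives proxies that mishandle the Upgrade handshake. A minimal sketch of that change in the chart’s values (assuming the same `main.config` structure as above):

```yaml
main:
  config:
    # Fall back to server-sent events for the editor push channel.
    # SSE needs no Upgrade handshake, so a proxy that strips the
    # WebSocket headers no longer breaks the connection.
    N8N_PUSH_BACKEND: "sse"
```

If the banner disappears with SSE, that narrows the problem down to the WebSocket path through the proxy rather than n8n itself.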

➜  ~ kubectl get pod -n n8n-ns
NAME                                    READY   STATUS    RESTARTS   AGE
n8n-pgadmin-pgadmin4-5fd5b49887-ms8bd   1/1     Running   0          61s
n8n-postgresql-0                        1/1     Running   0          33h
n8n-redis-master-0                      1/1     Running   0          33h
redis-ui-78b664f7c5-zcn78               1/1     Running   0          11m
system-n8n-768cfb777c-9nqhh             1/1     Running   0          104s
system-n8n-webhook-67b475fc7-kbxlr      1/1     Running   0          105s
system-n8n-worker-64b6944b56-ptfsp      1/1     Running   0          104s

➜  ~ k logs -n n8n-ns -f --selector app.kubernetes.io/instance=system-n8n
 - N8N_BLOCK_ENV_ACCESS_IN_NODE -> The default value of N8N_BLOCK_ENV_ACCESS_IN_NODE will be changed from false to true in a future version. If you need to access environment variables from the Code Node or from expressions, please set N8N_BLOCK_ENV_ACCESS_IN_NODE=false. Learn more: https://docs.n8n.io/hosting/configuration/environment-variables/security/

[license SDK] Skipping renewal on init: renewOnInit is disabled in config
[license SDK] Skipping renewal on init: autoRenewEnabled is disabled in config
[license SDK] Skipping renewal on init: license cert is not initialized

n8n worker is now ready
 * Version: 1.111.0
 * Concurrency: 10


There are deprecations related to your environment variables. Please take the recommended actions to update your configuration:
 - N8N_RUNNERS_ENABLED -> Running n8n without task runners is deprecated. Task runners will be turned on by default in a future version. Please set `N8N_RUNNERS_ENABLED=true` to enable task runners now and avoid potential issues in the future. Learn more: https://docs.n8n.io/hosting/configuration/task-runners/
 - N8N_BLOCK_ENV_ACCESS_IN_NODE -> The default value of N8N_BLOCK_ENV_ACCESS_IN_NODE will be changed from false to true in a future version. If you need to access environment variables from the Code Node or from expressions, please set N8N_BLOCK_ENV_ACCESS_IN_NODE=false. Learn more: https://docs.n8n.io/hosting/configuration/environment-variables/security/

[license SDK] Skipping renewal on init: renewOnInit is disabled in config
[license SDK] Skipping renewal on init: autoRenewEnabled is disabled in config
[license SDK] Skipping renewal on init: license cert is not initialized
Version: 1.111.0
Webhook listener waiting for requests.
 - N8N_RUNNERS_ENABLED -> Running n8n without task runners is deprecated. Task runners will be turned on by default in a future version. Please set `N8N_RUNNERS_ENABLED=true` to enable task runners now and avoid potential issues in the future. Learn more: https://docs.n8n.io/hosting/configuration/task-runners/
 - N8N_BLOCK_ENV_ACCESS_IN_NODE -> The default value of N8N_BLOCK_ENV_ACCESS_IN_NODE will be changed from false to true in a future version. If you need to access environment variables from the Code Node or from expressions, please set N8N_BLOCK_ENV_ACCESS_IN_NODE=false. Learn more: https://docs.n8n.io/hosting/configuration/environment-variables/security/

[license SDK] Skipping renewal on init: license cert is not initialized
Version: 1.111.0

Editor is now accessible via:
http://n8n.kurremkarmerruk.local
(node:6) [DEP0060] DeprecationWarning: The `util._extend` API is deprecated. Please use Object.assign() instead.
(Use `node --trace-deprecation ...` to show where the warning was created)

When I researched this issue, both with AI tools and by hand, everything pointed to the reverse proxy, but I was unable to find a clear solution.
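The reverse-proxy theory is plausible: with `N8N_PUSH_BACKEND=websocket`, each editor tab opens its push channel via an HTTP/1.1 Upgrade handshake, and any hop that drops the `Upgrade`/`Connection` headers kills it, which surfaces exactly as the ‘Connection lost’ banner. A minimal sketch of the request involved (the `/rest/push` path and exact header set are my assumptions, not an official spec):

```python
import base64
import os

def build_upgrade_request(host: str, path: str = "/rest/push") -> str:
    """Build the HTTP/1.1 request a browser sends to open a WebSocket.

    Every proxy hop must forward the Upgrade header and re-add the
    hop-by-hop Connection header, or the handshake never reaches n8n
    and the editor reports "Connection lost".
    """
    # Random 16-byte nonce, base64-encoded, as required by RFC 6455.
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"       # the header proxies most often strip
        "Connection: Upgrade\r\n"      # hop-by-hop: must be set per hop
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )

print(build_upgrade_request("n8n.kurremkarmerruk.local"))
```

This is why the `configuration-snippet` annotation above sets `proxy_set_header Upgrade` and `proxy_set_header Connection "upgrade"`: nginx is HTTP/1.1 to the upstream but does not forward those two headers by default.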

My experiments are currently contained within the above YAML file.

I would be very grateful if you could look at the YAML file itself rather than just sending a link. I think I’ve gone blind to this issue after staring at it for a month.


Hi all,

Today I looked into Istio and set it up as follows.

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
✗ helm search repo istio
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/wavefront-adapter-for-istio     2.0.6           0.1.5           DEPRECATED Wavefront Adapter for Istio is an ad...
istio/istiod                            1.27.1          1.27.1          Helm chart for istio control plane                
istio/istiod-remote                     1.23.6          1.23.6          Helm chart for a remote cluster using an extern...
istio/ambient                           1.27.1          1.27.1          Helm umbrella chart for ambient                   
istio/base                              1.27.1          1.27.1          Helm chart for deploying Istio cluster resource...
istio/cni                               1.27.1          1.27.1          Helm chart for istio-cni components               
istio/gateway                           1.27.1          1.27.1          Helm chart for deploying Istio gateways           
istio/ztunnel                           1.27.1          1.27.1          Helm chart for istio ztunnel components   

CRD installation.

helm show values istio/base > istio-base.yaml
helm upgrade --install istio-base istio/base --namespace istio-system --create-namespace  --values istio-base.yaml
✗ kubectl get crd -oname | grep --color=never 'istio.io'
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.security.istio.io
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/peerauthentications.security.istio.io
customresourcedefinition.apiextensions.k8s.io/proxyconfigs.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/requestauthentications.security.istio.io
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/sidecars.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/telemetries.telemetry.istio.io
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/wasmplugins.extensions.istio.io
customresourcedefinition.apiextensions.k8s.io/workloadentries.networking.istio.io
customresourcedefinition.apiextensions.k8s.io/workloadgroups.networking.istio.io

Istio Discovery installation

helm show values istio/istiod > istio-istiod.yaml
helm upgrade --install istio-istiod istio/istiod --namespace istio-system --create-namespace  --values istio-istiod.yaml --wait
✗ kgp -n istio-system 
NAME                      READY   STATUS    RESTARTS   AGE
istiod-868755b6bf-rzlb9   1/1     Running   0          4m4s

✗ kg svc -n istio-system 
NAME     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
istiod   ClusterIP   10.0.177.230   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   4m26s

Istio Gateway installation

helm show values istio/gateway > istio-gateway.yaml
helm upgrade --install istio-ingressgateway istio/gateway --namespace istio-ingress --create-namespace  --values istio-gateway.yaml --wait
✗ kg svc -n istio-ingress 
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.0.181.159   20.238.227.197   15021:31256/TCP,80:31053/TCP,443:31106/TCP   65s

✗ kg pod -n istio-ingress
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-5b86b655cc-ghgwq   1/1     Running   0          70s

Finally, I created a Gateway and a VirtualService for the external connection.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: n8n-gateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "n8n.kurremkarmerruk.local"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: n8n-virtualservice
  namespace: n8n-ns
spec:
  hosts:
  - "n8n.kurremkarmerruk.local"
  gateways:
  - istio-ingress/n8n-gateway
  http:
  - route:
    - destination:
        host: system-n8n.n8n-ns.svc.cluster.local
        port:
          number: 80

Now the ‘Connection lost’ error is gone.

The problem has been resolved, but if anyone can identify what was wrong in the nginx-ingress setup, please share it so this thread can serve as a solution for others too.
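One unverified guess for anyone comparing against the original values file: ingress-nginx handles the WebSocket Upgrade automatically for claimed Ingresses, so the first thing to check is whether `className: "nginx-ingress"` actually matches a class listed by `kubectl get ingressclass` (the controller usually registers plain `nginx`), and whether the controller allows `configuration-snippet` annotations at all (they are disabled by default in recent ingress-nginx releases). A minimal shape that is commonly reported to work, assuming the default class name:

```yaml
ingress:
  enabled: true
  className: "nginx"   # must match `kubectl get ingressclass`, not the Helm release name
  annotations:
    # Long-lived push connections: keep the proxy from timing them out.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```

This is a sketch of what I would test first, not a confirmed diagnosis of the original setup.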
