n8n Pod Stuck in CrashLoopBackOff on Azure AKS

The n8n deployment in the n8n namespace is failing to start and is stuck in a CrashLoopBackOff state. The issue persists despite multiple restart attempts.

Environment:

Kubernetes Version: not recorded (retrievable via kubectl version)

Cloud Provider: Azure AKS

n8n Image Version: n8nio/n8n:latest

Database: PostgreSQL (reachable at postgres-service.n8n.svc.cluster.local)

Storage: PersistentVolumeClaim n8n-claim0

Deployment Details:

Deployment strategy: Recreate

Pod uses an init container (busybox:1.36) to set permissions on /data

n8n environment variables:

DB_TYPE=postgres

DB_POSTGRESDB_HOST=postgres-service.n8n.svc.cluster.local

DB_POSTGRESDB_PORT=5432

DB_POSTGRESDB_DATABASE=n8n

N8N_PROTOCOL=http

N8N_PORT=5678

N8N_LOG_LEVEL=debug
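
For reference, a minimal sketch of how this configuration might look in the Deployment manifest. The secret name postgres-secret and its keys are taken from the kubectl describe output later in this post; everything else mirrors the list above:

```yaml
# Sketch of the n8n container's env section, mirroring the variables above.
# The secret name and keys match the `kubectl describe deployment` output below.
env:
  - name: DB_TYPE
    value: postgres
  - name: DB_POSTGRESDB_HOST
    value: postgres-service.n8n.svc.cluster.local
  - name: DB_POSTGRESDB_PORT
    value: "5432"
  - name: DB_POSTGRESDB_DATABASE
    value: n8n
  - name: DB_POSTGRESDB_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: POSTGRES_NON_ROOT_USER
  - name: DB_POSTGRESDB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: POSTGRES_NON_ROOT_PASSWORD
  - name: N8N_PROTOCOL
    value: http
  - name: N8N_PORT
    value: "5678"
  - name: N8N_LOG_LEVEL
    value: debug
```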

Error Logs:

kubectl get pods -n n8n shows:

NAME                        READY   STATUS             RESTARTS        AGE
n8n-7bf5986bbb-2trxp        0/1     CrashLoopBackOff   6 (2m37s ago)   8m55s
postgres-6665f78f5f-2scgl   1/1     Running            1 (4h38m ago)   4h39m
kubectl describe pod n8n-7bf5986bbb-2trxp -n n8n shows:

State: Waiting - Reason: CrashLoopBackOff

Last State: Terminated - Reason: Error - Exit Code: 2

Possible Causes Investigated:

Database connection issues (verified PostgreSQL is running)

Storage mount permissions (init container runs successfully)

n8n misconfiguration (environment variables appear correct)

Steps to Reproduce:

Deploy n8n using Kubernetes with the above configuration.

Monitor pod status via kubectl get pods -n n8n.

Inspect logs using kubectl logs n8n-7bf5986bbb-2trxp -n n8n.

Expected Behavior:
The n8n pod should start successfully and be available.

Actual Behavior:
The pod repeatedly crashes with Exit Code 2.

Additional Context:
Any insights or troubleshooting suggestions would be greatly appreciated.

kubectl exec -it postgres-6665f78f5f-2scgl -n n8n -- psql -U admin -d n8n -h postgres-service.n8n.svc.cluster.local
Password for user admin:
psql (11.16 (Debian 11.16-1.pgdg90+1))
Type "help" for help.

n8n=# \dt
Did not find any relations.
n8n=# \dn
List of schemas
  Name  | Owner
--------+-------
 public | admin
(1 row)

n8n=# \l
                             List of databases
   Name    | Owner | Encoding |  Collate   |   Ctype    | Access privileges
-----------+-------+----------+------------+------------+-------------------
 n8n       | admin | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres  | admin | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | admin | UTF8     | en_US.utf8 | en_US.utf8 | =c/admin         +
           |       |          |            |            | admin=CTc/admin
 template1 | admin | UTF8     | en_US.utf8 | en_US.utf8 | =c/admin         +
           |       |          |            |            | admin=CTc/admin
(4 rows)

n8n=# exit
PS C:\Users\najmul> kubectl logs n8n-7bf5986bbb-2trxp -n n8n
Defaulted container "n8n" out of: n8n, volume-permissions (init)
2025-03-18T14:08:43.922Z | warn | Permissions 0644 for n8n settings file /home/node/.n8n/config are too wide. This is ignored for now, but in the future n8n will attempt to change the permissions automatically. To automatically enforce correct permissions now set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true (recommended), or turn this check off set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false. {"file":"instance-settings.js","function":"ensureSettingsFilePermissions"}
User settings loaded from: /home/node/.n8n/config
(node:1) Warning: TypeError
module: @oclif/[email protected]
task: findCommand (audit)
plugin: n8n
root: /usr/local/lib/node_modules/n8n
message: Cannot read properties of undefined (reading 'database')
See more details with DEBUG=*
(Use `node --trace-warnings ...` to show where the warning was created)
[... the identical TypeError warning ("Cannot read properties of undefined (reading 'database')") repeats for every remaining subcommand: base-command, execute-batch, execute, start, webhook, worker, db:revert, export:credentials, export:workflow, import:credentials, import:workflow, ldap:reset, license:clear, license:info, list:workflow, mfa:disable, update:workflow, user-management:reset ...]
 ›   Error: command start not found

PS C:\Users\najmul> kubectl logs n8n-7bf5986bbb-2trxp -n n8n >> log.txt

Defaulted container "n8n" out of: n8n, volume-permissions (init)
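
Since the same TypeError repeats for every oclif subcommand, the saved log can be summarized to confirm that no command resolves. A quick sketch, assuming the log was redirected to log.txt as above (a few sample lines stand in for the real file here):

```shell
# Extract the unique subcommand names from the repeated oclif warnings.
# In practice the file comes from: kubectl logs n8n-7bf5986bbb-2trxp -n n8n > log.txt
cat > log.txt <<'EOF'
task: findCommand (audit)
task: findCommand (start)
task: findCommand (audit)
EOF
grep -oE 'findCommand \([^)]+\)' log.txt | sort -u
# -> findCommand (audit)
# -> findCommand (start)
```

If every subcommand (including start) shows up here, the failure is in n8n's command loading itself rather than in any one command.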
PS C:\Users\najmul> kubectl describe deployment n8n -n n8n

Name:               n8n
Namespace:          n8n
CreationTimestamp:  Tue, 18 Mar 2025 14:17:36 +0530
Labels:             service=n8n
Annotations:        deployment.kubernetes.io/revision: 10
Selector:           service=n8n
Replicas:           1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:  service=n8n
  Init Containers:
   volume-permissions:
    Image:      busybox:1.36
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown 1000:1000 /data
    Environment:  <none>
    Mounts:
      /data from n8n-claim0 (rw)
  Containers:
   n8n:
    Image:      n8nio/n8n:latest
    Port:       5678/TCP
    Host Port:  0/TCP
    Command:
      n8n
      start
    Limits:
      memory:  2Gi
    Requests:
      memory:  1Gi
    Environment:
      DB_TYPE:                 postgres
      DB_POSTGRESDB_HOST:      postgres-service.n8n.svc.cluster.local
      DB_POSTGRESDB_PORT:      5432
      DB_POSTGRESDB_DATABASE:  n8n
      DB_POSTGRESDB_USER:      <set to the key 'POSTGRES_NON_ROOT_USER' in secret 'postgres-secret'>      Optional: false
      DB_POSTGRESDB_PASSWORD:  <set to the key 'POSTGRES_NON_ROOT_PASSWORD' in secret 'postgres-secret'>  Optional: false
      N8N_PROTOCOL:            http
      N8N_PORT:                5678
      N8N_LOG_LEVEL:           debug
    Mounts:
      /home/node/.n8n from n8n-claim0 (rw)
  Volumes:
   n8n-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  n8n-claim0
    ReadOnly:   false
   n8n-secret:
    Type:          Secret (a volume populated by a Secret)
    SecretName:    n8n-secret
    Optional:      false
  Node-Selectors:  <none>
  Tolerations:     CriticalAddonsOnly op=Exists
                   kubernetes.azure.com/scalesetpriority op=Exists
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  n8n-85db4b8db6 (0/0 replicas created), n8n-5bc69f5598 (0/0 replicas created), n8n-57b499f476 (0/0 replicas created), n8n-67f7d6bcf7 (0/0 replicas created), n8n-94c6868b9 (0/0 replicas created), n8n-8649977478 (0/0 replicas created), n8n-5f589f4775 (0/0 replicas created), n8n-5994b5c849 (0/0 replicas created), n8n-6bf998cd79 (0/0 replicas created)
NewReplicaSet:   n8n-7bf5986bbb (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  53m   deployment-controller  Scaled down replica set n8n-6bf998cd79 to 0 from 1
  Normal  ScalingReplicaSet  53m   deployment-controller  Scaled up replica set n8n-7bf5986bbb to 1
PS C:\Users\najmul> kubectl describe pod n8n-7bf5986bbb-2trxp -n n8n
Name:             n8n-7bf5986bbb-2trxp
Namespace:        n8n
Priority:         0
Service Account:  default
Node:             aks-agentpool-17642114-vmss000001/10.224.0.5
Start Time:       Tue, 18 Mar 2025 19:01:31 +0530
Labels:           pod-template-hash=7bf5986bbb
                  service=n8n
Annotations:      <none>
Status:           Running
IP:               10.244.0.161
IPs:
  IP:           10.244.0.161
Controlled By:  ReplicaSet/n8n-7bf5986bbb
Init Containers:
  volume-permissions:
    Container ID:  containerd://f625582654e0cd4c074c14bb552044804031e41018c573596c4d51f6e1362b7c
    Image:         busybox:1.36
    Image ID:      docker.io/library/busybox@sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      chown 1000:1000 /data
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 18 Mar 2025 19:01:36 +0530
      Finished:     Tue, 18 Mar 2025 19:01:36 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from n8n-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w99k6 (ro)
Containers:
  n8n:
    Container ID:  containerd://884dd12a90346799bf1d008c4f1a8ae9e5677748158ac29d89bb95d81024b74c
    Image:         n8nio/n8n:latest
    Image ID:      docker.io/n8nio/n8n@sha256:5288543ac4dc1ea7149a93e38a24989c913c9007dd2459f6c730ac247c4d958f
    Port:          5678/TCP
    Host Port:     0/TCP
    Command:
      n8n
      start
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 18 Mar 2025 19:54:08 +0530
      Finished:     Tue, 18 Mar 2025 19:54:10 +0530
    Ready:          False
    Restart Count:  15
    Limits:
      memory:  2Gi
    Requests:
      memory:  1Gi
    Environment:
      DB_TYPE:                 postgres
      DB_POSTGRESDB_HOST:      postgres-service.n8n.svc.cluster.local
      DB_POSTGRESDB_PORT:      5432
      DB_POSTGRESDB_DATABASE:  n8n
      DB_POSTGRESDB_USER:      <set to the key 'POSTGRES_NON_ROOT_USER' in secret 'postgres-secret'>      Optional: false
      DB_POSTGRESDB_PASSWORD:  <set to the key 'POSTGRES_NON_ROOT_PASSWORD' in secret 'postgres-secret'>  Optional: false
      N8N_PROTOCOL:            http
      N8N_PORT:                5678
      N8N_LOG_LEVEL:           debug
    Mounts:
      /home/node/.n8n from n8n-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w99k6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  n8n-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  n8n-claim0
    ReadOnly:   false
  n8n-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  n8n-secret
    Optional:    false
  kube-api-access-w99k6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             kubernetes.azure.com/scalesetpriority op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  53m                    default-scheduler  Successfully assigned n8n/n8n-7bf5986bbb-2trxp to aks-agentpool-17642114-vmss000001
  Normal   Pulled     53m                    kubelet            Container image "busybox:1.36" already present on machine
  Normal   Created    53m                    kubelet            Created container volume-permissions
  Normal   Started    53m                    kubelet            Started container volume-permissions
  Normal   Pulled     53m                    kubelet            Successfully pulled image "n8nio/n8n:latest" in 1.441s (1.441s including waiting). Image size: 201772341 bytes.
  Normal   Pulled     53m                    kubelet            Successfully pulled image "n8nio/n8n:latest" in 1.392s (1.392s including waiting). Image size: 201772341 bytes.
  Normal   Pulled     53m                    kubelet            Successfully pulled image "n8nio/n8n:latest" in 1.426s (1.426s including waiting). Image size: 201772341 bytes.
  Normal   Pulling    52m (x4 over 53m)      kubelet            Pulling image "n8nio/n8n:latest"
  Normal   Created    52m (x4 over 53m)      kubelet            Created container n8n
  Normal   Started    52m (x4 over 53m)      kubelet            Started container n8n
  Normal   Pulled     52m                    kubelet            Successfully pulled image "n8nio/n8n:latest" in 1.388s (1.388s including waiting). Image size: 201772341 bytes.
  Warning  BackOff    3m42s (x228 over 53m)  kubelet            Back-off restarting failed container n8n in pod n8n-7bf5986bbb-2trxp_n8n(4f74e1e3-a2c8-4046-9008-b35ae029d255)
PS C:\Users\najmul> kubectl get pods -n n8n
NAME                        READY   STATUS             RESTARTS        AGE
n8n-7bf5986bbb-2trxp        0/1     CrashLoopBackOff   15 (2m7s ago)   54m
postgres-6665f78f5f-2scgl   1/1     Running            1 (5h24m ago)   5h25m
PS C:\Users\najmul>
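
One avenue worth checking, given the "Error: command start not found" message together with the n8nio/n8n:latest tag: each restart may pull a newer image, and a recent image may no longer be compatible with the explicit "n8n start" command override in the Deployment. A hedged sketch of the change to try (the tag 1.81.0 is only an illustrative example, not a verified-good version):

```yaml
# Hypothetical tweak to the n8n Deployment's container spec: pin the image
# instead of tracking :latest, and let the image's default entrypoint start
# n8n rather than overriding the command.
containers:
  - name: n8n
    image: n8nio/n8n:1.81.0   # example pinned tag; pick a known-good version
    # command: ["n8n", "start"]  # removed; the entrypoint already starts n8n
    ports:
      - containerPort: 5678
```

If the pod still crashes after pinning, kubectl logs --previous on the failing pod shows the output of the crashed attempt rather than the current restart, which helps separate startup errors from backoff noise.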

I just had a similar issue on AWS with Kubernetes and it turned out to be permissions. I had to create some extra roles. Don't know if that helps you, but it's worth a look.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.