Describe the problem/error/question
I have a queue mode setup running on Kubernetes.
Every time I deploy an update or change environment variables on the deployment, my main process pod is deleted and then recreated, causing 30-60 seconds of downtime.
To overcome this, I would like to run multiple instances of the main process (similar to what is described in Configuring queue mode | n8n Docs), but for a different purpose.
The problem is that I have a persistent volume that can only be attached to one Kubernetes pod at a time, so if I create a second (or more) pod, it cannot attach the volume and crashes.
So: I want to avoid this issue by not using persistent volumes at all, rather than trying to find a way to connect multiple pods to the same volume.
Note: this persistent volume is mounted at /home/node/.n8n, as described in the docs.
I know that the encryption key is generated as a file in there, but since I am already using queue mode, I have that key in environment variables on all the pods, so that should not be an issue. Can I just remove the volume and it should work? Am I missing something?
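For reference, a minimal sketch of the change I have in mind, assuming the key lives in a Secret named `n8n-secret` under a key called `encryptionKey` (both names are hypothetical), Postgres stays the store of record, and binary data is not in filesystem mode (which would also live under `.n8n`):

```yaml
# Hypothetical fragment of the main container spec: the PVC mount is gone
# and the encryption key is injected from a Secret instead of being read
# from /home/node/.n8n/config.
env:
  - name: N8N_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: n8n-secret     # assumed Secret name
        key: encryptionKey   # assumed key inside the Secret
# volumeMounts entry for /home/node/.n8n removed entirely,
# along with the n8n-claim0 volume and the volume-permissions initContainer
```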
main-deployment.yml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: chronos-n8n
  name: chronos-n8n
  namespace: chronos
spec:
  replicas: 1
  selector:
    matchLabels:
      service: chronos-n8n
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: chronos-n8n
    spec:
      initContainers:
        - name: volume-permissions
          image: busybox:1.36
          command: ["sh", "-c", "chown 1000:1000 /data"]
          volumeMounts:
            - name: n8n-claim0
              mountPath: /data
      containers:
        - command:
            - /bin/sh
          args:
            - -c
            - sleep 5; n8n start
          envFrom:
            - configMapRef:
                name: n8n-config
          env:
            - name: DB_POSTGRESDB_HOST
              value: "${DB_POSTGRESDB_HOST}"
            - name: DB_POSTGRESDB_DATABASE
              value: "${DB_POSTGRES_DATABASE}"
            - name: DB_POSTGRESDB_USER
              value: "${N8N_USER}"
            - name: DB_POSTGRESDB_PASSWORD
              value: "${DB_POSTGRESDB_PASSWORD}"
            - name: WEBHOOK_URL
              value: "${WEBHOOK_URL}"
          image: $IMAGE_NAME
          name: n8n
          ports:
            - containerPort: 5678
          readinessProbe:
            httpGet:
              path: /healthz
              port: 5678
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              memory: "12500Mi"
            limits:
              memory: "12500Mi"
          volumeMounts:
            - mountPath: /home/node/.n8n
              name: n8n-claim0
      restartPolicy: Always
      volumes:
        - name: n8n-claim0
          persistentVolumeClaim:
            claimName: n8n-claim0
        - name: n8n-secret
          secret:
            secretName: n8n-secret
        - name: postgres-secret
          secret:
            secretName: postgres-secret
```
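A side note on the 30-60 second gap: with the PVC gone, nothing forces `strategy: Recreate` anymore, so even with a single replica the deploy downtime could be avoided with a rolling update. A sketch (note that actually running more than one main instance requires n8n's multi-main setup, per the docs):

```yaml
# Hypothetical strategy fragment for the main Deployment once no PVC is
# mounted: the new pod must pass its readiness probe before the old one
# is terminated, so there is no gap during a deploy.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start the replacement pod first
      maxUnavailable: 0  # keep the old pod serving until the new one is Ready
```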
worker-deployment.yml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: chronos-n8n-worker
  name: chronos-n8n-worker
  namespace: chronos
spec:
  replicas: 2 # You can adjust this based on your workload needs
  selector:
    matchLabels:
      service: chronos-n8n-worker
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: chronos-n8n-worker
    spec:
      containers:
        - command:
            - /bin/sh
          args:
            - -c
            - sleep 5; n8n worker
          envFrom:
            - configMapRef:
                name: n8n-config
          image: $IMAGE_NAME
          name: n8n-worker
          ports:
            - containerPort: 5678
          readinessProbe:
            httpGet:
              path: /healthz
              port: 5678
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2"
      initContainers:
        - name: task-runner
          image: n8nio/runners:1.113.2
          restartPolicy: Always
          env:
            - name: N8N_RUNNERS_AUTH_TOKEN
              value: "${N8N_RUNNERS_AUTH_TOKEN}"
      restartPolicy: Always
      volumes:
        - name: n8n-secret
          secret:
            secretName: n8n-secret
        - name: postgres-secret
          secret:
            secretName: postgres-secret
```
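The workers never mounted the PVC, so `strategy: Recreate` here also stops every worker at once during a deploy. A hedged sketch of the same rolling pattern for the worker Deployment, plus a grace period so in-flight jobs can finish (the 60-second value is an assumption, not something from my current setup):

```yaml
# Hypothetical fragment for the worker Deployment: roll workers one at a
# time so at least one keeps draining the queue during a rollout.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      terminationGracePeriodSeconds: 60  # give running executions time to finish
```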
Information on your n8n setup
- n8n version: 1.113.2
- Database (default: SQLite): postgres
- n8n EXECUTIONS_PROCESS setting (default: own, main): N/A
- Running n8n via (Docker, npm, n8n cloud, desktop app): k8s
- Operating system: linux