Scaling n8n: pods

Hello! I have a self-hosted n8n app deployed with Plural.sh.
The stack is Terraform and Kubernetes (Helm), plus Redis and PostgreSQL.

I’m having trouble getting the app to scale. If I have more than, say, 50 concurrent workflows running, they seem to stay blocked indefinitely.

Here is my Helm values file. I’m unsure if this is how it’s supposed to look.

autoscaling:
  enabled: true
  maxReplicas: 100
  minReplicas: 10
  targetCPUUtilizationPercentage: 80
n8n:
  autoscaling:
    enabled: true
    maxReplicas: 100
    minReplicas: 10
    targetCPUUtilizationPercentage: 80
  env:
  - name: EXECUTIONS_PROCESS
    value: own
  - name: EXECUTIONS_MODE
    value: queue
  - name: EXECUTIONS_TIMEOUT
    value: "3600"
  postgres:
    replicas: 1
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 100Mi
    storage:
      size: 25Gi
  resources:
    limits:
      memory: 30Gi
    requests:
      memory: 2Gi
  scaling:
    enabled: true
postgres:
  replicas: 1
  resources:
    limits:
      cpu: "2"
      memory: 1.5Gi
    requests:
      cpu: 100m
      memory: 100Mi
  storage:
    size: 1000Gi
webhook:
  autoscaling:
    enabled: true
    maxReplicas: 50
    minReplicas: 5
    targetCPUUtilizationPercentage: 80
worker:
  autoscaling:
    enabled: true
    maxReplicas: 150
    minReplicas: 15
    targetCPUUtilizationPercentage: 80

The pods in the n8n namespace:

The nodes:

Blocked executions:

I’m unsure whether Redis is actually being used, or even whether this config is applied at all, but I’m not seeing any pod scaling.
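
From what I’ve read in the n8n docs, queue mode needs the Redis connection passed to the n8n pods through the QUEUE_BULL_REDIS_* variables. My rough guess at what that would look like on top of my values above (the Redis service name is just a guess on my part, and I haven’t verified these keys against the chart):

n8n:
  env:
  - name: EXECUTIONS_MODE
    value: "queue"
  - name: QUEUE_BULL_REDIS_HOST
    value: "n8n-redis-master"   # guessing at the Redis service name created by the chart
  - name: QUEUE_BULL_REDIS_PORT
    value: "6379"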

What can I do?

Hi @Lesterpaintstheworld

Do you have any workers started?
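
In queue mode the main n8n instance only pushes executions onto the Redis queue; nothing actually runs until separate worker processes (n8n worker) pick the jobs up from the same Redis and write results to the same database. Stripped down to plain Kubernetes, a worker deployment amounts to something like the sketch below — illustrative only, not the chart’s actual template, and the Redis service name is assumed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: n8n-worker
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
      - name: n8n-worker
        image: n8nio/n8n
        args: ["worker"]                  # equivalent of running `n8n worker`
        env:
        - name: EXECUTIONS_MODE
          value: "queue"
        - name: QUEUE_BULL_REDIS_HOST
          value: "n8n-redis-master"       # assumed Redis service name
        - name: QUEUE_BULL_REDIS_PORT
          value: "6379"
        # plus the same database (DB_*) settings as the main n8n pod

If no worker pods are running, or they can’t reach Redis, executions will just sit in the queue.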

It looks like 2:

I think my configuration is not working at all:

These workflows have been running for 200 minutes, even though the config sets EXECUTIONS_TIMEOUT to 3600 seconds (one hour).
