Need help scaling n8n: executions bottleneck

Hello team! I have a self-hosted n8n instance running on a Kubernetes cluster. My problem is that beyond a certain number of concurrent executions, everything becomes extremely slow and executions stop progressing. It feels like the queue is the bottleneck, although I am not certain.

What could I do to help with this?

Here is a screenshot of the executions:

Here are my nodes, which seem to handle everything just fine:

Here are the pods:

Hey @Lesterpaintstheworld, for the workers, have you set the concurrency command-line argument in the pod spec? The default is 10, and with long-running executions bottlenecks are likely.
With CPU usage that low, there is definitely plenty of room to bump the concurrency up to a much higher value.
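
For reference, on a plain Kubernetes worker Deployment the flag goes on the container command, roughly like this (a minimal sketch, not taken from your chart; the container name, image tag, and Redis host are placeholders to adjust to your setup):

      containers:
        - name: n8n-worker                # hypothetical container name
          image: n8nio/n8n:latest         # pin to the version you actually run
          command: ["n8n"]
          args: ["worker", "--concurrency=50"]   # default is 10 if omitted
          env:
            - name: EXECUTIONS_MODE       # workers only make sense in queue mode
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis"              # hypothetical Redis service name

The only essential part is that the worker process is started as n8n worker --concurrency=<n>; how you get that argument into the pod spec depends on the chart you deploy with.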


@netroy How can I set that properly in my Helm configuration? I don’t know where to find the proper way to structure the YAML.

n8n:
  n8n:
    concurrency: 100
    scaling:
      webhook:
        count: 10
      worker:
        concurrency: 100
        count: 30
    webhookResources:
      limits:
        cpu: 1
      requests:
        cpu: 50m
    workerResources:
      limits:
        cpu: 1
      requests:
        cpu: 50m

What Helm chart are you using?

@netroy Plural.sh:

plural-artifacts/n8n at main · pluralsh/plural-artifacts (github.com)

Unfortunately, I have no idea what the YAML should look like.

@Lesterpaintstheworld I’m not familiar with that repo, but from a quick look it doesn’t seem like that chart allows customizing concurrency.

Here is the chart that I’ve seen used the most, and it offers a lot more customization, including concurrency.
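
With a chart that does expose queue-mode scaling, the override tends to look something like this (key names are illustrative only; every chart structures its values differently, so check that chart's own values.yaml before copying anything):

    # Hypothetical values override for a chart that exposes worker scaling.
    # Treat this as the general shape, not a drop-in file.
    scaling:
      enabled: true
      worker:
        count: 10            # number of worker pods
        concurrency: 50      # passed through to `n8n worker --concurrency`
      redis:
        enabled: true        # queue mode requires a Redis instance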


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.