Can we scale the replicas of workers along with the --concurrency flag?

Can we scale the number of worker replicas along with the --concurrency flag, or should we scale workers with the concurrency flag only?

Can anyone please answer this? Maybe @MutedJam :pray:

Hi @Pooja, I am not exactly familiar with cluster deployments, I am afraid, so this would be one for @krynble to confirm for certain.

My understanding is that --concurrency=n simply specifies the number of jobs a single worker would take on rather than controlling replicasets.
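For example, starting a worker like this (the value 10 is purely illustrative) would let that single worker process pick up to 10 jobs at a time:

```sh
# Start one n8n worker that handles up to 10 jobs in parallel
n8n worker --concurrency=10
```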

Hey @Pooja, these are two different behaviors. Deploying more worker instances allows n8n to make better use of your resources (since JavaScript is single-threaded, running roughly one worker container per CPU core makes sense, for instance).

The concurrency flag dictates how many jobs each worker will handle simultaneously. Depending on the type of workload you’re processing, you might want to increase this number.

You should take into account how much memory your workflows use (are you processing files? If so, memory can be an issue; otherwise it’s rarely a problem). You should also consider how CPU-intensive your workflows are.

If you are running intensive data transformations, increasing concurrency might hurt performance, but if you are running mostly IO operations (communicating with APIs, reading from or writing to databases), increasing concurrency can actually help improve throughput.
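To combine both knobs, here is a minimal sketch assuming a Docker Compose queue-mode setup (the service name and concurrency value are illustrative, and the Redis, database, and encryption-key environment settings are omitted):

```yaml
# Illustrative sketch only: assumes queue mode is already configured
# (Redis, shared database, shared encryption key).
services:
  n8n-worker:
    image: n8nio/n8n
    command: worker --concurrency=10   # jobs each worker handles in parallel
```

You would then control the number of replicas separately, e.g. with something like `docker compose up -d --scale n8n-worker=3`, while `--concurrency` stays a per-worker setting.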

I hope this helps you understand how each scaling approach yields different results.


Thanks a lot @krynble :slight_smile: