Horizontal or Vertical scaling questions

Hi all :wave:

I am revisiting n8n after learning about its scaling capability. I watched the video in which Tanay and Omar demo queue mode on a local machine. The video was very good, but a few parts were missing.
My questions:

  1. The documentation indicates a couple of options for cloud server deployment. I am particularly looking at DigitalOcean and AWS EKS. If an instance of n8n is deployed on a fairly large DigitalOcean droplet (4 CPU / 8 GB), will the demo method work well for scaling vertically, i.e. spinning up 7 Docker containers (main + Redis + Postgres + 3 workers + 1 webhook)? Or should I go the horizontal route of 7 small droplets, one droplet per service?

  2. I understand that EKS is what the official documentation covers for AWS installation. Would you suggest it as the best route for scaling? I was leaning towards ECS over EKS because of the additional cost, as this is just an evaluation project for a potential production deployment.

  3. I am fairly comfortable with server deployment but definitely not knowledgeable about Kubernetes, though the idea of tinkering with it is interesting. I understand there is a paid cloud option, but the limits of 25 active workflows and 60k executions feel a bit constraining, even though I don't think I will hit them soon. Although n8n has some learning curve, I would really like to explore deploying n8n (paid plan) as a replacement for one of our services. Let me know what our options are. If possible, PM me the enterprise plan details.
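For reference, the 7-container layout from question 1 can be sketched as a single docker-compose file. This is only a hedged sketch: the image name, the `EXECUTIONS_MODE`/`QUEUE_BULL_REDIS_HOST`/`DB_*` environment variables, and the `worker`/`webhook` subcommands follow the n8n queue-mode docs at the time of writing, and the passwords and encryption key are placeholders you must replace.

```yaml
# Sketch of a vertical (single-host) queue-mode stack:
# main + redis + postgres + 3 workers + 1 webhook processor.
services:
  redis:
    image: redis:7

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: changeme        # placeholder

  main:
    image: docker.n8n.io/n8nio/n8n
    ports: ["5678:5678"]
    environment: &n8n-env
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: changeme   # placeholder
      N8N_ENCRYPTION_KEY: changeme       # must be identical on main, workers, webhook

  worker:
    image: docker.n8n.io/n8nio/n8n
    command: worker
    environment: *n8n-env
    deploy:
      replicas: 3                        # the 3 workers from the question

  webhook:
    image: docker.n8n.io/n8nio/n8n
    command: webhook
    environment: *n8n-env
```

The same service definitions translate to the horizontal route: each service becomes its own droplet, with the Redis/Postgres host names pointing at the other machines instead of local container names.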

Best Regards,


Hi @leonardchiu

I haven’t seen this video you are talking about, so will definitely watch that. :slight_smile:
But I do have some answers for you. I'm not saying everything I say is 100% correct, but I want to nudge you in the right direction.

1. The scaling setup is meant for multiple servers, to spread the load of n8n workflows. It is normally not necessary for small instances. I only use it when I know the instance is going to scale later on (or when it already needs it), to make things easier in the long run.
If 25 workflows is not something you are going to hit any time soon, you might be happier with a normal, non-queue-mode server. This of course depends on the number of executions and the data load.

My setup is normally on EC2 (when using AWS) with a Docker instance on there.
I like to develop and use custom nodes, so I am not sure you can actually use a container service. As far as I have seen, you cannot SSH into it or add persistent storage with a container service on AWS, which makes custom nodes a bit annoying, unless you create a custom image.

For a standard instance, I just put everything on one Docker instance, often giving it a bit more resources to make sure there is some headroom.
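A standard single-container setup like the one described above can be started with something like the following (a sketch in the style of the official quick-start; the volume name is arbitrary, and you would size the host with the extra headroom mentioned):

```shell
# One container, SQLite storage persisted in a named volume.
docker volume create n8n_data
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```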

For queue instances, I prefer to have a Helper server housing all the extra stuff like Redis, Postgres, and anything else you might want to use, like RabbitMQ. Using AWS database services etc. is of course also an option.
Next to that Helper server, we then have one server per main, worker, etc.
Make sure to have a minimum of 2 vCPUs per server, as we found that fewer than two can cause issues when running with larger datasets (a workflow on a worker taking a lot more memory than on main).
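One server-per-worker setup along those lines can be sketched as below. The host names are hypothetical stand-ins for the Helper server; `--concurrency` is the n8n worker flag that caps parallel executions per worker, and lowering it on small machines is one way to limit the memory pressure mentioned above (workers also need the same database settings and `N8N_ENCRYPTION_KEY` as main, omitted here for brevity):

```shell
# One worker container per worker server, pointed at the Helper server.
docker run -d --name n8n-worker \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis.helper.internal \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres.helper.internal \
  docker.n8n.io/n8nio/n8n \
  worker --concurrency=5
```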

2. Of course, there cannot be documentation for every option out there, which is probably why ECS isn't covered.

3. If you need more executions and/or workflows, I think you can ask the n8n team about pricing. So that should not stop you from going with the cloud offering. :slight_smile:

If you require any mentoring or support with the deployment and/or the development of workflows, you can of course hire an expert. It can save you a lot of time. :wink:
(you can of course PM me if you want to talk about this last part)


Thanks for the long reply.

Yes, you rightly pointed out that I probably don't need to scale much if I can't hit the limits of the 25-workflow / 60k-execution tier, but being able to set up an n8n instance with Postgres, ready to scale, is an appealing thought.

Setting up the standard instance is fine for me. I had a few running on VMs in my overkill NAS using Docker. It is the queue-instance setup that I am interested in.

Yes, you are right that I should probably go with the cloud offering if I have mission-critical production workflows, but currently I am just evaluating my options.

I’ll PM you to see if I can afford to save time. :wink:


Just making sure queue mode is actually needed.
There are plenty of users who set up queue mode when there is no reason for it. :wink:

I did not say you should go with cloud, but it is the easiest option. And the limits can probably be raised if needed, so you do not have to worry about the listed limits.
If/when the cloud starts offering the ability to use community nodes, this might actually be the best option for a lot of users.
