How do you scale an AI chatbot to be production ready?

I’m curious about the maximum number of concurrent connections per resource: is it limited by RAM, CPU, or n8n itself, and how do you determine that?

For example, with 100 concurrent users hitting a Pinecone AI agent chatbot, how much CPU, RAM, and I/O would be needed to handle that scenario?

Would anyone be willing to share their solutions?

Thank you very much.

Hi,

This is a broad question and depends largely on what you want to do.

I don’t think there is a ready answer for that (some people may have experience, but it might not translate directly to your use case). The only thing you can really do is build a small-scale version, then test it and scale it up, for example with a quick load-test script like the sketch below. Only then will you understand which component becomes the bottleneck, which is better than just reading numbers elsewhere and making wrong assumptions.
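
To make that concrete, here is a minimal sketch of such a load test, assuming the chatbot is exposed through an n8n Webhook node and accepts a JSON body with a `chatInput` field. The URL, payload shape, and counts are hypothetical examples you would need to adapt to your own workflow:

```python
# Minimal concurrency smoke test for an n8n chatbot webhook.
# WEBHOOK_URL and the "chatInput" payload field are hypothetical
# placeholders, not n8n defaults; adjust them to your workflow.
import asyncio
import time

import aiohttp

WEBHOOK_URL = "https://your-n8n-host/webhook/chatbot"  # hypothetical endpoint
CONCURRENT_USERS = 100      # simulated simultaneous users
REQUESTS_PER_USER = 5       # messages each simulated user sends


async def simulate_user(session: aiohttp.ClientSession, user_id: int) -> list[float]:
    """Send a few messages sequentially, like a real user, and record latencies."""
    latencies = []
    for i in range(REQUESTS_PER_USER):
        payload = {"chatInput": f"test message {i} from user {user_id}"}
        start = time.perf_counter()
        try:
            async with session.post(
                WEBHOOK_URL, json=payload, timeout=aiohttp.ClientTimeout(total=60)
            ) as resp:
                await resp.text()  # drain the body so the connection can be reused
                latencies.append(time.perf_counter() - start)
        except Exception as exc:
            print(f"user {user_id} request {i} failed: {exc}")
    return latencies


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(
            *(simulate_user(session, uid) for uid in range(CONCURRENT_USERS))
        )
    latencies = sorted(lat for user in results for lat in user)
    if latencies:
        p50 = latencies[len(latencies) // 2]
        p95 = latencies[int(len(latencies) * 0.95)]
        print(f"completed: {len(latencies)} requests")
        print(f"median latency: {p50:.2f}s, p95: {p95:.2f}s")


if __name__ == "__main__":
    asyncio.run(main())
```

While it runs, watch CPU and RAM on the n8n host (e.g. `htop` or `docker stats`) and raise `CONCURRENT_USERS` until latency or errors climb; whichever resource saturates first is your bottleneck.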

You can use the following as a reference:

Regards,
J.


Yeah, I think this is about to be the dumb question of the thread. I’m planning to test it on ECS, but I was just wondering if someone has experience running it as a chatbot-as-a-service on a few VPSes.
