AI Latency and Performance

Hey guys, I first just want to say how much I love n8n, and the fact that you’ve now integrated LangChain is a dream. I really want this to be the heart (or brain) of my stack for building AI agents. My only hesitation is whether n8n is cut out for the task. My two main concerns are:

  1. Latency. When actually shipping chatbots or conversational agents, we’ll want to minimize latency so the interaction feels fluid and we don’t lose people.
  2. Scalability. I’m wondering whether my workflows can actually sustain hundreds of concurrent users.

I know n8n wasn’t built with this purpose in mind, but it seems clear to me that this use case may be one where n8n could really shine and enter a new stratosphere. I’m just wondering whether any attention is being given to addressing these concerns.


It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @lhyphendixon, in general n8n is quite scalable, provided there are sufficient resources, of course. We’re also working on supporting multiple main instances to further increase your options here.
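For a sense of what scaling out looks like today, here’s a minimal sketch of n8n’s queue mode under Docker Compose. The image tags, hostnames, and replica count are placeholders, and the shared Postgres database that queue mode requires (`DB_TYPE=postgresdb` plus the `DB_POSTGRESDB_*` settings on every instance) is omitted for brevity:

```yaml
# Minimal queue-mode sketch: one main instance plus horizontally
# scalable workers, coordinated through Redis.
services:
  redis:
    image: redis:7

  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue        # hand executions off to workers via Redis
      - QUEUE_BULL_REDIS_HOST=redis
    ports:
      - "5678:5678"                  # UI, webhooks, and API stay on the main instance
    depends_on:
      - redis

  n8n-worker:
    image: n8nio/n8n
    command: worker                  # start as a worker instead of a main instance
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - redis
    deploy:
      replicas: 2                    # add workers as concurrent load grows
```

Since workers pull executions from the Redis queue, you can add replicas without touching the main instance.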

That said, I am not sure if there are any load tests for LangChain specifically. @oleg do you have insights on this?
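In the meantime, if you want to measure latency under concurrent load yourself, a rough sketch like the one below can give you p50/p95 numbers against a webhook-triggered chat workflow. The webhook path and traffic numbers here are made up; substitute your own:

```python
# Rough load-test sketch for a webhook-triggered n8n workflow.
# Assumes a Webhook node exposed at WEBHOOK_URL (hypothetical path)
# that responds with the agent's reply. Requires: pip install aiohttp
import asyncio
import time

import aiohttp

WEBHOOK_URL = "http://localhost:5678/webhook/chat"  # placeholder webhook path
CONCURRENT_USERS = 100
REQUESTS_PER_USER = 5

async def simulate_user(session: aiohttp.ClientSession, latencies: list[float]) -> None:
    # Each simulated user sends a few sequential chat messages.
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        async with session.post(WEBHOOK_URL, json={"message": "hello"}) as resp:
            await resp.text()
        latencies.append(time.perf_counter() - start)

async def main() -> None:
    latencies: list[float] = []
    async with aiohttp.ClientSession() as session:
        # Fire all simulated users concurrently.
        await asyncio.gather(
            *(simulate_user(session, latencies) for _ in range(CONCURRENT_USERS))
        )
    latencies.sort()
    p50 = latencies[len(latencies) // 2]
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"requests: {len(latencies)}  p50: {p50:.2f}s  p95: {p95:.2f}s")

asyncio.run(main())
```

Run it while watching your instance’s CPU and memory to see where the bottleneck actually is; bear in mind that in most chat workloads the model call itself tends to dominate end-to-end latency, regardless of the orchestration layer.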