Hi everyone,
I’m new to n8n and I’m trying to build a chat workflow with multiple agents. The idea is this:
1. User sends a message – for example: “Hey, I want a simple static website about animals.”
2. Router Agent – an orchestrator that decides which agent should handle the message.
3. Analysis Agent – usually the first step. This agent picks up the conversation and asks the user a batch of clarifying questions (maybe 20–50) to pin down what they really want.
4. User answers – the answers are collected and stored.
5. Back to Router – once the answers are complete, the orchestrator sees that analysis is finished and forwards everything to the next agent (e.g., a “Writer” agent).
6. Next Agent – continues the process, adds details, or asks follow-up questions.
7. Final Output – at the end, the workflow should produce a single document that describes the idea in detail.
So in short: I want a workflow where the system itself decides which agent to use at each step, depending on the current state of the conversation.
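To make the routing I have in mind concrete, here is a rough sketch in plain Python (not n8n code; the state flags and agent names are my own invention for illustration):

```python
# Hypothetical sketch of the decision I want the Router Agent to make.
# The state flags ("analysis_complete", "draft_complete") are invented
# placeholders for whatever conversation state the workflow tracks.

def route(state: dict) -> str:
    """Pick the next agent based on the current conversation state."""
    if not state.get("analysis_complete"):
        return "Analysis Agent"   # still collecting clarifying answers
    if not state.get("draft_complete"):
        return "Writer Agent"     # analysis done, so draft the document
    return "Final Output"         # everything done, emit the final spec

state = {"analysis_complete": False, "draft_complete": False}
print(route(state))               # Analysis Agent

state["analysis_complete"] = True
print(route(state))               # Writer Agent
```

Each user message would re-run this decision, so the router always dispatches to exactly one agent per turn instead of looping.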
The problem:
I can’t get n8n to properly switch between the different “toolings” (agents). Instead, it always ends up in a loop on the first one (the Analysis agent). I’ve also searched for templates or examples, but couldn’t find anything close to this use case.
Since I’m completely new to n8n, I’m still struggling to understand some of the core concepts. Maybe I’m approaching this the wrong way?
Has anyone here tried something similar, or can point me in the right direction? Any ideas, examples, or best practices would be super helpful.
Thanks a lot in advance!

Here is the main workflow (chat trigger + RouterAgent):
{
"nodes": [
{
"parameters": {
"options": {
"responseMode": "responseNodes"
}
},
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.3,
"position": [
0,
-32
],
"id": "e8e88213-2852-4cd7-8460-dba411107d2a",
"name": "When chat message received",
"webhookId": "49090e61-6e9f-4b2e-bd01-3f2170626c4e"
},
{
"parameters": {
"model": {
"__rl": true,
"value": "gpt-5-nano",
"mode": "list",
"cachedResultName": "gpt-5-nano"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
224,
192
],
"id": "70e44adb-a1ee-4023-8e77-82fb1cfe7774",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "Rwnyo5FBwM12gw",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"contextWindowLength": 10
},
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"typeVersion": 1.3,
"position": [
352,
192
],
"id": "b747d97a-57e8-49cf-a048-cd13e4831823",
"name": "Simple Memory"
},
{
"parameters": {
"options": {
"systemMessage": "You are a strict routing agent.\nYour ONLY job is to call EXACTLY ONE tool and then return its result. Do NOT call a second tool.\nDo not think aloud or explain.\n\nReturn policy:\n- Return the tool's RAW OUTPUT as the FINAL user answer. Do not add any extra words.\n\nRouting policy:\n1) If the message is a new request without a confirmed, structured analysis (DoR), ALWAYS call the tool named \"Analyse\".\n\nHard constraints:\n- Call exactly one tool.\n- Do not add commentary.\n- After the tool returns, produce the output and then stop."
}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 2.2,
"position": [
288,
-32
],
"id": "22aa33c3-8e20-46bf-aa34-d0b73bf552c4",
"name": "RouterAgent"
},
{
"parameters": {
"message": "={{ JSON.parse($json[\"output\"])[0].output }}",
"waitUserReply": false,
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.chat",
"typeVersion": 1,
"position": [
848,
-32
],
"id": "5290e2d9-f561-4a48-9335-898df6fe0898",
"name": "Respond to Chat"
},
{
"parameters": {
"description": "Call this tool for analysis",
"workflowId": {
"__rl": true,
"value": "zHfNdCiv4N4TRvpH",
"mode": "list",
"cachedResultName": "SpecialAgent"
},
"workflowInputs": {
"mappingMode": "defineBelow",
"value": {
"systemMessage": "[Role]\nYou are “Analyse-Agent”, responsible for a complete, technology-agnostic requirements analysis for software projects.\nYour output is the SINGLE authoritative specification document for all subsequent work.\n\n[Goals]\n- Clarify functional and technical requirements until the Definition of Ready (DoR) is satisfied.\n- Enforce modularization, abstraction, and generalization (no vendor lock-in, no unnecessary coupling).\n- Produce a long, well-structured document with clear, testable requirements and interfaces.\n- Prepare the results so that Epics and Tickets (≤8h per ticket) can be derived directly.\n- If the application is autonomous: capture the user journey.\n\n[Constraints]\n- Do not implement or write code.\n- Do not lock to specific vendors/products (list options + selection criteria).\n- Avoid vague statements; use measurable criteria.\n- Avoid monoliths.\n\n[Workflow]\n1) Take the user input.\n2) Ask simple Yes/No questions incrementally until DoR is satisfied, then produce the spec.",
"chatInput": "={{ $('When chat message received').item.json.chatInput }}",
"sessionId": "={{ $('When chat message received').item.json.sessionId }}"
},
"matchingColumns": [],
"schema": [
{
"id": "chatInput",
"displayName": "chatInput",
"required": false,
"defaultMatch": false,
"display": true,
"canBeUsedToMatch": true,
"type": "string",
"removed": false
},
{
"id": "sessionId",
"displayName": "sessionId",
"required": false,
"defaultMatch": false,
"display": true,
"canBeUsedToMatch": true,
"type": "string",
"removed": false
},
{
"id": "systemMessage",
"displayName": "systemMessage",
"required": false,
"defaultMatch": false,
"display": true,
"canBeUsedToMatch": true,
"type": "string",
"removed": false
}
],
"attemptToConvertTypes": false,
"convertFieldsToString": false
}
},
"type": "@n8n/n8n-nodes-langchain.toolWorkflow",
"typeVersion": 2.2,
"position": [
528,
176
],
"id": "5f0bc4fb-212c-4766-8637-a8e727095078",
"name": "Call 'SpecialAgent'"
}
],
"connections": {
"When chat message received": {
"main": [
[
{
"node": "RouterAgent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "RouterAgent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Simple Memory": {
"ai_memory": [
[
{
"node": "RouterAgent",
"type": "ai_memory",
"index": 0
}
]
]
},
"RouterAgent": {
"main": [
[
{
"node": "Respond to Chat",
"type": "main",
"index": 0
}
]
]
},
"Call 'SpecialAgent'": {
"ai_tool": [
[
{
"node": "RouterAgent",
"type": "ai_tool",
"index": 0
}
]
]
}
},
"pinData": {},
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "89a8627981a9339ec17f12d85743502f5688a8519f4dd09473ae23b3e5566d3a"
}
}

And here is the sub-workflow (“SpecialAgent”) that the router calls as a tool:
{
"nodes": [
{
"parameters": {
"workflowInputs": {
"values": [
{
"name": "chatInput"
},
{
"name": "sessionId"
},
{
"name": "systemMessage"
}
]
}
},
"type": "n8n-nodes-base.executeWorkflowTrigger",
"typeVersion": 1.1,
"position": [
32,
-48
],
"id": "5c455797-2cbf-4b84-8f1e-7600a42d995a",
"name": "When Executed by Another Workflow"
},
{
"parameters": {
"promptType": "define",
"text": "={{ $json.chatInput }}",
"options": {
"systemMessage": "={{ $json.systemMessage }}"
}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 2.2,
"position": [
288,
-48
],
"id": "b9871e4b-0cdb-40a7-837d-d6f9e219f810",
"name": "AI Agent"
},
{
"parameters": {
"model": {
"__rl": true,
"value": "gpt-5-nano-2025-08-07",
"mode": "list",
"cachedResultName": "gpt-5-nano-2025-08-07"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
272,
176
],
"id": "f89c73d7-88e4-4512-966c-001143bb5d38",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "Rwnyo5FBwM12gw",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"contextWindowLength": 10
},
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"typeVersion": 1.3,
"position": [
400,
176
],
"id": "dc02fa78-3405-47bf-8259-a6bc53f9699e",
"name": "Simple Memory"
}
],
"connections": {
"When Executed by Another Workflow": {
"main": [
[
{
"node": "AI Agent",
"type": "main",
"index": 0
}
]
]
},
"AI Agent": {
"main": [
[]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Simple Memory": {
"ai_memory": [
[
{
"node": "AI Agent",
"type": "ai_memory",
"index": 0
}
]
]
}
},
"pinData": {},
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "89a8627981a9339ec17f12d85743502f5688a8519f4dd09473ae23b3e5566d3a"
}
}