Issue with parallel pattern when using AI Agent tool call

I implemented the parallel pattern as described in the n8n documentation (using callbacks). It works fine, except when I call the main flow using a tool that is called by an AI agent.

What happens is that the Respond to Webhook output is returned immediately back to the AI agent, so there is effectively no wait for the main flow to finish. I need the agent to wait for the workflow to finish before continuing.

What am I missing? Is this a bug, or is there a possible workaround?

The workflows:

{
“nodes”: [
{
“parameters”: {
“options”: {}
},
“type”: “@n8n/n8n-nodes-langchain.chatTrigger”,
“typeVersion”: 1.1,
“position”: [
0,
0
],
“id”: “cd5d4c62-b47b-4647-a313-ff5cf10a41f5”,
“name”: “When chat message received”,
“webhookId”: “5b8d6b75-6ab4-42a9-a9eb-005f2bb7161f”
},
{
“parameters”: {
“options”: {}
},
“type”: “@n8n/n8n-nodes-langchain.agent”,
“typeVersion”: 1.8,
“position”: [
220,
0
],
“id”: “6b3b8cdd-8441-4681-9787-0cffa597bc50”,
“name”: “AI Agent”
},
{
“parameters”: {
“model”: “gpt-4o-mini”,
“options”: {}
},
“type”: “@n8n/n8n-nodes-langchain.lmChatAzureOpenAi”,
“typeVersion”: 1,
“position”: [
260,
220
],
“id”: “942c573e-1f95-4108-81b5-69ee54318971”,
“name”: “Azure OpenAI Chat Model”,
“credentials”: {
“azureOpenAiApi”: {
“id”: “iKY0wB5wcFToSq7t”,
“name”: “Azure gpt-40-mini”
}
}
},
{
“parameters”: {
“name”: “pp_tool”,
“workflowId”: {
“__rl”: true,
“value”: “iue5rBXxnFXZd8MK”,
“mode”: “list”,
“cachedResultName”: “Parallel Exec with wait MAIN flow - template”
},
“workflowInputs”: {
“mappingMode”: “defineBelow”,
“value”: {},
“matchingColumns”: [],
“schema”: [],
“attemptToConvertTypes”: false,
“convertFieldsToString”: false
}
},
“type”: “@n8n/n8n-nodes-langchain.toolWorkflow”,
“typeVersion”: 2.1,
“position”: [
380,
220
],
“id”: “334d73cc-1568-47a0-8a45-89e2d8560be1”,
“name”: “Call n8n Workflow Tool”
}
],
“connections”: {
“When chat message received”: {
“main”: [
[
{
“node”: “AI Agent”,
“type”: “main”,
“index”: 0
}
]
]
},
“Azure OpenAI Chat Model”: {
“ai_languageModel”: [
[
{
“node”: “AI Agent”,
“type”: “ai_languageModel”,
“index”: 0
}
]
]
},
“Call n8n Workflow Tool”: {
“ai_tool”: [
[
{
“node”: “AI Agent”,
“type”: “ai_tool”,
“index”: 0
}
]
]
}
},
“pinData”: {},
“meta”: {
“instanceId”: “58aa91dfb81527f3ab9d46e751edfcf1e8801ddcdd145e7779c36f37dcdfa475”
}
}

{
“nodes”: [
{
“parameters”: {
“jsCode”: “// an example item for the loop/split out\nconst myData = [\n {id: 1, content: 'String one'},\n {id: 2, content: 'String two'}\n];\nreturn myData;”
},
“type”: “n8n-nodes-base.code”,
“typeVersion”: 2,
“position”: [
280,
0
],
“id”: “2b3aa72b-771a-45a6-8ab3-9a4182d95e2f”,
“name”: “Generate item with x objects”
},
{
“parameters”: {
“options”: {}
},
“type”: “n8n-nodes-base.splitInBatches”,
“typeVersion”: 3,
“position”: [
500,
0
],
“id”: “dc3135df-414b-4d94-a92c-d10f29df2b49”,
“name”: “Loop Over Items”
},
{
“parameters”: {
“resume”: “webhook”,
“httpMethod”: “POST”,
“responseMode”: “responseNode”,
“options”: {}
},
“type”: “n8n-nodes-base.wait”,
“typeVersion”: 1.1,
“position”: [
1340,
120
],
“id”: “04bbb998-967d-4b7d-b0ec-315acb04efa0”,
“name”: “Listen for subflow callback”,
“webhookId”: “1e99daa4-e050-4a46-aba5-8b68bcfc6e49”
},
{
“parameters”: {
“method”: “POST”,
“url”: “https://n8n-host/webhook/parallel-webhook-call”,
“sendHeaders”: true,
“headerParameters”: {
“parameters”: [
{
“name”: “=callbackurl”,
“value”: “={{ $execution.resumeUrl }}”
}
]
},
“sendBody”: true,
“bodyParameters”: {
“parameters”: [
{
“name”: “id”,
“value”: “={{ $json.id }}”
}
]
},
“options”: {}
},
“type”: “n8n-nodes-base.httpRequest”,
“typeVersion”: 4.2,
“position”: [
720,
160
],
“id”: “eccbaac8-94e0-48bb-ae6f-a435f6a3ac8b”,
“name”: “HTTP call subflow”
},
{
“parameters”: {
“jsCode”: “let result = $('Listen for subflow callback').first().json.body;\n\nlet json = $('test endResult cnt').first().json;\nif (!json.endResult) json.endResult = []; //init\n\njson.endResult.push(result);\n\nreturn [json];”
},
“type”: “n8n-nodes-base.code”,
“typeVersion”: 2,
“position”: [
1520,
120
],
“id”: “6928941d-f850-47ed-bf13-6c4b140b86ce”,
“name”: “Add execution result”
},
{
“parameters”: {
“conditions”: {
“options”: {
“caseSensitive”: true,
“leftValue”: “”,
“typeValidation”: “strict”,
“version”: 2
},
“conditions”: [
{
“id”: “99603d4a-4e96-4df9-b1bd-4f1cda6d8444”,
“leftValue”: “={{ $json.endResult.length }}”,
“rightValue”: “={{ $('Generate item with x objects').all().length }}”,
“operator”: {
“type”: “number”,
“operation”: “equals”
}
}
],
“combinator”: “and”
},
“options”: {}
},
“type”: “n8n-nodes-base.if”,
“typeVersion”: 2.2,
“position”: [
1120,
40
],
“id”: “b2419ac4-fdea-4b7b-a7a2-e15903ced7da”,
“name”: “test endResult cnt”
},
{
“parameters”: {
“options”: {}
},
“type”: “n8n-nodes-base.respondToWebhook”,
“typeVersion”: 1.1,
“position”: [
1700,
120
],
“id”: “f6d4ce81-a994-45b1-abac-e45518e98b7a”,
“name”: “Ack subflow”
},
{
“parameters”: {},
“type”: “n8n-nodes-base.wait”,
“typeVersion”: 1.1,
“position”: [
1700,
-120
],
“id”: “52dcc50f-08be-4648-9c63-2f416b6afca5”,
“name”: “Yes, we made it”,
“webhookId”: “af3a03a0-5276-4358-b21d-4d7975171265”
},
{
“parameters”: {
“content”: “## Call parallel subflows \nUses http call node”,
“height”: 640,
“width”: 940
},
“type”: “n8n-nodes-base.stickyNote”,
“typeVersion”: 1,
“position”: [
-40,
-200
],
“id”: “b50ada1d-4abf-4b57-b364-cf12e74ebe04”,
“name”: “Sticky Note”
},
{
“parameters”: {
“content”: "## Wait for subflows to report back in\n
jib”,
“height”: 640,
“width”: 940
},
“type”: “n8n-nodes-base.stickyNote”,
“typeVersion”: 1,
“position”: [
1000,
-200
],
“id”: “a197a4ec-04b7-4a78-a8bf-937b43871e7d”,
“name”: “Sticky Note1”
},
{
“parameters”: {
“options”: {}
},
“type”: “n8n-nodes-base.respondToWebhook”,
“typeVersion”: 1.1,
“position”: [
1920,
-120
],
“id”: “8c14a086-7389-4bcc-8166-b1c080707231”,
“name”: “Respond to Webhook”
},
{
“parameters”: {
“inputSource”: “passthrough”
},
“type”: “n8n-nodes-base.executeWorkflowTrigger”,
“typeVersion”: 1.1,
“position”: [
0,
0
],
“id”: “efc1986a-12c6-46bb-a013-22e3c528867a”,
“name”: “When Executed by Another Workflow”
}
],
“connections”: {
“Generate item with x objects”: {
“main”: [
[
{
“node”: “Loop Over Items”,
“type”: “main”,
“index”: 0
}
]
]
},
“Loop Over Items”: {
“main”: [
[
{
“node”: “test endResult cnt”,
“type”: “main”,
“index”: 0
}
],
[
{
“node”: “HTTP call subflow”,
“type”: “main”,
“index”: 0
}
]
]
},
“Listen for subflow callback”: {
“main”: [
[
{
“node”: “Add execution result”,
“type”: “main”,
“index”: 0
}
]
]
},
“HTTP call subflow”: {
“main”: [
[
{
“node”: “Loop Over Items”,
“type”: “main”,
“index”: 0
}
]
]
},
“Add execution result”: {
“main”: [
[
{
“node”: “Ack subflow”,
“type”: “main”,
“index”: 0
}
]
]
},
“test endResult cnt”: {
“main”: [
[
{
“node”: “Yes, we made it”,
“type”: “main”,
“index”: 0
}
],
[
{
“node”: “Listen for subflow callback”,
“type”: “main”,
“index”: 0
}
]
]
},
“Ack subflow”: {
“main”: [
[
{
“node”: “test endResult cnt”,
“type”: “main”,
“index”: 0
}
]
]
},
“Yes, we made it”: {
“main”: [
[
{
“node”: “Respond to Webhook”,
“type”: “main”,
“index”: 0
}
]
]
},
“When Executed by Another Workflow”: {
“main”: [
[
{
“node”: “Generate item with x objects”,
“type”: “main”,
“index”: 0
}
]
]
}
},
“pinData”: {},
“meta”: {
“instanceId”: “58aa91dfb81527f3ab9d46e751edfcf1e8801ddcdd145e7779c36f37dcdfa475”
}
}

{
“nodes”: [
{
“parameters”: {
“jsCode”: “// Loop over input items and add a new field called 'addition' to the JSON of each one\nfor (const item of $input.all()) {\n item.json.addition = 'Data added by subflow';\n}\n\nreturn $input.all();”
},
“type”: “n8n-nodes-base.code”,
“typeVersion”: 2,
“position”: [
440,
0
],
“id”: “012a8cb9-a387-48ba-bc37-e0bec572f884”,
“name”: “Generate subflow data (do stuff)”
},
{
“parameters”: {
“httpMethod”: “POST”,
“path”: “parallel-webhook-call”,
“responseMode”: “responseNode”,
“options”: {}
},
“type”: “n8n-nodes-base.webhook”,
“typeVersion”: 2,
“position”: [
0,
0
],
“id”: “1c94229e-35de-4fe4-ae33-6cd907667d6d”,
“name”: “Webhook”,
“webhookId”: “b115cd5b-8624-47fb-a710-a3fd08aa8326”
},
{
“parameters”: {
“method”: “POST”,
“url”: “={{ $('Webhook').item.json.headers.callbackurl }}”,
“sendBody”: true,
“bodyParameters”: {
“parameters”: [
{
“name”: “returnStatus”,
“value”: “=1”
},
{
“name”: “id”,
“value”: “={{ $('Webhook').item.json.body.id }}”
}
]
},
“options”: {}
},
“type”: “n8n-nodes-base.httpRequest”,
“typeVersion”: 4.2,
“position”: [
900,
0
],
“id”: “d9305549-d610-4307-a66d-1afda9155963”,
“name”: “Send result back to main flow”,
“retryOnFail”: true
},
{
“parameters”: {
“options”: {}
},
“type”: “n8n-nodes-base.respondToWebhook”,
“typeVersion”: 1.1,
“position”: [
220,
0
],
“id”: “f8939365-0cfd-4159-8e3e-6f34f98dddac”,
“name”: “Respond to Webhook”
},
{
“parameters”: {
“amount”: “={{ Math.ceil(Math.random() * (5 - 2) + 2) }}”
},
“type”: “n8n-nodes-base.wait”,
“typeVersion”: 1.1,
“position”: [
660,
0
],
“id”: “666119fe-9cf7-41fa-9e71-b3ea5be1fb91”,
“name”: “Wait random 2-5 secs”,
“webhookId”: “d4515c9b-5b72-47d8-af36-550dd58e103e”
}
],
“connections”: {
“Generate subflow data (do stuff)”: {
“main”: [
[
{
“node”: “Wait random 2-5 secs”,
“type”: “main”,
“index”: 0
}
]
]
},
“Webhook”: {
“main”: [
[
{
“node”: “Respond to Webhook”,
“type”: “main”,
“index”: 0
}
]
]
},
“Send result back to main flow”: {
“main”: [

]
},
“Respond to Webhook”: {
“main”: [
[
{
“node”: “Generate subflow data (do stuff)”,
“type”: “main”,
“index”: 0
}
]
]
},
“Wait random 2-5 secs”: {
“main”: [
[
{
“node”: “Send result back to main flow”,
“type”: “main”,
“index”: 0
}
]
]
}
},
“pinData”: {},
“meta”: {
“instanceId”: “58aa91dfb81527f3ab9d46e751edfcf1e8801ddcdd145e7779c36f37dcdfa475”
}
}
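
In short (for anyone not wanting to read through the JSON), the callback handshake between the main flow and the sub flow boils down to the following, sketched here as plain JavaScript objects rather than the actual node settings:

// 1. Main flow -> sub flow ('HTTP call subflow' node):
//    POST https://n8n-host/webhook/parallel-webhook-call
const requestToSubflow = {
  // resume URL of the paused 'Listen for subflow callback' Wait node
  headers: { callbackurl: '{{ $execution.resumeUrl }}' },
  // which item this subflow run should process
  body: { id: '{{ $json.id }}' },
};

// 2. Sub flow -> main flow ('Send result back to main flow' node), once its work is done:
//    POST to the callbackurl it received in step 1
const callbackToMainFlow = {
  body: { returnStatus: 1, id: '{{ $("Webhook").item.json.body.id }}' },
};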

Information on your n8n setup

  • n8n version: 1.88.0
  • Database: sqlite
  • n8n EXECUTIONS_PROCESS default:
  • Running n8n via: Docker Compose
  • Operating system: Debian Linux 12

From what little I understood, it looks like your workflow is returning results before the tasks have actually finished. The LangChain tool in n8n is designed to reply quickly so the AI doesn't hang or wait too long. Instead of putting all your parallel tasks directly in the AI-triggered workflow, I would move them into a separate sub-workflow and call that from the AI-triggered workflow in a way that waits until everything is done before replying.


Thanks for the suggestion. I tried it, but the behaviour is the same when I implement a caller flow, even with “Wait for Sub-Workflow Completion” enabled.

What happens is that the Wait node (set to resume on webhook call) returns data, causing this behaviour.

I’ve implemented the following example from the documentation, where the “Webhook Callback Wait” clearly returns data to the calling workflow on the first callback that is received.

Parallel pattern example: Pattern for Parallel Sub-Workflow Execution Followed by Wait-For-All Loop | n8n workflow template

I think in the example the main workflow does trigger the sub-workflow, but after the loop there is an IF node that checks whether all the jobs have been executed before proceeding. A screenshot of your workflow would help, but my guess is that you are using a Wait node that resumes as soon as any one callback hits, and the example doesn't use the Wait node that way. In the example, the IF condition checks whether all jobs are complete; if not, it waits again. It's more of a polling loop that only exits once all the jobs have executed.
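
For reference, this is roughly what the template's wait-for-all loop does, written out as plain JavaScript (a sketch based on the “Add execution result” Code node and the “test endResult cnt” IF node posted above, not a drop-in replacement):

// One pass of the wait-for-all loop, after a callback resumes the Wait node:
const expected = $('Generate item with x objects').all().length; // how many subflows were started
const callback = $('Listen for subflow callback').first().json.body; // payload that resumed the Wait node
const json = $('test endResult cnt').first().json;
if (!json.endResult) json.endResult = []; // initialise the accumulator on the first callback
json.endResult.push(callback); // record this callback
// The IF node then compares json.endResult.length with `expected`:
// equal -> all subflows have reported back, exit the loop;
// not equal -> go back to the Wait node and pause for the next callback.
return [json];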

Here are some screenshots. What happens is that the moment the first “Listen for subflow callback” receives data, an empty object is sent to the trigger flow. I verified this by putting a 10-second Wait before that node, doing a run, and comparing it to a run with the 10-second Wait placed after it.
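
For anyone who wants to reproduce this, a throwaway debug Code node like the following (purely hypothetical, the names are placeholders) placed directly after “Listen for subflow callback” shows exactly when the Wait node resumes and with what payload:

// Hypothetical debug step: log when the Wait node resumed and what it handed over
const resumed = $('Listen for subflow callback').first().json;
console.log('Wait node resumed at', new Date().toISOString(), 'with:', JSON.stringify(resumed));
return [{ json: { resumedAt: new Date().toISOString(), resumed } }];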

Trigger flow:

Main flow:

Sub flow: