AI Agent Error When Tool Used

Describe the problem/error/question

Whenever I add a tool to an AI Agent in conjunction with an AWS Bedrock Chat Model, I get this error. It happens with a simple HTTP Request node added as a tool for the agent, with a Salesforce tool, etc.

Problem in node ‘AI Agent’

The model returned the following errors: messages.1: tool_use ids were found without tool_result blocks immediately after: tooluse_EMMzeKmkTierw_4wo21gUA. Each tool_use block must have a corresponding tool_result block in the next message.
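
For context on what the model is objecting to: Claude's tool-calling protocol expects every tool_use block emitted by the assistant to be answered by a tool_result block carrying the same id in the very next message. A minimal sketch of that pairing, with an illustrative id, tool name, and values (not taken from the failing run), shown in Anthropic's snake_case form:

[
  {
    "role": "assistant",
    "content": [
      {
        "type": "tool_use",
        "id": "tooluse_example_id",
        "name": "HTTP_Request",
        "input": { "url": "https://example.com" }
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "tooluse_example_id",
        "content": "…tool output…"
      }
    ]
  }
]

The error above means the first message was sent to the model, but the follow-up message carrying the tool_result never arrived.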

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

{
  "nodes": [
    {
      "parameters": {},
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [0, 0],
      "id": "94bd7272-b8f8-4dd4-a8f7-8931981acbfd",
      "name": "When clicking ‘Test workflow’"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "Use the HTTP request tool ",
        "options": {
          "passthroughBinaryImages": false
        }
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.9,
      "position": [440, 0],
      "id": "f22cef13-e19e-4621-b4a9-1185cd09c13e",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": "={{ $vars.psz_claude_3_7_sonnet_model }}",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatAwsBedrock",
      "typeVersion": 1,
      "position": [240, 300],
      "id": "fdbad5c3-531f-4d8d-ac60-b29d8398cae8",
      "name": "AWS Bedrock Chat Model",
      "credentials": {
        "aws": {
          "id": "GdCSzhzbNbOL0W1E",
          "name": "AWS n8n Bedrock Access Key"
        }
      }
    },
    {
      "parameters": {
        "url": "=https://testorg.my.salesforce.com/services/data/v58.0/sobjects",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "salesforceOAuth2Api",
        "options": {}
      },
      "type": "n8n-nodes-base.httpRequestTool",
      "typeVersion": 4.2,
      "position": [600, 220],
      "id": "55239b00-1e72-440b-a82c-ef370631e611",
      "name": "HTTP Request",
      "credentials": {
        "salesforceOAuth2Api": {
          "id": "h2Suh4P1ocJtfXrS",
          "name": "Salesforce account 6"
        }
      }
    }
  ],
  "connections": {
    "When clicking ‘Test workflow’": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "AWS Bedrock Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "HTTP Request": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "d017d4534dd6255523c25a9fdf7c245c996b46a1550e68b36035c8572d5db390"
  }
}

Information on your n8n setup

  • n8n version: 1.91.3
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: Mac Sequoia 15.5

The error suggests that the AI Agent is generating a tool_use block, but n8n is not returning the tool's output to the model as a tool_result block in the next message.

This can happen if:

  1. The tool is not correctly connected to or defined as an ai_tool.
  2. The AI Agent or the LangChain agent is not configured to handle the tool_use/tool_result sequence correctly.
  3. The model version (Claude) does not support the exact pattern used by n8n.
  4. n8n v1.91.3 may still have bugs in the LangChain Agent + AWS Bedrock integration.

Verify that the AI Agent node handles the result
Confirm that the node is configured to automatically wait for and forward results. Some versions of n8n required manual configuration.

Claude model may require explicit structure
Claude requires that the tool result be returned in the very next message (as in the structure shown above); make sure the n8n AI Agent does this within the same cycle.

Manually add tool_result (if necessary)
This is not standard in n8n, but it can be done with an intermediate Function node that returns the tool output to the model in the structure Claude expects (see the sketch below).
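
If you attempt that, the shape the model ultimately needs back is a user message containing a tool_result block whose tool_use_id matches the id reported in the error. A rough, purely illustrative sketch of that structure (the output string is a placeholder, and whether an intermediate node can actually feed this back into the AI Agent depends on your setup):

{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "tooluse_EMMzeKmkTierw_4wo21gUA",
      "content": "…the HTTP Request tool's output…"
    }
  ]
}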

Review LangChain + Claude documentation
This behavior is aligned with how LangChain handles tools with Anthropic models. You may need to use a different chain that explicitly processes the result.

I am having the same issue with the AWS Bedrock Chat Model. Like the OP, I am using Claude Sonnet 4 on Bedrock with the AI Agent node, running n8n 1.95.3 locally via Docker. The problem also occurs with Amazon Nova models.

Can anyone from the n8n team opine on this? This is now happening with our Claude 3.7 model as well, and a number of our workflows are dead in the water.

@bartv any chance you can help us out here?

Hi @moulderrc I have been trying to reproduce this with no luck.

Can you please share all the env variables you are setting (without any confidential values)?

Are there any specific inference profile and model combinations you have this problem with?

This seems to be fixed in the latest version. Please upgrade to the latest beta version (1.102.2).
