Agent waits for "content" but tool responds with "response" | Cannot read properties of undefined (reading 'content')

The AI Agent waited for "content", but the HTTP Request tool responded with "response":
{
  "response": "[\n  {\n    \"id\": 1,\n    \"name\": \"Operator A\"\n  },\n  {\n    \"id\": 2,\n    \"name\": \"Operator B\"\n  }\n]"
}
Is there any way to change "response" to "content" (or vice versa), or is there another solution?
[ERROR: Cannot read properties of undefined (reading 'content')]



## Share your workflow
```json
{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        0,
        0
      ],
      "id": "7469cfd5-9e85-4cd6-91ec-6cff97ae1970",
      "name": "When chat message received",
      "webhookId": "98e7f88e-211b-4d09-ae35-2f58e74ae60d"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [
        220,
        0
      ],
      "id": "e4826218-e6c4-46b7-a077-27eee0e5ccd1",
      "name": "AI Agent",
      "retryOnFail": false
    },
    {
      "parameters": {
        "toolDescription": "Get the list of operators.",
        "url": "http://127.0.0.1:8000/operators"
      },
      "type": "@n8n/n8n-nodes-langchain.toolHttpRequest",
      "typeVersion": 1.1,
      "position": [
        400,
        180
      ],
      "id": "10ce94f2-06c2-44a5-8a5e-79d90ef831e0",
      "name": "HTTP Request"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatGroq",
      "typeVersion": 1,
      "position": [
        220,
        180
      ],
      "id": "4b5f86bd-abb9-4591-bb64-7c91bbc9379e",
      "name": "Groq Chat Model",
      "credentials": {
        "groqApi": {
          "id": "4kKx6XT7JZDkChBP",
          "name": "Groq account"
        }
      }
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "HTTP Request": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Groq Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "06a1dc1315b2c9721353898215ed448422f9ffef586398d84a81f1de09229fe2"
  }
}
```

## Share the output returned by the last node
{
  "response": "[\n  {\n    \"id\": 1,\n    \"name\": \"Operator A\"\n  },\n  {\n    \"id\": 2,\n    \"name\": \"Operator B\"\n  }\n]"
}
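
For reference, the string inside `response` decodes to a plain JSON array of operators, so the data itself is intact; only the property name the agent expects differs:

```json
[
  {
    "id": 1,
    "name": "Operator A"
  },
  {
    "id": 2,
    "name": "Operator B"
  }
]
```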

## Information on your n8n setup
- **n8n version:** 1.77.0
- **Running n8n via:** self-hosted locally (installed globally via npm `-g`)
- **Operating system:** Windows 11

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version: 1.77.0
  • Database (default: SQLite): no database
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I don't know
  • Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted locally (npm `-g`)
  • Operating system: Windows 11

Hello, you can try changing the AI model. In the case of Llama 70B Versatile, I think it will work.
My idea (I am not sure at all) is that for tool-calling agents you have to use a more advanced AI model.
Good luck!
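
For illustration only, a minimal sketch of that suggestion, assuming the Groq Chat Model node selects the model through a `model` parameter (the exact field name can differ between node versions) and that `llama-3.3-70b-versatile` is the Groq-hosted model meant here:

```json
{
  "parameters": {
    "model": "llama-3.3-70b-versatile",
    "options": {}
  },
  "type": "@n8n/n8n-nodes-langchain.lmChatGroq",
  "typeVersion": 1,
  "name": "Groq Chat Model"
}
```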


Hi again, if you are using a more basic model, instruct the model in the agent prompt to use a property called "Response" when the agent calls the tool. That also works.
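
A minimal sketch of that idea, assuming the instruction is added through the AI Agent node's system message option (the prompt wording here is only an example, not the exact text required):

```json
{
  "parameters": {
    "options": {
      "systemMessage": "When you call the HTTP Request tool that lists operators, the result is a JSON object with a single \"response\" property. Read the operator data from that property before answering."
    }
  },
  "type": "@n8n/n8n-nodes-langchain.agent",
  "typeVersion": 1.7,
  "name": "AI Agent"
}
```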


@Pp_Pp Hi, okay, I will try your advice. Thanks :heart:.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.