Strange Bot Responses

Describe the problem/error/question

I have a fairly simple chatbot setup to which I added a vector database and a graph database. Everything works fine except for one thing: I sometimes get "internal commands" included in the bot responses, like this: [David_Expert(parameters0_Value='David brother')]

What is the error message (if any)?

No error messages.

Please share your workflow

Share the output returned by the last node

{
  "action": "saveContext",
  "chatHistory": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "messages",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "how are you? ",
        "additional_kwargs": {},
        "response_metadata": {}
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "messages",
        "AIMessage"
      ],
      "kwargs": {
        "content": "I'm doing well, thanks for asking. A bit wiser, a bit wearier, but still with a sense of humor, I suppose. Life's been treating me kindly, and I've been keeping busy with the things that matter. How about you?",
        "tool_calls": [],
        "additional_kwargs": {},
        "response_metadata": {},
        "invalid_tool_calls": []
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "messages",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "did you have a brother? ",
        "additional_kwargs": {},
        "response_metadata": {}
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "messages",
        "AIMessage"
      ],
      "kwargs": {
        "content": "Yes, I did have a brother, Philip. We grew up together in a lively household, and I have many fond memories of our childhood. He was a bit of a character, and I'm sure he'd have a few choice words to say about me even now. [David_Expert(parameters0_Value='David brother')]",
        "tool_calls": [],
        "additional_kwargs": {},
        "response_metadata": {},
        "invalid_tool_calls": []
      }
    }
  ]
}

Information on your n8n setup

  • n8n version: 1.118.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Unknown, probably default.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 24

Hello @atakumi-martijn

It seems like this is the query the AI agent is sending to the tool, leaking into the chat response.

I think it depends on the model you're using and how well it follows instructions and handles tool calls.
Given that you're using meta-llama/llama-4-maverick:free, try improving the prompt, adjusting the sampling temperature, or switching to a more capable model.