Cannot read properties of undefined (reading 'content')

I have been stuck on this for a long time. Can anyone please help me out?

The issue is that whenever I connect any tool, this error pops up.


Cannot read properties of undefined (reading 'content')

Information on your n8n setup
AI Agent node version 1.8 (Latest)
Database: IDK
Using on n8n Cloud.

Hi, I don't think SerpAPI is an official node. Did you set the environment variable N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE to true?

Oh sorry, I just saw that you are on Cloud (so my statement doesn't actually apply). I'm not sure whether this workflow is supported like this; maybe someone from n8n can give some input.

Can you please open that Output tab, and share the stack-trace that gets displayed in the UI?

n8n version

1.85.4 (Cloud)

Stack trace

TypeError: Cannot read properties of undefined (reading 'content')
    at ToolCallingAgentOutputParser._baseMessageToString (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/output_parsers/base.cjs:24:31)
    at ToolCallingAgentOutputParser._callWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/output_parsers/base.cjs:49:32)
    at ToolCallingAgentOutputParser._callWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:223:34)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at ToolCallingAgentOutputParser._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:165:9)
    at ToolCallingAgentOutputParser.transform (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)
    at RunnableSequence._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1320:30)
    at RunnableSequence.transform (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)
    at wrapInputForTracing (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:275:30)
    at pipeGeneratorWithSetup (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/utils/stream.cjs:271:19)
    at RunnableLambda._transformStreamWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:296:26)
    at wrapInputForTracing (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:275:30)
    at pipeGeneratorWithSetup (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/utils/stream.cjs:271:19)
    at RunnableLambda._transformStreamWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:296:26)
    at RunnableSequence._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1320:30)

Hi,

Same error here. I am using Http Request node.

TypeError: Cannot read properties of undefined (reading 'content')
    at ToolCallingAgentOutputParser._baseMessageToString (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/output_parsers/base.cjs:24:31)
    at ToolCallingAgentOutputParser._callWithConfig

Unfortunately these stack traces aren't as helpful as I wanted them to be. All I can do is narrow the error down to this line, deep in LangChain's codebase.
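For reference, the failing helper appears to assume the model produced a message object and then reads its `content` property. A minimal JavaScript sketch of that assumption (a simplification for illustration, not the actual LangChain source):

```javascript
// Simplified sketch of what an output parser's message-to-string step does.
// It assumes `message` is a BaseMessage-like object with a `content` field.
function baseMessageToString(message) {
  return typeof message.content === "string"
    ? message.content
    : JSON.stringify(message.content);
}

// If the stream yields no message at all (e.g. an empty or malformed model
// response), `message` is undefined and the property access throws:
try {
  baseMessageToString(undefined);
} catch (e) {
  console.log(e.message); // Cannot read properties of undefined (reading 'content')
}
```

In other words, the trace most likely means the model (or a tool result fed back to it) produced a response that the parser could not turn into a message object at all.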

Can one of you please try to create the smallest possible workflow in which you can reliably reproduce this issue, and then share the workflow and the chat prompt with us :pray:?

I’ve got the same issue

{
  "errorMessage": "Cannot read properties of undefined (reading 'content')",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.86.0 (Self Hosted)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "TypeError: Cannot read properties of undefined (reading 'content')",
      "    at ToolCallingAgentOutputParser._baseMessageToString (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/output_parsers/base.cjs:24:31)",
      "    at ToolCallingAgentOutputParser._callWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/output_parsers/base.cjs:49:32)",
      "    at ToolCallingAgentOutputParser._callWithConfig (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:223:34)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at ToolCallingAgentOutputParser._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:165:9)",
      "    at ToolCallingAgentOutputParser.transform (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)",
      "    at RunnableSequence._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1320:30)",
      "    at RunnableSequence.transform (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)",
      "    at wrapInputForTracing (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:275:30)",
      "    at pipeGeneratorWithSetup (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/utils/stream.cjs:271:19)"
    ]
  }
}

For me this doesn't always happen, and it doesn't happen for every agent. I can reproduce it in my current workflow with one specific agent, though the reproduction is not always the same.

I think I might have found the issue. One of my (workflow) tools returns a parsed JSON structure, and for some reason the AI agent can't handle this. When I switched off the parsed output from my tool and just let the output be a string, the error doesn't appear.
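In case it helps others, a sketch of that workaround in the tool's final Code node. This is n8n-style Code node output; the `response` field name is just illustrative, not something the agent requires:

```javascript
// Whatever structured result the tool produced:
const result = { status: "ok", items: [1, 2, 3] };

// Instead of returning the nested object directly (which appeared to
// trigger the error), serialize it so the agent receives a plain string.
// In the n8n Code node you would end with:
//   return [{ json: { response: JSON.stringify(result) } }];
const output = [{ json: { response: JSON.stringify(result) } }];

console.log(output[0].json.response); // {"status":"ok","items":[1,2,3]}
```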


Edit: after a couple of interactions with my workflow and AI agent, the error message is back…


Edit 2: I keep editing as I progress in debugging, to save everyone some time :joy: I switched my LLM model from Gemini 2.5 Pro to GPT 4.5 and now the error is also gone. Let's see how long this will last.


Lol. I also tried changing LLM. The error was gone for a while, but it came back. :rofl: :weary:

For me, it did work. I switched back, and immediately I got the error. Using GPT 4.5 worked; I’m now going to try out GPT4o

Hey everyone :wave:
Sorry for bringing this post back from the dead but just wanted to contribute should someone else come across this in the future.

I encountered the same error when I was trying to return a non-streaming response - I naively thought it was the same chat completion JSON response. Suffice it to say, your LLM server must support streaming before you can use it for the agent.

The expected response from the LLM server is a text stream (not a single JSON body!) and looks like the following. This is what LangChain is looking for and ultimately parses the response from.

Note: OpenAI's Responses API returns a completely different streaming format!

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" How"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" assist"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" you"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":" today"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}

data: {"id":"chatcmpl-BYWXY...","object":"chat.completion.chunk","created":174...,"model":"gpt-4o-mini","service_tier":"default","system_fingerprint":"fp_03...","choices":[],"usage":{"prompt_tokens":8,"completion_tokens":9,"total_tokens":17,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}

data: [DONE]
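As a rough illustration, reassembling such a stream amounts to stripping the `data: ` prefixes, stopping at `[DONE]`, and concatenating the `delta.content` pieces. A minimal sketch with a shortened sample (trimmed to just the fields that matter here):

```javascript
// Reassemble an OpenAI-style chat.completion.chunk SSE stream into text.
function reassemble(sseText) {
  let out = "";
  for (const line of sseText.split("\n")) {
    if (!line.startsWith("data: ")) continue;      // skip blanks/comments
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;               // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    out += chunk.choices?.[0]?.delta?.content ?? ""; // tolerate empty deltas
  }
  return out;
}

const sample = [
  'data: {"choices":[{"delta":{"role":"assistant","content":""}}]}',
  'data: {"choices":[{"delta":{"content":"Hello"}}]}',
  'data: {"choices":[{"delta":{"content":"!"}}]}',
  'data: [DONE]',
].join("\n\n");

console.log(reassemble(sample)); // Hello!
```

If the server instead sends back one plain JSON completion body, none of these `data:` frames exist, and a parser expecting them ends up with no message to read `content` from.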

Hard to say if this is what the OP is experiencing here, but a way to check is to:

  • Use an HTTP Request node against the Groq endpoint and select a model
  • In the body, be sure to include { "stream": true }
  • Check the response. If it doesn't look like the example above, then that might well be the issue.

I also encountered the same issue today.


Is there any resolution yet?

n8n version - 1.93.0 (Cloud)

Okay… I was using the Gemini Flash 2.5 model and it seems to be an issue with that one only. I switched to Gemini Flash 2.0 and it worked completely fine.

It seems some LLM versions are not producing the response in the expected streaming format, hence the issue.

Same issue today (I am using the Groq Llama 3 model). I believe this is intermittent. Just curious, but can this be prioritized?

I noticed the same thing too. This version worked for me: models/gemini-2.5-flash-preview-04-17. The newer ones that came out started giving this error.

Same for me.

It was probably a mistake on Google’s part, but now the models here are back to normal.

Bringing this post back again. I'm also having this issue, and I also suspect the output of a tool, which is a JSON object that I then need to pass into another tool. I'm using the model gemini-2.5-flash-preview-04-17 and also tried gemini-2.5-flash-preview-05-20.

I think something that complicates things further is that these models don't let you debug correctly. They just show the error "reading 'content'" without showing where it is occurring.

Same issue here… it does happen intermittently. I am also using Gemini Flash 2.5.

If someone finds a fix, please tell us :sweat_smile: