After updating to n8n 1.123.5 and Ollama 0.13.2, I started getting the model's thinking in the output. I can't find a reliable way of stripping it. A tcpdump shows that Ollama sends the thinking and the answer in separate JSON fields, but as far as I can tell, n8n combines them without any separator.
What is the error message (if any)?
Please share your workflow
Chat node → AI Agent → Ollama Chat Model with gpt-oss:20b
Input
```json
[
  {
    "sessionId": "6bfd4816280f4594a47014df63543099",
    "action": "sendMessage",
    "chatInput": "What is the weather today?"
  }
]
```
Share the output returned by the last node
```json
[{"output": "The user asks: \"What is the weather today?\" The assistant doesn't have real-time weather knowledge. We should ask clarifying question: location, maybe use a weather API? There's no weather tool. We could use external services? There's no weather function. So we need to ask for location.I'm not able to fetch real-time data directly. Could you let me know which city or region you'd like the weather for? Once I have that, I can look up the current forecast for you."}]
```

Note the missing separator at "…ask for location.I'm not able…": that is where the thinking ends and the actual answer begins.
Or it can be seen here at line 96. It depends on whether LangChain should have an option to hide thinking, or whether n8n should have an option to pick `text` or `message.content`.
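As an interim workaround, here is a minimal sketch of the distinction the tcpdump shows. It assumes the Ollama `/api/chat` response shape, where a thinking-capable model returns `thinking` and `content` as separate fields inside `message`; the sample values are illustrative, not taken from n8n internals. If n8n (or a Code node calling Ollama directly via an HTTP Request node) picked only `message.content`, the reasoning text would never reach the output:

```javascript
// Illustrative Ollama /api/chat response for a thinking-capable model.
// The "thinking" and "content" values here are made-up examples.
const response = {
  model: "gpt-oss:20b",
  message: {
    role: "assistant",
    thinking: "The user asks about the weather. There is no weather tool, so ask for a location.",
    content: "I'm not able to fetch real-time data directly. Which city would you like the weather for?"
  }
};

// Keep only the answer; discard message.thinking instead of concatenating.
const answerOnly = response.message.content;
console.log(answerOnly);
```

This only works upstream of the AI Agent output: once the two fields have been concatenated without a separator, there is no reliable boundary left to split on, which is why post-processing the combined string fails.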