I noticed a significant behavior change in the AI Agent node after upgrading n8n from v1.122.5 to v1.123.0.
The issue is that the AI Agent now includes its internal “thinking process” or reasoning in the final output. This is particularly evident with local models like qwen3:14b or gpt-oss:20b running via Ollama, where the output reveals the entire reasoning chain before the actual response. In the previous version (v1.122.5), the output was clean and only contained the final response to the user.
My Question
Is this a change in the underlying LangChain prompt templates in v1.123.0? How can I configure the node in the new version to suppress this “thinking process” and strictly output the final response?
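For anyone needing a stopgap while this is investigated: qwen3-style models wrap their reasoning in `<think>…</think>` tags, so one common workaround is a Code node placed after the AI Agent that strips those tags before the response is used. A minimal sketch (the `stripThinking` helper and the sample string are my own, written as a plain function so it runs outside n8n; in a Code node you would apply the same regex to the agent's output field):

```javascript
// Hypothetical post-processing for agent output that leaks reasoning.
// Removes <think>...</think> blocks (the tag format qwen3-style models
// emit) and returns only the final answer.
function stripThinking(text) {
  return text.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

const raw = '<think>Okay, the user said "hi"...</think>Hello! How can I assist you today?';
console.log(stripThinking(raw)); // "Hello! How can I assist you today?"
```

This only treats the symptom, of course; it does not explain why the node stopped filtering the reasoning after the upgrade.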
What is the error message (if any)?
There is no system error message, but the output content is incorrect/unexpected.
Example 1: Model qwen3:14b (severe case). The model outputs a long internal monologue checking guidelines before greeting the user:
"Okay, the user said 'hi'. I need to respond appropriately. Let me check the guidelines. The response should be friendly and open-ended. Maybe ask how I can assist them. Keep it simple and welcoming. Avoid any technical jargon. Make sure to use proper grammar and a warm tone. Alright, that should work. Hello! How can I assist you today?"
Agent response: [Used tools: Tool: set_language_to_spanish, Input: {}, Result: [{"user_id":5747,"language_selected":"Spanish - Chile","updated_at":"2025-12-05T20:44:35.200Z"}]] Listo, ahora hablamos en español. ¿En qué más te puedo ayudar?
Expected response: Listo, ahora hablamos en español. ¿En qué más te puedo ayudar?
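As a stopgap, the leaked tool trace can also be stripped in a Code node after the agent. A rough sketch (the `stripToolTrace` helper is hypothetical, and it assumes the trace always appears as a leading `[Used tools: …]` block whose JSON result ends in `]]`; a naive `/\[.*?\]/` won't work because the tool result itself contains brackets):

```javascript
// Hypothetical cleanup for agent replies that leak a "[Used tools: ...]"
// prefix before the real answer. Drops everything up to and including the
// last "]]" when the reply starts with the tool-trace marker.
function stripToolTrace(text) {
  if (!text.startsWith('[Used tools:')) return text;
  const end = text.lastIndexOf(']]');
  return end === -1 ? text : text.slice(end + 2).trim();
}

const reply = '[Used tools: Tool: set_language_to_spanish, Input: {}, Result: [{"user_id":5747}]] Listo, ahora hablamos en español.';
console.log(stripToolTrace(reply)); // "Listo, ahora hablamos en español."
```

Note the assumption that the final answer itself never contains `]]`; if it might, a stricter parse of the trace block would be needed.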
Same problem here! From time to time the AI Agent node sends a response containing JSON with the tool usage or thinking output. I started seeing this after updating the AI Agent node to version 3. I'm going to roll back to AI Agent node version 2.2 to see whether the error is in the node version or the n8n version.
I'm using gpt-4.1-mini as the chat model, but I don't think it's a problem with the model.
I also tried enabling "Return Intermediate Steps" and then disabling it again in n8n version 1.122.5, and my Ollama models (qwen3:14b / gpt-oss:20b) work fine there.
I think my problem only happens with thinking models; instruct variants like qwen3:30b-instruct work fine, without any "thinking process" in the output.
I’ve deployed a new version of n8n — 2.3.0 — but the issues with gpt-oss:20b persist.
On the old setup (AI Agent node version 2.2 and n8n version 1.108.2), everything works correctly.
Sorry, but does anyone have a guide on how to fix this issue, or even on how to correctly update @langchain/ollama to 1.0.3?
Or maybe it's better to wait for an official n8n update?
I am on n8n version 2.2.4, using local gpt-oss, and facing the same issue people mentioned.