AI Agent Node outputs internal "Thinking Process" after upgrading to v1.123.0

Describe the problem/error/question

Hi everyone,

I noticed a significant behavior change in the AI Agent node after upgrading n8n from v1.122.5 to v1.123.0.

The issue is that the AI Agent now includes its internal “thinking process” or reasoning in the final output. This is particularly evident with local models like qwen3:14b or gpt-oss:20b running via Ollama, where the output reveals the entire reasoning chain before the actual response. In the previous version (v1.122.5), the output was clean and only contained the final response to the user.

My Question

Is this a change in the underlying LangChain prompt templates in v1.123.0? How can I configure the node in the new version to suppress this “thinking process” and strictly output the final response?

What is the error message (if any)?

There is no system error message, but the output content is incorrect/unexpected.

Example 1: Model qwen3:14b (severe case)

The model outputs a long internal monologue, checking guidelines before greeting the user.

“Okay, the user said ‘hi’. I need to respond appropriately. Let me check the guidelines. The response should be friendly and open-ended. Maybe ask how I can assist them. Keep it simple and welcoming. Avoid any technical jargon. Make sure to use proper grammar and a warm tone. Alright, that should work. Hello! How can I assist you today? :blush:”

Example 2: Model gpt-oss:20b

“The user just says ‘hi’. We should respond politely. Hello! :waving_hand: How can I help you today?”

Expected Behavior (v1.122.5):

qwen3:14b

“Hello! How can I assist you today?”

gpt-oss:20b

“Hey there! :waving_hand: What’s on your mind today?”

Please share your workflow

Share the output returned by the last node

Please see the comparison screenshots attached below

Information on your n8n setup

  • n8n version: 1.123.0/1.122.5
  • Database (default: SQLite): sqlite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux

I’ve faced the same issue.

User prompt: switch language to spanish

Agent Response: [Used tools: Tool: set_language_to_spanish, Input: {}, Result: [{“user_id”:5747,“language_selected”:“Spanish - Chile”,“updated_at”:“2025-12-05T20:44:35.200Z”}]] Listo, ahora hablamos en español. ¿En qué más te puedo ayudar?

Expected response: Listo, ahora hablamos en español. ¿En qué más te puedo ayudar?
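As a stopgap until the node itself is fixed, the leaked tool log can be stripped in a Code node placed after the agent. This is a sketch, not an official fix: it assumes the leak always appears as a single leading `[Used tools: ...]` block followed by the real reply, which matches the example above but may not hold for every response.

```javascript
// Sketch: strip a leaked "[Used tools: ...]" block from the start of the
// agent output. Nested brackets (e.g. a JSON array in the Result) are
// handled by depth counting; brackets inside JSON strings are not.
function stripToolLog(text) {
  const t = text.trimStart();
  if (!t.startsWith('[Used tools:')) return text;
  let depth = 0;
  let i = 0;
  for (; i < t.length; i++) {
    if (t[i] === '[') depth++;
    else if (t[i] === ']') {
      depth--;
      if (depth === 0) { i++; break; }
    }
  }
  return t.slice(i).trimStart();
}

const raw =
  '[Used tools: Tool: set_language_to_spanish, Input: {}, Result: ' +
  '[{"user_id":5747,"language_selected":"Spanish - Chile"}]] ' +
  'Listo, ahora hablamos en español.';
console.log(stripToolLog(raw)); // "Listo, ahora hablamos en español."
```

In an n8n Code node you would apply `stripToolLog` to the agent's output field on each incoming item; the exact field name depends on your workflow.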

Same problem here! From time to time, the AI Agent node sends a response like this, with JSON containing the tool usage or thinking output. I got this problem when I updated the AI Agent node to version 3; I’m going to go back to AI Agent node version 2.2 to see whether the error is in the node version or the n8n version.
I’m using gpt-4.1-mini as the chat model, but I don’t think the problem is with the model.

The Return Intermediate Steps option is disabled, but I enabled it and then disabled it again, and that’s when these errors started to happen.

  • n8n version: Version: 1.122.5

  • Database (default: SQLite): postgres

  • n8n EXECUTIONS_PROCESS setting (default: own, main): default

  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker

  • Operating system: Linux

What version of the AI Agent node are you using?
Did you enable Return Intermediate Steps and then disable it?

My AI Agent node is version 3.

And I tried enabling Return Intermediate Steps and then disabling it on n8n version 1.122.5; my Ollama models (qwen3:14b / gpt-oss:20b) work fine.

I think my problem only happens when using a thinking model, while instruct models like qwen3:30b-instruct work fine, without any “thinking process” in the output.

Same issue here using the Ollama Chat Model with GPT OSS 20B.

The actual message appears correctly in the Chat Model output; however, the agent output includes the thinking, and the two cannot be separated.

On Version 2.0.2

UPDATE

The only workaround I could come up with is to force JSON mode and then use a structured output parser.

Output parser

Not ideal, but it works.
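Another workaround, if forcing JSON mode is too brittle, is a Code node after the agent that removes the reasoning block with a regex. A minimal sketch, assuming the model wraps its reasoning in `<think>` tags (qwen3 does this; gpt-oss formats its reasoning differently, so this may not catch everything there):

```javascript
// Sketch of an n8n Code-node helper: remove <think>...</think> blocks from
// the agent output before passing it downstream. Assumption: the reasoning
// is delimited by <think> tags, which holds for qwen3 but not necessarily
// for every model.
function stripThinking(text) {
  return text.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

// Inside a Code node you would map this over the incoming items, e.g.:
// return items.map(item => ({
//   json: { output: stripThinking(item.json.output ?? '') },
// }));

console.log(
  stripThinking('<think>the user said hi</think>Hello! How can I assist you today?')
); // "Hello! How can I assist you today?"
```

The non-greedy match keeps multiple `<think>` blocks from being merged into one; if the model ever emits an unterminated `<think>` tag, the block is left in place, so this is a best-effort filter rather than a guarantee.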

Thanks for sharing! It works, but there’s still a problem in my case.

First, the JSON format is not the same: the gpt-oss output field is “message”, not “response”.

Second, it has other output fields, like capabilities and descriptions. Unexpected outputs always make the workflow fail.
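One way to keep the workflow from failing on these shape differences is a small Code-node helper that picks the first plausible answer field instead of hard-coding one. A sketch only: the candidate names below come from this thread (“response”, “message”) plus guesses (“output”, “text”), so extend the list for your own models.

```javascript
// Sketch: tolerate differing JSON shapes across models by returning the
// first known field that looks like the final answer. Field names other
// than "response" and "message" are assumptions, not documented behavior.
function pickAnswer(parsed) {
  const candidates = ['response', 'message', 'output', 'text'];
  for (const key of candidates) {
    if (typeof parsed[key] === 'string' && parsed[key].length > 0) {
      return parsed[key];
    }
  }
  // Nothing matched: fall back to the raw object so the data is not lost.
  return JSON.stringify(parsed);
}

console.log(pickAnswer({ message: 'Hello!', capabilities: ['chat'] })); // "Hello!"
```

Extra fields like capabilities or descriptions are simply ignored, which sidesteps the “unexpected outputs” failures, at the cost of silently dropping anything outside the answer field.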

I have exactly the same issue. I’ve tried using instructions, but the model (gpt-oss:20b) ignores that specific instruction.

Has anyone found a solution yet?

Change the model type and use an instruct model, like qwen3-vl:30b-a3b-instruct or ministral-3:14b. That’s the only way I found that works in n8n.

After some research I found this:

And their PR on GitHub:

Updated @langchain/ollama to 1.0.3 to for separation of thinking by johanatandromeda · Pull Request #23098 · n8n-io/n8n

I tried it and it works (updating @langchain/ollama to 1.0.3): now GPT-OSS and Nemotron don’t include their reasoning in the answer!


It works, thanks for the information!


A new version, [email protected], has been released, which includes GitHub PR 23687.


I’ve deployed a new version of n8n (2.3.0), but the issues with gpt-oss:20b persist.
On the old setup (AI Agent node version 2.2 and n8n version 1.108.2), everything works correctly.

I need this too.


The solution is GitHub PR 23098; please release it.

Yes, I got the same problem in 2.3.0.

Try upgrading @langchain/ollama to 1.0.3 and building a new image.
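One way to do that rebuild is a small custom image layered on top of the official one. This is a sketch, not an official procedure: the base tag and the install path are assumptions about the image layout, and depending on how @n8n/n8n-nodes-langchain resolves its dependencies you may need to install the package elsewhere in the module tree.

```dockerfile
# Sketch only: overlay a newer @langchain/ollama on the official n8n image.
# Base tag and install path are assumptions; verify both for your setup.
FROM n8nio/n8n:1.123.0
USER root
RUN cd /usr/local/lib/node_modules/n8n \
 && npm install @langchain/[email protected]
USER node
```

Build it with `docker build -t n8n-patched .` and point your deployment at the new tag. Waiting for the official release that bundles the fix is of course the safer option.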

Sorry, but does anyone have a guide on how to fix this issue, e.g. how to correctly update @langchain/ollama to 1.0.3?
Or maybe it is better to wait for an official n8n update?

I am on n8n version 2.2.4, using local gpt-oss, and facing the same issue people mentioned.