Enable Reasoning Parameters Across All Compatible Models in AI Agent Node

I don’t understand the logic behind disabling Ollama’s default thinking feature.

Exactly my point. I had several workflows that use the AI Agent node with Ollama chat models for complex tasks. Qwen3 8B with a 20k context and a good system prompt was able to do a lot for me, making multiple tool calls consistently. I could see the reasoning from the intermediate steps in the logs, and that helped me refine my prompts a lot. But after some recent upgrade (I don't know which one caused it; I upgraded both Ollama and n8n recently), I don't see those `<think>…</think>` tags in the logs anymore. The model just answers straight away, which makes it significantly dumber.
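For what it's worth, if the reasoning is still present in the raw model output, you can pull the `<think>` block apart from the final answer yourself when inspecting logs. A minimal sketch (`split_think` is just an illustrative name, not anything from n8n or Ollama):

```python
import re

def split_think(raw: str) -> tuple[str, str]:
    """Separate a <think>...</think> reasoning block from the final answer.

    Returns (thinking, answer); thinking is "" when no tags are present.
    """
    m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not m:
        return "", raw.strip()
    thinking = m.group(1).strip()
    # Everything outside the tags is the actual answer text.
    answer = (raw[:m.start()] + raw[m.end():]).strip()
    return thinking, answer

thinking, answer = split_think("<think>plan the tool calls</think>Done.")
print(thinking)  # → plan the tool calls
print(answer)    # → Done.
```

Handy for piping agent logs through when you want to eyeball the reasoning separately from the answer.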

Didn’t even notice this until my agents started outputting garbage results.

There should be a way to enable this from the model options, where we already have plenty of other settings.

OK, I dug a little more into this, and apparently under the hood the models are still reasoning, but the Agent’s output is now “simplified”, so it just omits the reasoning. I still feel there should be an option to disable this “simplified output” mode (the normal “Message a Model” Ollama node already has one), since seeing the reasoning makes it a lot easier to debug and refine the system prompt. Even the token count is affected: it only shows the token count of the actual message, not the reasoning tokens, which again makes adjusting the context size a nightmare. Still figuring out why my agents got dumber even though the model is technically reasoning.
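As a workaround for the hidden token counts, you can hit the local Ollama `/api/chat` endpoint directly with `"think": true` and look at the response yourself: the assistant message then carries a separate `thinking` field next to `content`, and as far as I can tell `eval_count` counts all generated tokens, reasoning included, which is the number that matters for sizing `num_ctx`. A minimal sketch of summarizing such a response (the `sample` dict below is a hypothetical response shape for illustration, not real output):

```python
def summarize_response(resp: dict) -> str:
    """Build a one-line debug summary from an Ollama /api/chat response dict.

    Assumes a thinking-capable model queried with "think": true, where the
    assistant message has both "thinking" and "content" fields.
    """
    msg = resp.get("message", {})
    thinking = msg.get("thinking", "")
    content = msg.get("content", "")
    # eval_count should cover all generated tokens, reasoning included,
    # so it is the figure to watch when tuning num_ctx.
    total = resp.get("eval_count", 0)
    return (f"reasoning chars: {len(thinking)} | "
            f"answer chars: {len(content)} | "
            f"generated tokens (incl. reasoning): {total}")

# Hypothetical response shape, for illustration only:
sample = {
    "message": {"role": "assistant",
                "thinking": "first call the search tool, then summarize",
                "content": "Done."},
    "eval_count": 512,
}
print(summarize_response(sample))
```

Not a substitute for having the option in the Agent node, but it at least makes the real reasoning-token usage visible while tuning the context size.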

Voted up, I need that option too.