I don’t understand the logic behind disabling Ollama’s default thinking feature.
Exactly my point. I had several workflows that use the AI Agent node with the Ollama Chat Model for complex tasks. Qwen3 8B with a 20k context and a good system prompt could do a lot for me, consistently making multiple tool calls, and I could see the reasoning from intermediate steps in the logs, which helped me refine my prompts a lot. But after some recent upgrade (I don't know which one, since I updated both Ollama and n8n recently), I no longer see those <think>..</think> tags in the logs. The model just answers straight away, which makes it significantly dumber.
I didn’t even notice this until my agents started outputting garbage results.
There should be a way to enable this from the model options, where we already have plenty of other settings.
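In the meantime, a possible workaround is to bypass the chat node and hit Ollama's API directly (e.g. from an HTTP Request node). As I understand it, recent Ollama versions added a top-level `think` flag on `/api/chat` that controls whether the model emits its reasoning; a rough sketch of the request payload, assuming that flag and the `qwen3:8b` model name from my setup (double-check both against your installed version's API docs):

```python
import json

# Sketch of an /api/chat payload with thinking enabled.
# Assumes a recent Ollama build where the top-level "think" flag exists;
# field names are from the API docs as I understand them, not verified
# against every version.
payload = {
    "model": "qwen3:8b",
    "messages": [{"role": "user", "content": "Plan the tool calls first."}],
    "think": True,                   # ask the model to include its reasoning
    "options": {"num_ctx": 20480},   # 20k context, matching my workflow
    "stream": False,
}

print(json.dumps(payload, indent=2))
# POST this to http://localhost:11434/api/chat; if the flag works on your
# version, the reasoning should come back in the response (as a separate
# "thinking" field rather than inline <think> tags).
```

Not a real fix for the AI Agent node, but it at least lets you confirm whether your model still produces reasoning at all after the upgrade.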