Enable Reasoning Parameters Across All Compatible Models in AI Agent Node

I’ve been waiting for over 8 weeks hoping this issue would be addressed (or at least partially implemented), but so far, no fix or update has been made — perhaps because this limitation hasn’t yet been mapped internally.

When using AI Agents with the newer models that support native reasoning capabilities, it becomes extremely frustrating to work with the n8n AI Agent node. That’s because, in 99% of the compatible models, the reasoning functionality is not exposed through the interface — making it impossible to:

• Enable the model’s reasoning mode,

• Select the reasoning effort level (reasoning_effort = low / medium / high, when available),

• Or define how many tokens the model is allowed to use specifically for reasoning.

This is really limiting. In order to leverage these reasoning features, we are forced to use custom HTTP requests to the provider or to OpenRouter, which means we lose access to all the additional features and convenience provided by the AI Agent node.
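For context, here is roughly what those custom HTTP requests look like today. This is a minimal sketch, assuming the current provider request formats (OpenAI's `reasoning_effort` field, OpenRouter's `reasoning` object, Anthropic's `thinking` block); `build_reasoning_payload` is a hypothetical helper for illustration, not an n8n or provider API:

```python
import json

def build_reasoning_payload(provider, model, messages, effort=None, budget_tokens=None):
    """Build a chat-completion request body with the provider-specific
    reasoning fields that the AI Agent node does not currently expose."""
    payload = {"model": model, "messages": messages}
    if provider == "openai" and effort:
        # o-series models accept reasoning_effort: low / medium / high
        payload["reasoning_effort"] = effort
    elif provider == "openrouter":
        # OpenRouter normalizes reasoning settings into a single object
        reasoning = {}
        if effort:
            reasoning["effort"] = effort
        if budget_tokens:
            reasoning["max_tokens"] = budget_tokens
        payload["reasoning"] = reasoning
    elif provider == "anthropic" and budget_tokens:
        # Claude uses an extended-thinking block with a token budget
        payload["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    return payload

messages = [{"role": "user", "content": "Plan a 3-step rollout."}]
body = build_reasoning_payload("openrouter", "x-ai/grok-3-mini", messages, effort="high")
print(json.dumps(body, indent=2))
```

A body like this then has to be sent through a plain HTTP Request node instead of the AI Agent node, which is exactly the inconvenience described above.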

It’s important to highlight that some models already partially support this in the n8n node:

• Anthropic Claude 4 Sonnet → Enable thinking > Thinking Budget (tokens)

• OpenAI o3 → Reasoning Effort: low / medium / high

• OpenAI o4-mini → Reasoning Effort: low / medium / high

However, there are many other models that support reasoning capabilities, but which currently have no option in the AI Agent node to enable reasoning or configure its parameters. Examples include:

• Gemini 2.5 Flash

• Gemini 2.5 Flash Lite

• Gemini 2.5 Pro

• Grok 3 Mini

• Grok 4

• OpenAI GPT-OSS-120B

• GPT-5

• GPT-5 Mini

• GLM 4.5 (via OpenRouter)

• Qwen3 235B A22B Thinking 2507 (via OpenRouter)

• Deepseek R1 0528 (via OpenRouter)

• Perplexity: Sonar Reasoning Pro

…and many others.

Please consider urgently expanding the reasoning configuration options to all compatible models.

Important: This request is specifically about the native reasoning functionality of the models themselves, which is entirely different from the separate “Think Tool” node in n8n.

I like the idea. It’s necessary in some cases where you need a model’s maximum power, and in others you simply don’t need it to analyze the context of an interaction so deeply.

4 Likes

I’m experiencing the exact same issue and it’s been a major blocker for me as well. I rely heavily on models with native reasoning capabilities, and not being able to enable or configure reasoning directly within the n8n AI Agent node makes the workflow unnecessarily complicated.

Like you, I’ve had to resort to custom HTTP calls just to unlock these features — which means losing all the built-in benefits of the AI Agent node. I’m also missing this reasoning configuration option in the OpenAI node itself, which makes it even harder to take advantage of these capabilities without workarounds.

It would be a huge improvement if the reasoning options (enable, effort level, token budget, etc.) were available for all supported models, not just a select few. Really hoping this gets addressed soon, as it would significantly improve how we can use reasoning-enabled models in n8n.

3 Likes

GPT-5 is basically unusable in a chat scenario without this implemented. I would also suggest adding an option to pass custom parameters with the request, so users can work around issues like this in the future without waiting on n8n to catch up to API specs.

6 Likes

Reasoning is no longer optional. On GPT-5 and any model that supports it, performance shifts dramatically based on whether it’s enabled and the chosen reasoning effort level. You can see this with GPT-OSS 120B, Gemini 2.5 Pro, Gemini 2.5 Flash, GLM-4.5, Grok 3 Mini, DeepSeek and others.

That’s why it’s frustrating that n8n still doesn’t expose these controls. If more n8n devs were actually building AI apps inside n8n, the need would be obvious: let us enable/disable reasoning, set the reasoning effort level, and allocate a token budget.

This has been available across multiple models for nearly a year.

7 Likes

Yeah this was needed yesterday. It’s not even mildly difficult to implement either.

4 Likes

Same here. I am using OpenRouter to connect all LLM models, and being able to enable reasoning with a simple toggle is really essential for me.

3 Likes

We wrote the Python code manually, but it’s not very convenient.

3 Likes

Would be great to have it native on the node—using the n8n AI Agent’s features just works way better than coding things manually.

4 Likes

I’m really frustrated that I can’t use this feature natively in the agent nodes—it’s like having a Ferrari without any fuel.

3 Likes

Adding my vote to this, and requesting the team to perhaps share an ETA for this feature.

Can we just have the ability to modify any custom argument? I’m sure things like this will change in the future, and having the ability to set custom args will future-proof this issue.
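One way such a generic escape hatch could work: take the request body the node would normally generate and shallow-merge user-supplied JSON over it, so any future provider parameter is reachable without a node update. A hypothetical sketch (`with_custom_params` is not an existing n8n function):

```python
import json

def with_custom_params(base_body: dict, custom_json: str) -> dict:
    """Shallow-merge user-supplied JSON over the node's generated body,
    letting users set parameters the UI does not know about yet."""
    custom = json.loads(custom_json)
    if not isinstance(custom, dict):
        raise ValueError("custom parameters must be a JSON object")
    return {**base_body, **custom}

base = {"model": "gpt-5", "messages": [{"role": "user", "content": "hi"}]}
merged = with_custom_params(base, '{"reasoning_effort": "high"}')
print(merged["reasoning_effort"])  # → high
```

A later merge wins over the node's defaults, which is the point: the user can override anything the UI has not caught up with yet.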

vote up, I need that option too

Does anyone have an HTTP Request node workaround to post here?

created an account to vote for this feature, it is crucial…

adding a vote to this too

Up. I have the same problem.

Is there really no way to turn on reasoning or specify the reasoning tokens?

Reasoning is widespread enough that it should definitely be baked in, but the simplest future-proof option would seem to be allowing custom params.

1 Like

I don’t understand the logic behind disabling Ollama’s default thinking feature. The Ollama API already includes options like on/off, low, and high, so why choose to disable this functionality instead of exposing it, effectively lowering the model’s intelligence?
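For comparison, enabling thinking directly against the Ollama API is a one-field change. A sketch assuming Ollama's `/api/chat` endpoint and its `think` option (a boolean on most thinking models; recent releases also accept effort strings for some models), shown here as payload construction only, with the actual network call left as a comment:

```python
import json

# Request body for POST http://localhost:11434/api/chat
payload = {
    "model": "qwen3",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "think": True,   # the field the n8n Ollama integration currently hides
    "stream": False,
}
print(json.dumps(payload))
# e.g. requests.post("http://localhost:11434/api/chat", json=payload)
```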

1 Like

Definitely a need for modern models.

Hi, as part of our internal one-day hackathon, we tried to implement reasoning support, at least for the OpenRouter model node.

Unfortunately, we didn’t manage to get it working in time.

Here’s the open GitHub PR that adds the reasoning effort option for OpenRouter nodes: feat(Openrouter Node): Support reasoning effort in openrouter node by konstantintieber · Pull Request #21779 · n8n-io/n8n · GitHub

At the same time, we tried adding support for defining model options as JSON. This PR is also not ready yet: feat(Openrouter Node): Support setting chat model options as JSON by konstantintieber · Pull Request #21780 · n8n-io/n8n · GitHub

2 Likes