Missing Reasoning Effort Parameters in Most Compatible Models

I’ve been waiting for over 8 weeks hoping this issue would be addressed (or at least partially implemented), but so far, no fix or update has been made — perhaps because this limitation hasn’t yet been mapped internally.

When using AI Agents with the newer models that support native reasoning capabilities, it becomes extremely frustrating to work with the n8n AI Agent node. That’s because, in 99% of the compatible models, the reasoning functionality is not exposed through the interface — making it impossible to:

• Enable the model’s reasoning mode,

• Select the reasoning effort level (reasoning_effort = low / medium / high, when available),

• Or define how many tokens the model is allowed to use specifically for reasoning.

This is really limiting. In order to leverage these reasoning features, we are forced to use custom HTTP requests to the provider or to OpenRouter, which means we lose access to all the additional features and convenience provided by the AI Agent node.
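To illustrate the workaround described above, here is a minimal sketch of the kind of raw HTTP payload we currently have to build by hand. The parameter name `reasoning_effort` follows OpenAI's public Chat Completions API; the model name and prompt are placeholders, and actually sending the request would of course need a real API key.

```python
import json
import urllib.request

def build_reasoning_request(model: str, prompt: str, effort: str = "medium") -> dict:
    """Build a Chat Completions payload with an explicit reasoning effort."""
    assert effort in ("low", "medium", "high")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # The parameter the AI Agent node does not expose for most models:
        "reasoning_effort": effort,
    }

payload = build_reasoning_request("o3", "Summarize this ticket.", effort="high")
print(json.dumps(payload, indent=2))

# Sending it would look roughly like this (placeholder key, not executed here):
# req = urllib.request.Request(
#     "https://api.openai.com/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer sk-...", "Content-Type": "application/json"},
# )
# resp = urllib.request.urlopen(req)
```

Doing this in an HTTP Request node works, but you give up memory, tools, and everything else the AI Agent node wires up for you.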

It’s important to highlight that some models already partially support this in the n8n node:

• Anthropic Claude 4 Sonnet → Enable thinking > Thinking Budget (tokens)

• OpenAI o3 → Reasoning Effort: low / medium / high

• OpenAI o4-mini → Reasoning Effort: low / medium / high

However, there are many other models that support reasoning capabilities, but which currently have no option in the AI Agent node to enable reasoning or configure its parameters. Examples include:

• Gemini 2.5 Flash

• Gemini 2.5 Flash Lite

• Gemini 2.5 Pro

• Grok 3 Mini

• Grok 4

• OpenAI GPT-OSS-120B

• GPT-5

• GPT-5 Mini

• GLM 4.5 (via OpenRouter)

• Qwen3 235B A22B Thinking 2507 (via OpenRouter)

• Deepseek R1 0528 (via OpenRouter)

• Perplexity: Sonar Reasoning Pro

…and many others.

Please consider urgently expanding the reasoning configuration options to all compatible models.

Important: This request is specifically about the native reasoning functionality of the models themselves, which is entirely different from the separate “Think Tool” node in n8n.

11 Likes

+1 on this. Same issue here.

2 Likes

Is there any planned implementation time?

1 Like

I agree — this is a serious limitation.

reasoning.effort is critical for GPT-5, Mini, and Nano, as it directly affects latency and cost. Without it, simple tasks run 3–4× slower. This setting should be available for all reasoning-capable models.

3 Likes

Reasoning isn’t optional anymore. Across GPT-5 and every model that supports it, performance swings massively based on whether you enable it and at what reasoning effort level.

You can see this with GPT-OSS 120B, Gemini 2.5 Flash, GLM-4.5, Grok 3 Mini, and others. The frustrating part is that n8n still doesn’t expose these controls — if the n8n devs were actually building AI apps with n8n, they’d feel the urgency.

And this isn’t new: reasoning has been in multiple models for almost a year, yet there’s still no way in n8n to enable it, set the reasoning effort level, and allocate a reasoning token budget.

2 Likes

Agreed — I hope they add it to the LLM nodes, including OpenRouter, ASAP.

1 Like

I found that there is already a PR for this: feat(openai): add reasoning_effort option to chat completions by agniiva · Pull Request #13056 · n8n-io/n8n · GitHub (GHC-731 internally)
However, it seems to be a fairly old PR and the n8n team isn’t really looking at this one in particular. Let’s hope they implement it ASAP 🤞

4 Likes

Try this 🙂

2 Likes

What kind of node or model is this? I don’t see how to add the “Reasoning Effort” option (I’m using Azure OpenAI).

Hi Grzegorz,
Just to clarify, the “Reasoning Effort” option is currently available for GPT-5 because the devs added it manually in a recent update.
However, all the other compatible models — arguably more important ones — are still being ignored. This feature really should be enabled for all compatible models, not just GPT-5.

Hi Aethera,

As you can see in the screenshot, you can specify the model by ID and enter the model name as an expression. Then, under Options, you can select Reasoning Effort.

Hey Grzegorz, unfortunately, this “hack” doesn’t work universally, because each model has its own configuration for reasoning. For example, “reasoning effort” is a parameter used by OpenAI models and doesn’t apply to Google’s models, even if you use an OpenRouter credential and add a model like Gemini 2.5 Pro. You would still need to configure the number of tokens dedicated to reasoning, and Google’s settings are different. The same goes for Deepseek, Qwen, and others — they all handle this differently.

Unfortunately, this hack isn’t a reliable solution for every situation.
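To make the divergence above concrete, here is a rough sketch of how the same "enable reasoning" intent is spelled across providers. The parameter shapes follow each provider's public API docs as I understand them (OpenAI's `reasoning_effort`, Anthropic's `thinking` budget, Gemini's `thinkingConfig`, OpenRouter's unified `reasoning` object) — treat the exact names as assumptions to verify against current docs, not as guaranteed signatures.

```python
def reasoning_config(provider: str, effort: str = "medium", budget_tokens: int = 2048) -> dict:
    """Return the provider-specific request fragment that enables reasoning."""
    if provider == "openai":
        # OpenAI: a single enum parameter.
        return {"reasoning_effort": effort}
    if provider == "anthropic":
        # Anthropic: an explicit thinking token budget.
        return {"thinking": {"type": "enabled", "budget_tokens": budget_tokens}}
    if provider == "google":
        # Gemini: a thinking budget nested inside generationConfig.
        return {"generationConfig": {"thinkingConfig": {"thinkingBudget": budget_tokens}}}
    if provider == "openrouter":
        # OpenRouter: a unified "reasoning" object it translates per model.
        return {"reasoning": {"effort": effort}}
    raise ValueError(f"unknown provider: {provider}")
```

Which is exactly why a single hard-coded dropdown in one node can’t cover every model — the node would need per-provider reasoning options.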

Does anyone have an HTTP Request node workaround for OpenRouter?

1 Like

It is an urgent problem — we are waiting for the reasoning option for all models, especially the widely used OpenAI and Grok models.

2 Likes

We still need this for the other AI nodes too (not just the OpenAI node).

We need this on the AWS Bedrock models too. Multiple Bedrock models have supported reasoning since the Claude 3.7 release, but it cannot be enabled in the node’s options. This is the biggest feature gap in n8n, IMO.
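For Bedrock specifically, a possible direct-API workaround until n8n exposes this: the Converse API accepts provider-specific fields via `additionalModelRequestFields`, which is how extended thinking is enabled for Claude on Bedrock. The model ID and budget below are illustrative examples; actually running this needs boto3 and AWS credentials, so only the request construction is shown.

```python
def build_converse_kwargs(model_id: str, prompt: str, budget_tokens: int = 2000) -> dict:
    """Build kwargs for bedrock-runtime's converse() with thinking enabled."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        # Provider-specific passthrough field; this shape is Anthropic's
        # extended-thinking config as documented for Bedrock.
        "additionalModelRequestFields": {
            "thinking": {"type": "enabled", "budget_tokens": budget_tokens}
        },
    }

kwargs = build_converse_kwargs(
    "anthropic.claude-3-7-sonnet-20250219-v1:0", "Plan the migration."
)

# Sending it (not executed here, needs AWS credentials):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**kwargs)
```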

Hi team, is there any workaround? Thanks.