Add Custom Parameters Support to OpenRouter Model Node (and other AI nodes)

The idea is:

Add custom parameters functionality to the OpenRouter model node (and potentially other AI model nodes), allowing users to manually add and configure API-supported parameters beyond the limited set currently available in the node interface.

My use case:

I need to use advanced features of the OpenRouter API, particularly the reasoning tokens parameters (such as reasoning.effort, reasoning.max_tokens, reasoning.exclude, etc.), which are well-documented in the OpenRouter official documentation (Reasoning Tokens | Enhanced AI Model Reasoning with OpenRouter | OpenRouter | Documentation) but cannot be configured in n8n’s OpenRouter node. Currently, I can only access these features through the HTTP Request node with direct API calls, but this means losing all the convenient features of n8n’s AI nodes, such as message management, streaming responses, error handling, etc.
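To make the workaround concrete, here is a sketch of the request body the HTTP Request node has to build by hand. The `reasoning` fields follow OpenRouter's documented reasoning-tokens parameters; the model slug and specific values are only illustrative, not taken from any actual workflow:

```typescript
// Sketch of the raw chat-completions body the HTTP Request node must
// assemble manually. The `reasoning` object uses OpenRouter's documented
// reasoning-tokens parameters; model slug and values are illustrative.
const body = {
  model: "anthropic/claude-3.7-sonnet", // hypothetical example slug
  messages: [{ role: "user", content: "Explain quantum tunnelling briefly." }],
  reasoning: {
    effort: "high", // low | medium | high
    exclude: false, // keep reasoning tokens in the response
  },
};

// Sent with a plain HTTP call, losing the AI node's conveniences
// (message management, streaming, error handling):
// await fetch("https://openrouter.ai/api/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(body),
// });
```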

I think it would be beneficial to add this because:

  1. API Feature Completeness: OpenRouter API supports many advanced parameters, but n8n nodes only expose a small subset, preventing users from fully utilizing the API’s capabilities
  2. Rapid Adaptation to New Features: AI models and API providers frequently release new features; waiting for n8n to add them one by one creates functionality lag
  3. Multi-Channel Compatibility: Different AI service providers have different parameter specifications, and custom parameter functionality could better accommodate this diversity
  4. User Experience: Users shouldn’t have to choose between “using feature-complete HTTP Request” and “using convenient but functionally limited dedicated nodes”
  5. Community Contribution: Advanced users could share their custom parameter configurations, helping other users

Any resources to support this?

  • OpenRouter official documentation on reasoning tokens: Reasoning Tokens | Enhanced AI Model Reasoning with OpenRouter | OpenRouter | Documentation
  • Related community discussions:
    • “Enable Reasoning Parameters Across All Compatible Models in AI Agent Node” (40 votes)
    • “Missing Reasoning Effort Parameters in Most Compatible Models”
    • “Use OpenRouter Models Parameter” (discussing HTTP Request as workaround)
  • Examples of models with partial reasoning parameter support:
    • Anthropic Claude 4 Sonnet → Enable thinking > Thinking Budget (tokens)
    • OpenAI o3 → Reasoning Effort: low / medium / high
    • OpenAI o4-mini → Reasoning Effort: low / medium / high

Are you willing to work on this?

While I don’t have experience developing n8n nodes, I’m willing to:

  1. Test any related feature implementations
  2. Provide detailed use cases and testing scenarios
  3. Help with documentation
  4. Communicate requirement details with the development team

Suggested Implementation:
Add an “Advanced Parameters” or “Custom Parameters” section to the OpenRouter node, allowing users to add custom parameters in JSON format or as key-value pairs. These parameters would be passed directly to the OpenRouter API call.

This design would maintain the node’s ease of use while providing the flexibility that advanced users need.
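For illustration only, a “Custom Parameters” field could follow the shape n8n uses for JSON-typed node options. This is a hedged sketch, not the actual implementation; the property name and description strings are my own placeholders:

```typescript
// Hypothetical property definition for a "Custom Parameters" field,
// following the general shape of n8n's JSON-typed node properties.
const customParametersProperty = {
  displayName: "Custom Parameters (JSON)",
  name: "customParameters",
  type: "json",
  default: "{}",
  description:
    "Additional parameters passed through to the OpenRouter API, " +
    'e.g. {"reasoning": {"effort": "high"}}',
};
```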

Hi @Zhenyi-Wang, thank you for submitting this feature request.

It seems to go in a similar direction to this one: Custom parameters for LLM chat models

What would you think would be the most sustainable approach to enable this in the UI?

Would something like a “define LLM options as JSON” as alternative to the predefined options from the dropdown be what you’re looking for?

EDIT: or should there rather be a button to “Add custom option”, where you then specify a “key” and “value” to be sent along to the LLM, together with the options you select from the predefined list.

@Konsti Thanks for your prompt reply, really really appreciated! :smiley:

Honestly, I think the “define LLM options as JSON” approach (I would call it “Custom JSON” or similar) is the cleanest. Just add one field below the current options where users can input raw JSON (like {"reasoning": {"effort": "high"}}) or an expression. The node merges it with the predefined params before sending: simple, handles nested structures perfectly, and keeps the UI tidy. No extra buttons or clutter.
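The merge step described above could be as simple as a recursive object merge where the custom JSON wins on conflicts, so nested objects like `reasoning` combine instead of being overwritten. A sketch (the function name and sample values are mine, not from any PR):

```typescript
// Recursively merge custom params over the predefined options so that
// nested objects (e.g. `reasoning`) combine rather than overwrite.
type Params = Record<string, unknown>;

function mergeParams(base: Params, custom: Params): Params {
  const out: Params = { ...base };
  for (const [key, value] of Object.entries(custom)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      out[key] = mergeParams(existing as Params, value as Params);
    } else {
      out[key] = value; // custom value takes precedence on conflicts
    }
  }
  return out;
}

const predefined = { temperature: 0.7, reasoning: { exclude: false } };
const custom = JSON.parse('{"reasoning": {"effort": "high"}}');
const merged = mergeParams(predefined, custom);
// merged: { temperature: 0.7, reasoning: { exclude: false, effort: "high" } }
```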


Thanks for your feedback. Will be taking this back to the product teams this week to try to get it prioritized.

Thank you for escalating this to the product team. This would be a valuable improvement for accessing advanced API features like nested parameters. Appreciate your support! :folded_hands:


Following up:

We’ve started implementing a solution as part of our internal hackathon, but have hit some challenges with the langchain APIs, which results in this taking a bit longer than expected.

Here’s the open pull request: feat(Openrouter Node): Support setting chat model options as JSON by konstantintieber · Pull Request #21780 · n8n-io/n8n · GitHub

I just added a few more commits on that PR. Would you be up to test it locally and report back if setting the JSON options in the OpenRouter node is working as expected?

Wow, that’s great! Does it mean I need to clone that branch and run it locally? Sorry for the basic question, I’m new to n8n… If there are any docs on this, that would be a great help.