The idea is:
Add custom parameters functionality to the OpenRouter model node (and potentially other AI model nodes), allowing users to manually add and configure API-supported parameters beyond the limited set currently available in the node interface.
My use case:
I need to use advanced features of the OpenRouter API, particularly the reasoning token parameters (such as `reasoning.effort`, `reasoning.max_tokens`, and `reasoning.exclude`), which are well documented in OpenRouter's official Reasoning Tokens documentation but cannot be configured in n8n's OpenRouter node. Currently, I can only access these features through the HTTP Request node with direct API calls, but that means giving up the conveniences of n8n's AI nodes: message management, streaming responses, error handling, and so on.
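To illustrate the workaround, this is roughly the request body one has to build by hand in the HTTP Request node. A minimal sketch: the `reasoning` fields follow OpenRouter's documented schema, while the model slug and message content are just placeholder examples.

```typescript
// Sketch of the body for POST https://openrouter.ai/api/v1/chat/completions
// sent via the HTTP Request node. The `reasoning` object is the part that the
// OpenRouter node's current UI cannot express.
const body = {
  model: "openai/o3", // example model slug (assumption)
  messages: [{ role: "user", content: "Explain quantum tunnelling briefly." }],
  reasoning: {
    effort: "high", // OpenAI-style models; Anthropic-style models take `max_tokens` instead
    exclude: false, // set true to use reasoning internally without returning it
  },
};
```

Having to hand-assemble this body is exactly the trade-off described above: full parameter access, but none of the AI node's conveniences.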
I think it would be beneficial to add this because:
- API Feature Completeness: The OpenRouter API supports many advanced parameters, but the n8n node exposes only a small subset, preventing users from fully utilizing the API's capabilities
- Rapid Adaptation to New Features: AI models and API providers release new features frequently; waiting for n8n to add them one by one creates a functionality lag
- Multi-Provider Compatibility: Different AI service providers have different parameter specifications, and a custom parameter option could accommodate this diversity better
- User Experience: Users shouldn't have to choose between "the feature-complete HTTP Request node" and "a convenient but functionally limited dedicated node"
- Community Contribution: Advanced users could share their custom parameter configurations to help other users
Any resources to support this?
- OpenRouter official documentation on reasoning tokens: "Reasoning Tokens | Enhanced AI Model Reasoning with OpenRouter"
- Related community discussions:
- “Enable Reasoning Parameters Across All Compatible Models in AI Agent Node” (40 votes)
- “Missing Reasoning Effort Parameters in Most Compatible Models”
- “Use OpenRouter Models Parameter” (discussing HTTP Request as workaround)
- Examples of models with partial reasoning parameter support:
- Anthropic Claude 4 Sonnet → Enable thinking > Thinking Budget (tokens)
- OpenAI o3 → Reasoning Effort: low / medium / high
- OpenAI o4-mini → Reasoning Effort: low / medium / high
Are you willing to work on this?
While I don’t have experience developing n8n nodes, I’m willing to:
- Test any related feature implementations
- Provide detailed use cases and testing scenarios
- Help with documentation
- Communicate requirement details with the development team
Suggested Implementation:
Add an “Advanced Parameters” or “Custom Parameters” section to the OpenRouter node that lets users supply additional parameters as JSON or as key-value pairs. These parameters would be merged into the request body and passed through to the OpenRouter API.
This design would keep the node easy to use while giving advanced users the flexibility they need.
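As a rough sketch of what "passed through to the API" could mean internally (a hypothetical helper, not n8n's actual code): recursively merge the user-supplied JSON into the request body the node already builds, so nested objects like `reasoning` can be added without clobbering existing keys.

```typescript
// Hypothetical helper: merge user-supplied custom parameters into the request
// body the node already constructs. Nested plain objects are merged
// recursively; on conflicts, the user-supplied value wins.
type Params = Record<string, unknown>;

function mergeCustomParameters(body: Params, custom: Params): Params {
  const result: Params = { ...body };
  for (const [key, value] of Object.entries(custom)) {
    const existing = result[key];
    const bothObjects =
      value !== null && typeof value === "object" && !Array.isArray(value) &&
      existing !== null && typeof existing === "object" && !Array.isArray(existing);
    result[key] = bothObjects
      ? mergeCustomParameters(existing as Params, value as Params)
      : value;
  }
  return result;
}

// Example: the node's built-in body plus user-defined reasoning parameters.
const merged = mergeCustomParameters(
  { model: "openai/o3", temperature: 0.7, messages: [] },
  { reasoning: { effort: "high", exclude: false } },
);
```

A deep merge (rather than a shallow spread) matters here because a shallow merge would let a custom `reasoning` object silently overwrite any reasoning defaults the node might set in the future.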
