Custom parameters for LLM chat models

The idea is:

It should be possible to set arbitrary parameters on chat models, for example reasoning effort levels, the new verbosity parameter for GPT-5, etc.

Since it’s difficult to keep up with every possible parameter for each model, letting builders pass custom parameters as JSON would solve the issue: they could configure the model however they like.
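To make the idea concrete, here is a minimal sketch of how a node might merge a builder-supplied JSON string into its default request options. The function name `buildModelOptions` and the `reasoning_effort`/`verbosity` field names are assumptions for illustration (the latter follow OpenAI’s GPT-5 parameter naming), not n8n’s actual API:

```javascript
// Sketch: merge a free-form JSON options string into the default
// chat-model payload. User-supplied keys override the defaults,
// so builders stay in control of every parameter.
function buildModelOptions(defaults, customOptionsJson) {
  let custom = {};
  try {
    custom = JSON.parse(customOptionsJson);
  } catch (err) {
    throw new Error(`Invalid JSON in custom options: ${err.message}`);
  }
  return { ...defaults, ...custom };
}

const options = buildModelOptions(
  { model: 'gpt-5', temperature: 1 },
  '{"reasoning_effort": "minimal", "verbosity": "low"}'
);
console.log(options);
// { model: 'gpt-5', temperature: 1, reasoning_effort: 'minimal', verbosity: 'low' }
```

The merge-over-defaults approach means the node never has to know about new parameters ahead of time; anything the provider accepts can be passed through.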

My use case:

I’d like to set the GPT-5 reasoning effort to minimal, change the verbosity, or access all the parameters a provider like OpenRouter exposes (e.g. pinning a certain upstream provider for the selected model)
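For the OpenRouter case, the request body below sketches what such custom options could carry. The `provider` routing object and `reasoning.effort` follow OpenRouter’s documented request schema as I understand it, but the specific model and provider names here are placeholders:

```javascript
// Hypothetical OpenRouter request body showing provider routing and
// reasoning options that a custom-JSON field could pass through.
const body = {
  model: 'openai/gpt-5',
  messages: [{ role: 'user', content: 'Hello' }],
  provider: {
    order: ['openai'],      // prefer a specific upstream provider
    allow_fallbacks: false, // fail instead of silently switching providers
  },
  reasoning: { effort: 'minimal' },
};
console.log(JSON.stringify(body, null, 2));
```

None of these fields need first-class UI support in the node; a pass-through JSON field covers them all.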

I think it would be beneficial to add this because:

We’d have more customization options for our AI workflows

Any resources to support this?

Are you willing to work on this?

Hi @Guillaume_Duvernay, I like the idea.

We’ve started implementing a solution as part of our internal hackathon, but have hit some challenges with the LangChain APIs, so this is taking a bit longer than expected.

Here’s the open pull request: feat(Openrouter Node): Support setting chat model options as JSON by konstantintieber · Pull Request #21780 · n8n-io/n8n · GitHub

@Guillaume_Duvernay I just added a few more commits on that PR. Would you be up to test it locally and report back if setting the JSON options in the OpenRouter node is working as expected?
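For anyone wanting to try the PR locally, the steps below are one possible setup, assuming git, Node.js, and pnpm are installed (n8n’s monorepo uses pnpm; check the repo’s CONTRIBUTING guide for the current build commands):

```shell
# Clone the repo and check out the PR branch by its number (21780).
git clone https://github.com/n8n-io/n8n.git
cd n8n
git fetch origin pull/21780/head:pr-21780
git checkout pr-21780

# Install dependencies, build, and start a local instance.
pnpm install
pnpm build
pnpm start
# Then open the editor UI, add the OpenRouter chat model node,
# and try setting its options as JSON.
```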