Does the OpenAI Chat Model node have custom options?

Describe the problem/error/question

I use the OpenAI Chat Model node and want to use the qwen3-4b LLM, which supports the option:

extra_body={"enable_thinking": True}

How can I add this 'extra_body' option?

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.116.2
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

If you need more control over API parameters, use the HTTP Request node to directly call the OpenAI-compatible API, which would allow you to include custom parameters in the request body.

but the "HTTP Request" node can't connect to the "AI Agent" node.

It would be very good if the "AI Agent" node could connect to the "HTTP Request" node.

You can't, at least not in the way you tried.

You can use the HTTP Request node either as a standalone node or as a tool within your agent workflow to make custom API calls with parameters not supported by the built-in AI nodes.

Headers:

Content-Type: application/json

Authorization: Bearer YOUR_API_KEY (I recommend using Header Auth credentials instead of a hard-coded header).

Body type: JSON

{
  "model": "qwen3-4b",
  "messages": [
    {
      "role": "user",
      "content": "Your prompt here"
    }
  ],
  "extra_body": {
    "enable_thinking": true
  }
}
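If you want to sanity-check the request outside n8n first, here is a minimal Python sketch of the same call. The base URL and key are placeholders, and whether the endpoint actually honors a top-level "extra_body" field depends on your provider:

```python
import json
import urllib.request

# Placeholders -- substitute your provider's OpenAI-compatible endpoint
# and your real key before sending anything.
BASE_URL = "https://example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

# Same body as the HTTP Request node above.
payload = {
    "model": "qwen3-4b",
    "messages": [{"role": "user", "content": "Your prompt here"}],
    "extra_body": {"enable_thinking": True},
}

def build_request(url: str, key: str, body: dict) -> urllib.request.Request:
    """Build the POST request the HTTP Request node would send."""
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",
        },
        method="POST",
    )

req = build_request(BASE_URL, API_KEY, payload)
# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```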

I only have a qwen3 API and an API key; it's an OpenAI-compatible API,

but there is no qwen3 chat model node,

so I can only use the "OpenAI Chat Model" node,

but I want to add the option extra_body={"enable_thinking": True}.

How should I do that?

Unfortunately, there's no option to add this directly.
Hopefully a future release will add a way to include custom configurations.

However, if you're familiar with LangChain, you can add a LangChain Code node, replicate the AI Agent node's inputs/outputs, and include enable_thinking there. This approach requires coding and some LangChain knowledge, and it isn't very flexible in the end.