Describe the problem/error/question
n8n is the GOAT! I just started yesterday.
Q: How can I use the internal (built-in) tool capabilities of OpenAI’s models, e.g. GPT-4.1?
Note: I’m not talking about connecting a tool to the agent node.
I’m talking about the tools that are activated by including them directly in the API request, i.e. via the `tools` property:
```javascript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.responses.create({
  model: "gpt-4.1",
  tools: [{ type: "web_search_preview" }],
  input: "What was a positive news story from today?",
});
```
This can be included easily in OpenAI’s API playground as well. Note the Web Search entry in the menu below:
Here’s n8n’s chat model window. I’ve looked through all the options, but none of them lets me include internal tools.
Why does it matter?
For most tasks, OpenAI’s simple web search tool is enough. There’s no need to connect to any third-party APIs, and it’s probably also cheaper and less complex.
Plus, latency is lower, as the tool runs internally on OpenAI’s side.
Am I searching in the wrong place, or is this not supported on purpose?
I’d appreciate any info and hints on how to make this work.
And if it’s not supported on purpose, I’d love to hear the design decision behind it – especially because the node is called OpenAI Chat Model, not Generic Chat Model.
Background & Community Demand
There’s already a thread by @abierbaum: Any way to use OpenAI Chat Model with default open AI tool: web_search_preview.
Unfortunately, this thread was auto-closed due to inactivity.
It would be great to have an expert/architect opinion on this. OpenAI (and other) models are getting better by the day. It would be a shame not to take full advantage of their native functionalities.
Cheers!
Robert
Information on your n8n setup
- n8n version: 1.100.1
- Database (default: SQLite): SQLite
- n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Arch Linux, kernel 6.15.2