I'm running a flow that pulls companies from a spreadsheet, runs an LLM query, then updates the row in the spreadsheet. The results from OpenAI GPT-5 seemed really off. I went to check in ChatGPT and was surprised: if I take the exact text I'm using in n8n and put it into ChatGPT, Gemini, or Grok, they all come back with about the same (and correct) answer. But going through the OpenAI API in n8n, it is neither consistent nor accurate.
Any hints or tips to get the API models to behave more like the user interface?
What is the error message (if any)?
Please share your workflow
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
Hey @Toby_Rush, hope all is well. Welcome to the community.
It is expected that the results of the ChatGPT web UI and an API call to the OpenAI model are different. ChatGPT includes an internal system prompt (instructions that guide tone, style, and safety) that is not exposed in the API. Another difference is that in ChatGPT, conversation history is managed automatically, with long-term memory and conversational context. ChatGPT also sometimes applies lightweight formatting or safety filters on top of the raw model response. And this is on top of the fact that ChatGPT may internally route requests to slightly updated model variants, giving it better context and therefore a better answer.
No amount of parameter tuning will make an API call produce the same results as chatting with ChatGPT. Even though they use the same model, ChatGPT is a standalone product, customized (I'd assume heavily) to produce the kind of results OpenAI considers good for an AI chat assistant.
Fair enough. If I want to enrich a dataset starting in a spreadsheet, I want to pass in a company name and URL and get back a set of information formatted as JSON, then update the spreadsheet. The model will need to do a fairly open-ended search of the company's website, blog, help docs, etc., as well as look more broadly at news articles.
Can I just use "Message a Model" with a prompt I craft from some static text, with the company name as the dynamic component? Or do I need something far more complex?
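For the "static text + dynamic company name" part, yes, that can stay simple. Here is a minimal Python sketch of the same idea (the prompt wording, JSON field names, and model name are my assumptions, nothing n8n-specific; the same prompt text would go into a Message a Model node):

```python
import json

# Hypothetical prompt template: the requested fields (summary, industry,
# employee_count) are placeholders -- adjust to whatever you need back.
PROMPT_TEMPLATE = """You are enriching a company dataset.
Company name: {name}
Company URL: {url}

Search the company's website, blog, help docs, and recent news, then
return ONLY a JSON object with these keys:
  "summary", "industry", "employee_count"
"""


def build_prompt(name: str, url: str) -> str:
    """Combine the static instructions with the dynamic company fields."""
    return PROMPT_TEMPLATE.format(name=name, url=url)


def parse_enrichment(raw: str) -> dict:
    """Parse the model's reply; raise ValueError if keys are missing."""
    data = json.loads(raw)
    missing = {"summary", "industry", "employee_count"} - data.keys()
    if missing:
        raise ValueError(f"model reply missing keys: {missing}")
    return data


def enrich_company(name: str, url: str) -> dict:
    """Sketch of the actual call -- assumes the `openai` package and an
    OPENAI_API_KEY in the environment; not executed here."""
    from openai import OpenAI  # imported inside so the sketch is optional

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder -- use whichever model you prefer
        messages=[{"role": "user", "content": build_prompt(name, url)}],
        # Ask for raw JSON back (the prompt must mention "JSON" for this).
        response_format={"type": "json_object"},
    )
    return parse_enrichment(resp.choices[0].message.content)
```

The point of `parse_enrichment` is that you validate the model's JSON before writing it back to the spreadsheet row, so a malformed reply fails loudly instead of corrupting your data.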
I believe the model call may need to be adjusted to allow the model to do web search (i.e., call a built-in web search tool; see this page). I believe this can currently only be done with a direct API call.
Been looking more into this. Does anyone know of a few examples where there is an input (chat or from a file) and you ask an LLM a question that requires a broad, relatively current search, with the results coming back as structured JSON?
Trying to figure out whether I need to implement the OpenAI web search tool or something else. I would have thought this was a more common use case.
Here is a very short example of using OpenAI web search:
Question: What is the weather in Halifax, Canada today?
Answer: Today (Thursday, August 21, 2025) in Halifax, NS: Mostly sunny right now around 71°F (22°C). Expect it to turn cloudy later with a high near 73°F (23°C) and a low around 55°F (13°C).
As you can see, the model had to access online resources to find both today's date and the current weather.
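The call behind an answer like that might look roughly like this, a sketch only, assuming the `openai` Python package, the Responses API, and its built-in web search tool (the model name is a placeholder; your account needs access to the tool):

```python
# Tool payload that turns on OpenAI's built-in web search.
WEB_SEARCH_TOOLS = [{"type": "web_search_preview"}]


def ask_with_web_search(question: str) -> str:
    """Ask a question the model should answer using live web results.
    Assumes OPENAI_API_KEY is set in the environment; not executed here."""
    from openai import OpenAI  # imported inside so the sketch is optional

    client = OpenAI()
    resp = client.responses.create(
        model="gpt-4o",  # placeholder -- any web-search-capable model
        tools=WEB_SEARCH_TOOLS,  # lets the model call web search as needed
        input=question,
    )
    # output_text is a convenience accessor for the final text answer.
    return resp.output_text
```

With the tool enabled, the model decides per request whether to search, which is what produces current results like today's date and weather.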