OpenAI (ChatGPT) node returns generic output in n8n, but the same prompt works perfectly in the ChatGPT UI

Hi everyone,

I’m building a cold-calling research workflow in n8n and I’m running into a confusing issue with the OpenAI (ChatGPT) node that I can’t seem to resolve.

Workflow (very simple)

  • Manual Trigger

  • Google Sheets (1 row at a time)

  • OpenAI Chat Model node

No extra parsing or transformations.


The problem

I’m using a prompt that works perfectly when I run it directly in the ChatGPT UI (chat.openai.com), but when I use the exact same prompt inside the n8n OpenAI node, the output becomes very generic and full of "N/A" values.


Example

Prompt intent

Research a real business and return structured JSON with:

  • Design project / business name

  • Focus area

  • Social media platform, activity level, and followers


Result in ChatGPT UI (correct, detailed, fact-checked)

{
  "Design Project": "7 Plates Cafe - Chicago, IL",
  "Focus Area": "Hospitality & Commercial Interior Design",
  "Social Media Presence": {
    "Platform": "Instagram",
    "Activity Level": "High",
    "Followers": "Approx. 6.9K+"
  }
}

This output is accurate and matches real-world data.


Result in n8n OpenAI node (same prompt)

{
  "Design Project": "N/A",
  "Focus Area": "Interior Design",
  "Social Media Presence": {
    "Platform": "Twitter",
    "Activity Level": "Low",
    "Followers": "N/A"
  }
}

This happens consistently across different businesses.


What I’ve already checked

  • Tried different models (GPT-4 / GPT-4o)

  • Adjusted temperature

  • Tested JSON vs text output

  • Confirmed prompt content is identical

  • No output parsers or extra nodes involved
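One more check worth doing: send the exact same prompt to the Chat Completions API directly, outside n8n. If the raw API also returns "N/A" values, the gap is UI vs. API, not anything n8n does. A minimal sketch (Node 18+ with built-in fetch, assuming an `OPENAI_API_KEY` environment variable; the prompt string is a placeholder):

```javascript
// Call the OpenAI Chat Completions API directly with the same prompt.
// If this output is also full of "N/A", n8n is not the cause.
const prompt = "Research 7 Plates Cafe in Chicago and return structured JSON ..."; // placeholder

function buildRequestBody(prompt) {
  return {
    model: "gpt-4o",
    temperature: 0,
    response_format: { type: "json_object" }, // request strict JSON back
    messages: [{ role: "user", content: prompt }],
  };
}

async function run() {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildRequestBody(prompt)),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

// Only fire the request when a key is actually configured.
if (process.env.OPENAI_API_KEY) run();
```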


Question

Is there a known difference in:

  • Inference behavior

  • Entity resolution

  • Or safety defaults

between the ChatGPT UI and the OpenAI API used by n8n that would cause the model to return conservative "N/A" placeholders unless explicitly instructed otherwise?

If so, is there a recommended way in n8n to:

  • Enable more confident inference, or

  • Match ChatGPT UI behavior more closely for research-style prompts?

Any guidance or examples would be greatly appreciated; it would really help me out in this situation. Thank you so much 🙏

Hi @Oliver_Beier!
From what I’ve seen, this isn’t an n8n issue or a model regression. The ChatGPT UI has built-in browsing and context, while the OpenAI API used by n8n does not. In the API, the model becomes conservative and returns N/A when it lacks explicit data. To get similar results, you need to fetch real business and social data before the OpenAI node and use the model only to structure or interpret that input, not to research it from scratch.
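As a sketch of that pattern: gather the data in earlier nodes (HTTP Request, a scraping service, etc.), then pass the fetched facts into the OpenAI node and ask the model only to format them. The helper below is hypothetical (not an n8n built-in), roughly what you might put in a Code node just before the OpenAI node; the field names are assumptions about what your fetch step returns:

```javascript
// Hypothetical helper: turn data fetched by earlier nodes into chat messages
// that ask the model only to structure the facts, never to research them.
function buildStructuringMessages(business) {
  const facts = [
    `Name: ${business.name}`,
    `City: ${business.city}`,
    `Description: ${business.description}`,
    `Instagram followers: ${business.instagramFollowers}`,
  ].join("\n");

  return [
    {
      role: "system",
      content:
        "You format pre-verified business data as JSON. " +
        'Use only the facts provided. If a fact is missing, output "N/A".',
    },
    {
      role: "user",
      content:
        `Facts:\n${facts}\n\n` +
        'Return JSON with keys "Design Project", "Focus Area", and "Social Media Presence".',
    },
  ];
}

// Example input, shaped like data from an earlier HTTP Request node
const messages = buildStructuringMessages({
  name: "7 Plates Cafe",
  city: "Chicago, IL",
  description: "Hospitality & commercial interior design project",
  instagramFollowers: "6.9K",
});
```

With this split, the model's job is deterministic formatting, so "N/A" only appears when your fetch step genuinely found nothing.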