LLM performing differently through web interface vs. API/HTTP request

Describe the problem/error/question

Is it possible that the output of an LLM (Claude or Perplexity) via its web interface (Perplexity) or desktop client (Claude on Mac) varies greatly from running the exact same prompt via an API call through an HTTP Request node?

I can see the output is not the same: the manual prompt outperforms the API call regardless of the model I use. I have paid versions of both Claude and Perplexity, along with Anthropic API credits.

Anyone else see this? Any ideas how to get API to perform better?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Running n8n via (n8n cloud):

Hi,
The web UI can add hidden context, system prompts, or built-in default settings that help guide the LLM to produce better responses.

Meanwhile, when you call the model via the API (for example, through an HTTP Request node), you’re hitting the raw model directly. Your request arrives without any system prompt, added context, or built-in guidance the way it does through the web interface.
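As a rough sketch of how to close that gap, you can supply your own system prompt in the HTTP Request node's JSON body. The example below builds a request body for Anthropic's Messages API; the model name, temperature, and system text are illustrative placeholders, not what the web UI actually uses behind the scenes:

```python
import json

def build_payload(user_prompt: str) -> dict:
    """Build a Messages API request body that includes an explicit
    system prompt -- the piece the web UI adds for you, but a raw
    HTTP request does not."""
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder: any model you have access to
        "max_tokens": 1024,
        "temperature": 0.7,  # placeholder: tune to taste
        "system": (
            "You are a careful, thorough assistant. Think step by step "
            "and structure long answers clearly."  # placeholder guidance
        ),
        "messages": [
            {"role": "user", "content": user_prompt}
        ],
    }

payload = build_payload("Summarise the attached report.")

# Send this as the JSON body of a POST to https://api.anthropic.com/v1/messages
# with headers: x-api-key, anthropic-version: 2023-06-01, content-type: application/json.
print(json.dumps(payload, indent=2))
```

In n8n, you would paste the equivalent JSON into the HTTP Request node's body and set the headers there; experimenting with the `system` text and `temperature` is usually what narrows the quality gap.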

I suspected it was something like that. Thank you.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.