HTTP Request Vs. Tool Agent for Perplexity + ChatGPT

Describe the problem/error/question

I want to use Perplexity to run some research, then use ChatGPT to summarize and write content based on that research. There seem to be two ways to do this, and I'd like to understand which works better in terms of output quality and efficiency (latency & token usage):

  1. HTTP Request node to call Perplexity, then pass the extracted content to an OpenAI model for processing.
  2. Tool agent node using ChatGPT as the agent, with tool access to Perplexity search.
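For reference, option 1 boils down to two plain HTTP calls chained together. A minimal sketch of the request shapes involved — the endpoint URLs and the model names (`sonar`, `gpt-4o-mini`) are assumptions, so check the providers' current docs before relying on them:

```python
"""Approach 1 as raw HTTP: ask Perplexity, then hand its answer to OpenAI.
These helpers only build the payloads an HTTP Request node would send."""

def build_perplexity_request(question: str) -> dict:
    # Payload for Perplexity's chat-completions endpoint (assumed shape).
    return {
        "url": "https://api.perplexity.ai/chat/completions",
        "json": {
            "model": "sonar",
            "messages": [{"role": "user", "content": question}],
        },
    }

def build_openai_request(research: str) -> dict:
    # The research text from Perplexity becomes the user message here.
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "json": {
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system", "content": "Summarize the research below."},
                {"role": "user", "content": research},
            ],
        },
    }

def extract_content(response_body: dict) -> str:
    # Both APIs return the generated text at choices[0].message.content.
    return response_body["choices"][0]["message"]["content"]
```

In n8n itself this maps to: HTTP Request node → an expression like `{{ $json.choices[0].message.content }}` → OpenAI node.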

Thank you very much!

Information on your n8n setup

  • **n8n version:** 1.88.0
  • **Database (default: SQLite):** default
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS

Hi @xponential, welcome to the n8n :n8n: community :tada:

Latency and token usage depend on several factors.
Broadly speaking, workflows split into two categories:

  • AI Workflow: Low latency and low token usage, because the process is deterministic.
  • AI Agent: Higher latency and higher token usage, because the process is nondeterministic.

So in your case, you should try both approaches, measure the token consumption and response time of each, and then decide accordingly.
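Measuring is straightforward because both Perplexity and OpenAI responses include an OpenAI-style `usage` object with token counts. A minimal sketch of a comparison helper, assuming you collect the raw response bodies from one run of each approach:

```python
import time

def total_tokens(responses: list[dict]) -> int:
    # Sum the usage.total_tokens reported by every model call in one run
    # (OpenAI-style usage object; Perplexity responses use the same shape).
    return sum(r["usage"]["total_tokens"] for r in responses)

def timed(fn):
    # Wall-clock latency of one workflow run, alongside its result.
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start
```

Run each approach a few times, compare `total_tokens` and the timings, and pick the one that fits your quality/cost tradeoff.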

I watched a video just yesterday that explains this distinction very clearly —
highly recommend giving it a watch!
:link: AI Agents vs AI Workflows


If this answers your question, please mark the reply as the solution āœ…šŸ™šŸ»