Using Perplexity API with the AI Tools Agent

Describe the problem/error/question

I’m trying to access Perplexity’s API as a tool in the Tools Agent so I can run web search queries. However, I’m having trouble formatting the request body correctly, particularly the “messages” parameter. Perplexity’s API requires an array of message objects, and I’m struggling to build that array correctly within the n8n interface.
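For reference, Perplexity’s chat completions endpoint expects a JSON body shaped roughly like the sketch below (the model name is an example placeholder; check Perplexity’s docs for currently supported models):

```python
import json

# Sketch of the body shape Perplexity's /chat/completions endpoint expects.
# "messages" must be an array of {"role": ..., "content": ...} objects.
# The model name is an example; substitute one from Perplexity's docs.
body = {
    "model": "llama-3.1-sonar-small-128k-online",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "AI trends for business executives 2024"},
    ],
}

print(json.dumps(body, indent=2))
```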

What is the error message (if any)?

{
  "response": "There was an error: \"Request failed with status code 400\""
}

A 400 (Bad Request) suggests the request body does not match the format the Perplexity API expects.

Please share your workflow

Note: this is attached to an AI Tools Agent node with the standard chat input trigger. I am using Groq as the model provider, with llama3-tool-use-preview as the model.

Share the output returned by the last node

(shown in the AI agent logs)
Input:

{
  "query": {
    "placeholder": "AI trends for business executives 2024"
  }
}

Output:

{
  "response": "There was an error: \"Request failed with status code 400\""
}

Information on your n8n setup

  • n8n version: 1.58.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS

Additional context:

I’ve tried various approaches to format the “messages” parameter, including setting “Value Provided” to “By Model (required)” as well as using a JSON string representation of the array and attempting to use placeholders. However, I’m still unable to get the correct format that the Perplexity API expects. I’m looking for guidance on how to properly structure the request body, especially the “messages” array, within the constraints of the n8n HTTP Request Tool node interface.
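For anyone debugging the same 400, it can help to reproduce the full request outside n8n first. Below is a minimal sketch using Python’s standard library; the endpoint and header names follow Perplexity’s documented chat completions interface, while the model name and API key are placeholders:

```python
import json
import urllib.request

# Sketch of the complete request the HTTP Request Tool needs to reproduce.
# Model name and API key are placeholders, not real values.
req = urllib.request.Request(
    "https://api.perplexity.ai/chat/completions",
    data=json.dumps({
        "model": "llama-3.1-sonar-small-128k-online",
        "messages": [
            {"role": "user", "content": "AI trends for business executives 2024"},
        ],
    }).encode(),
    headers={
        "Authorization": "Bearer <PPLX_API_KEY>",  # placeholder key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # left commented out; requires a real API key
```

If this succeeds from the command line but the n8n tool still returns a 400, the problem is almost certainly in how the node serializes the body.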

Hi J,

Here’s how I got it up and running

Hope it helps!


Perfect, thanks Chrevor! That worked well.

The issue was that I was defining body parameters individually, when I should have used a single JSON input as you did; I think that was what was causing the errors. I kept my value as ‘fixed’, added {query} as the placeholder, and then defined the query in the definition below so that the LLM knows what gets inserted.
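Conceptually, the single-JSON-body approach amounts to one body template containing a {query} placeholder that the agent fills in at run time. The sketch below illustrates the substitution with a plain string replace (n8n performs this step internally; the model name is an example placeholder):

```python
import json

# Single JSON body template with a {query} placeholder, as used in the
# HTTP Request Tool. Field names follow Perplexity's chat completions
# format; the model name is an example.
body_template = """{
  "model": "llama-3.1-sonar-small-128k-online",
  "messages": [
    {"role": "system", "content": "Be precise and concise."},
    {"role": "user", "content": "{query}"}
  ]
}"""

# The agent-supplied value replaces the placeholder before the request
# is sent; illustrated here with a simple string replace.
body = json.loads(
    body_template.replace("{query}", "AI trends for business executives 2024")
)
print(body["messages"][1]["content"])
```

Because the whole body is one JSON string, the “messages” array keeps its required structure, which is what defining body parameters individually was breaking.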

Here’s my version in case anyone needs it in the future:

