HTTP Request → OpenAI Chat Completions returns Bad request – please check your parameters despite valid messages array

Hi all :waving_hand:

I’m stuck on an HTTP Request node calling the OpenAI Chat Completions API, and after a lot of debugging I’m not sure what I’m missing. Hoping someone can sanity-check this.

What I’m trying to do

I’m building a workflow that:

  • Pulls SERP data (DataForSEO)
  • Formats it into a valid OpenAI messages array
  • Sends it to OpenAI via an HTTP Request node (/v1/chat/completions)
  • Parses the response downstream

Current behaviour

The OpenAI node fails with:

Bad request – please check your parameters
invalid model ID

This happens even though:

  • messages is a valid array
  • The payload is valid JSON
  • The model is set to gpt-4.1

Workflow details

  • n8n version: n8n Cloud v2.1.4
  • HTTP Request node: Method: POST
  • URL: https://api.openai.com/v1/chat/completions
  • Authentication: Header Auth (OpenAI API key)
  • Body Content Type: JSON
  • Specify Body: Using fields below

Body parameters

  • model: gpt-4.1 (string, not expression)
  • temperature: 0.2
  • messages: ={{ $json.messages }} (Expression mode)
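For reference, the final request body those fields should produce can be reconstructed like this (a minimal Python sketch; the messages here are placeholders standing in for the real output of the Format AI Payload node):

```python
import json

# Reconstruct the body the HTTP Request node should be sending.
# The messages below are placeholders, not the real upstream payload.
body = {
    "model": "gpt-4.1",  # plain string, not an expression
    "temperature": 0.2,
    "messages": [
        {"role": "system", "content": "You are an SEO content analyst. Return only JSON."},
        {"role": "user", "content": "placeholder user message"},
    ],
}

# Serialising confirms the payload is valid JSON before it ever hits the API.
print(json.dumps(body, indent=2))
```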

Upstream payload (confirmed output)

The previous node (“Format AI Payload”) outputs:

{
  "model": "gpt-4.1",
  "temperature": 0.2,
  "messages": [
    {
      "role": "system",
      "content": "You are an SEO content analyst. Return only JSON."
    },
    {
      "role": "user",
      "content": "{ \"h1\": \"football boots\", \"url\": \"https://www.abc.com\", \"results\": […] }"
    }
  ]
}

In the Expression editor, the messages field clearly evaluates to:

[Array:
  { role: "system", content: "…" },
  { role: "user", content: "…" }
]

So it is not being stringified as [object Object].

What I’ve already tried

  • Different models: gpt-4.1, gpt-4.1-mini
  • Hard-coding the model value (not using expressions)
  • Verifying messages is an array via the Expression editor
  • Ensuring messages.content is always a string
  • Rebuilding the HTTP Request node from scratch
  • Running the same payload successfully via curl outside n8n

Still get the same error in n8n.
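On the "content is always a string" point, the way I'm embedding the SERP JSON in the user message is by serialising it rather than hand-quoting it, so the inner quotes are escaped (a sketch; `serp_payload` is a stand-in for the real DataForSEO output):

```python
import json

# Hypothetical stand-in for the real DataForSEO results.
serp_payload = {"h1": "football boots", "url": "https://www.abc.com", "results": []}

# json.dumps guarantees the inner quotes are escaped, so the outer
# messages array stays valid JSON and content stays a plain string.
user_message = {"role": "user", "content": json.dumps(serp_payload)}

print(user_message["content"])
```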

Hey @hemanthbalaji, welcome to the n8n community!

This almost certainly isn’t a problem with your messages array or JSON. Everything you’re describing points to OpenAI rejecting the model, not n8n sending a bad payload.

n8n is basically just passing the request through. When you get an “invalid model ID”, that’s OpenAI saying “this model isn’t available for this endpoint or this API key”.

A couple of quick ways to confirm that:

First, swap the model to a known-good one.
Keep everything exactly the same and just change:

model: gpt-4o

If that works, it proves that your JSON is fine, your messages structure is fine, your auth and headers are fine, and the issue is specifically gpt-4.1, not your request.

Second, try the built-in OpenAI node.
Add an OpenAI node, pick a chat/text operation, and look at the model dropdown. If gpt-4.1 isn’t even listed there, that’s a strong signal your account doesn’t have access to it in that context yet.

One more important detail: endpoint matters.
OpenAI now splits things between Chat Completions and the newer Responses API. Some models only work on one of them. If gpt-4.1 is only enabled for Responses on your account, calling it via /v1/chat/completions will fail with “invalid model ID” even though the model technically exists.
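To make the split concrete, the two endpoints expect differently shaped bodies (a sketch; both dicts are illustrative, so check the current API reference for what your account supports):

```python
# Chat Completions (/v1/chat/completions) takes a `messages` array...
chat_completions_body = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "hello"}],
}

# ...while the Responses API (/v1/responses) takes `input` instead.
responses_body = {
    "model": "gpt-4.1",
    "input": [{"role": "user", "content": "hello"}],
}

# Same model ID, different endpoint contract: a model enabled for only
# one endpoint can be rejected on the other even with a valid payload.
```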

For now, I'd suggest sticking with gpt-4o, since it's reliably available on the Chat Completions endpoint and will unblock your workflow.