Error using GLM in the AI Agent node in n8n

Hello!

I am trying to use GLM in the AI Agent node, but I get the error “The resource you are requesting could not be found”.

I followed the instructions from “GLM n8n Workflow - Overview” in the Z.AI developer documentation.

What am I doing wrong?

Hi @Scorpic

The error “The resource you are requesting could not be found” usually points to an endpoint/model/resource issue (wrong URL, non-existent model, wrong region, etc.) — we’ve seen this happen with Vertex and Azure before.

The n8n docs don’t mention GLM/Z.AI specifically, so I can’t say for sure if their API is fully compatible with what the “OpenAI Chat Model” node expects.

If GLM’s API is truly OpenAI-compatible, you could try:

1. Create OpenAI credentials and change the Base URL to the GLM endpoint.
2. Disable “Use Responses API” if the provider only supports /v1/chat/completions.
3. Double-check that the model name matches exactly what’s in GLM’s documentation.
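To sanity-check the endpoint/model combination outside n8n first, here is a minimal sketch of the request the OpenAI Chat Model node would send. The base URL and model name below are placeholders, not GLM’s actual values; substitute the ones from Z.AI’s documentation:

```python
# Build an OpenAI-style chat-completions request against a custom base URL.
# BASE_URL and MODEL are placeholders -- use the values from GLM's docs.
BASE_URL = "https://example-glm-endpoint/v1"  # assumption: OpenAI-compatible base URL
MODEL = "glm-4"                               # assumption: exact model name from the docs

def build_chat_request(api_key: str, prompt: str) -> dict:
    """Return the URL, headers, and JSON body an OpenAI-compatible client sends."""
    return {
        # The client appends /chat/completions to the configured Base URL;
        # a wrong base URL here is what produces the "resource not found" error.
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": MODEL,  # must match the provider's model name exactly
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("sk-test", "Hello")
print(req["url"])
```

If sending this exact request with curl or Python also returns a 404, the problem is the URL or model name, not n8n.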

If that still doesn’t work:

The officially recommended approach is to use the HTTP Request node with GLM’s API, following the general pattern for “when the operation isn’t supported by the node.”
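As a sketch of what that HTTP Request node call looks like in code, assuming a plain Bearer-token POST (the endpoint URL and model name are placeholders; GLM’s real values are in Z.AI’s docs):

```python
import json
import urllib.request

def call_chat_api(url: str, api_key: str, payload: dict, timeout: float = 30.0) -> dict:
    """POST a chat-completions payload the way n8n's HTTP Request node would."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # An HTTP 404 here reproduces "The resource you are requesting could not be found".
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

# Example payload (model name is a placeholder -- use the one from GLM's docs):
payload = {
    "model": "glm-4",
    "messages": [{"role": "user", "content": "Hello"}],
}
```

In the HTTP Request node this maps to: Method = POST, URL = the GLM endpoint, an Authorization header with your API key, and the payload as the JSON body.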

Hope that helps!


Hi @Scorpic
turn off these options:


Thanks!

I am using the official GLM instructions for n8n: “n8n Workflow - Overview” in the Z.AI developer documentation.

The credentials are correct, and no firewall is blocking the connection.

After turning off “Use Responses API”, I get a Timeout error instead.

I still have problems using it.

Thanks to All!

The problem was a country-level IP ban. I used a proxy and everything works now.
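For anyone hitting the same geo-block: when testing outside n8n, the request can be routed through a proxy, e.g. with urllib (the proxy address below is a placeholder):

```python
import urllib.request

# Placeholder proxy address -- replace with your own proxy.
proxy = urllib.request.ProxyHandler({"https": "http://proxy.example.com:3128"})
opener = urllib.request.build_opener(proxy)
# Requests made via opener.open(...) now go through the proxy,
# working around a country-level IP block on the API endpoint.
```

For n8n itself, depending on your setup, the equivalent is usually pointing the n8n process at the proxy via the HTTP_PROXY/HTTPS_PROXY environment variables.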
