Azure OpenAI Chat Model keeps returning "InvalidURL"

Describe the problem/error/question

I have been trying to get the Azure OpenAI Chat Model working as part of my flow, but every permutation I try returns "Invalid URL" when connecting.

I suspect it's a bug specific to the Azure OpenAI Chat Model node. I'm pretty sure I've tried everything and would like to be proven wrong :slight_smile:

Tests:

  • I successfully connected to my Azure OpenAI model via the n8n HTTP Request node, using the same configuration
  • I successfully connected to my Azure OpenAI model via a curl test in the terminal, using the same configuration
  • I successfully configured and connected the Google Gemini chat model without any problem
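
For anyone reproducing the curl/HTTP Request test, here is a minimal sketch of how the target URL is assembled. The resource name, deployment name, and API version below are placeholders, not values from this thread; substitute your own:

```python
from urllib.parse import urlparse, urlencode

# Hypothetical values -- replace with your own Azure OpenAI settings.
RESOURCE = "my-resource"            # the part before .openai.azure.com
DEPLOYMENT = "gpt-4o"               # your *deployment* name, not the base model name
API_VERSION = "2024-05-01-preview"

def build_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Assemble the Azure OpenAI chat-completions URL that a curl test would hit."""
    base = f"https://{resource}.openai.azure.com"
    path = f"/openai/deployments/{deployment}/chat/completions"
    query = urlencode({"api-version": api_version})
    return f"{base}{path}?{query}"

url = build_chat_url(RESOURCE, DEPLOYMENT, API_VERSION)
print(url)

# Sanity-check that the result is a well-formed absolute URL before sending anything.
parsed = urlparse(url)
assert parsed.scheme == "https" and parsed.hostname, "Invalid URL"
```

If this URL works with curl (plus an `api-key` header and a JSON body) but the node still fails, the node is building a different URL from the same credential fields.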


What is the error message (if any)?

{
  "errorMessage": "Invalid URL",
  "errorDetails": {
    "rawErrorMessage": [
      "Invalid URL"
    ],
    "httpCode": "ERR_INVALID_URL"
  },
  "n8nDetails": {
    "nodeName": "Azure OpenAI Chat Model",
    "nodeType": "@n8n/n8n-nodes-langchain.lmChatAzureOpenAi",
    "nodeVersion": 1,
    "itemIndex": 0,
    "runIndex": 0,
    "time": "3/3/2025, 3:28:19 PM",
    "n8nVersion": "1.80.5 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: Invalid URL",
      "    at ExecuteSingleContext.httpRequestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/utils/request-helper-functions.js:946:15)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at ExecuteSingleContext.httpRequestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/utils/request-helper-functions.js:1143:20)",
      "    at RoutingNode.rawRoutingRequest (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/routing-node.js:319:29)",
      "    at RoutingNode.makeRequest (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/routing-node.js:406:28)",
      "    at async Promise.allSettled (index 0)",
      "    at RoutingNode.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/routing-node.js:139:35)",
      "    at ExecuteContext.versionedNodeType.execute (/usr/local/lib/node_modules/n8n/dist/node-types.js:55:30)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:627:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:878:51"
    ]
  }
}

Please share your workflow

{
  "nodes": [
    {
      "parameters": {
        "model": "gpt-4o",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatAzureOpenAi",
      "typeVersion": 1,
      "position": [
        440,
        20
      ],
      "id": "c3048bc5-19c8-428a-964c-1a9e0a0dc9dd",
      "name": "Azure OpenAI Chat Model",
      "credentials": {
        "azureOpenAiApi": {
          "id": "D7x1yIbh9UYuoOUN",
          "name": "Azure Open AI account"
        }
      }
    }
  ],
  "connections": {
    "Azure OpenAI Chat Model": {
      "ai_languageModel": [

      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "b7d6fb9c5db9476d85e6f5c4fe0a0dfe8ed7c9319469d71a9c0b6be722f7293d"
  }
}

Share the output returned by the last node

Information on your n8n setup

Latest n8n, installed this weekend on a Synology NAS. Everything else is working within the n8n environment; just not this model.


Same issue. I have no solution, but the URLs work in other apps (like LibreChat, and via HTTP) but not here. Hoping a solution can be found. I've just started with n8n, so I can't say whether it worked before.

Actually, I got it working. This isn't how I assume it's intended, but the "Resource Name" isn't necessary since it's already part of the endpoint. You can't leave it blank, so I entered a single space, and it now works.
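
A guess at why a blank Resource Name fails while a lone space slips through: if the node interpolates the field into a hostname template, an empty value leaves a leading dot, which URL validation rejects, while a space still fills the label. The template string below is an assumption for illustration, not the node's actual code:

```python
from urllib.parse import urlparse

def endpoint_host(resource: str) -> str:
    # Guess at how the node might interpolate the Resource Name field.
    return f"https://{resource}.openai.azure.com"

def is_valid(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # A leading dot means the resource label was empty -- not a valid DNS name.
    return not host.startswith(".")

print(is_valid(endpoint_host("")))        # blank resource -> False
print(is_valid(endpoint_host(" ")))       # a space fills the label, passing a naive check
print(is_valid(endpoint_host("my-res")))  # normal case -> True
```

If that guess is right, the space only defeats the validation; the request then succeeds because the credential's endpoint URL, not the resource-derived host, is what actually gets called.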

Thanks so much for responding. I tried your solution with the space, but unfortunately it's still not working for me. Looking at the screenshot, my config is the same as yours, so it's still a mystery.

What did you put as your model name in the flow widget?

Change the API Version to 2024-05-01-preview and make sure your "gpt-4o" is deployed.

I had gpt-4o-mini for mine; that's what is currently deployed in AI Foundry for my resource.

Thanks for the follow-up. I tried changing the API version to the one you suggested, along with several others, and tried the resource name with and without the space. Pretty much all permutations give the same error, so I'm fairly sure there is some obscure bug or edge case at play, and unfortunately the error messages are not helpful.

Hopefully someone else will chime in with an idea.

Finally, the API started connecting with the latest release. Not sure exactly what happened, but at least things are running now.


I have the same issue. It looks like it works for OpenAI models but not for other deployments in Azure AI Foundry. Connecting from custom JS code works fine, but in n8n, I get the error: "Unknown model: gpt-3.5-turbo." Maybe the issue is that Azure AI Foundry is not yet implemented, and only Azure OpenAI Services are supported?

For example, I tried with mistral-nemo and got an error. Then I tried the same configuration with gpt-4o-mini, and it worked.

n8n Version 1.81.4 (Self Hosted)

Tried with and without resource name.


Screenshot from Azure AI Foundry.
