I have my own API for working with LLMs (Llama, Qwen). It is OpenAI-compatible: it exposes the same endpoints as OpenAI, such as chat completions.
When creating an AI Agent, I am given a limited number of Chat Models to choose from, and in theory I can replicate the API routes for any of them. The only question is whether n8n will let me do that.
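For context, this is roughly the kind of OpenAI-style request my API accepts, sketched in Python (the model name and payload are placeholders for my local setup):

```python
import requests

# Minimal sketch of an OpenAI-compatible chat completions call against
# my local server; "qwen" and the message are placeholder values.
resp = requests.post(
    "http://localhost:25187/api/v1/chat/completions",
    json={
        "model": "qwen",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```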
Right now I have selected OpenAI Chat Model as the Chat Model and set the Base URL to “https://localhost:25187/api/v1”, and I get the following error: Error in sub-node ‘OpenAI Chat Model’
request to https://localhost:25187/api/v1/chat/completions failed, reason: connect ECONNREFUSED ::1:25187
I’m running the local server in debug mode, so I can see clearly whether a request comes in or not. I have now changed the URL from https to http, but I still get the error:
Error in sub-node ‘OpenAI Chat Model’
request to http://localhost:15187/api/v1/chat/completions failed, reason: connect ECONNREFUSED ::1:15187
However, when I send the request from Thunder Client (a VS Code extension for making requests), it reaches the server.
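For what it’s worth, the ::1 in the error suggests the request is being sent to the IPv6 loopback address. Here is a minimal sketch (port taken from the first error above) of how I could check whether anything is listening on ::1 versus 127.0.0.1:

```python
import socket

# Quick check: does anything accept connections on the IPv4 and IPv6
# loopback addresses? 25187 is the port my local API listens on.
for family, addr in ((socket.AF_INET, "127.0.0.1"), (socket.AF_INET6, "::1")):
    s = socket.socket(family, socket.SOCK_STREAM)
    s.settimeout(1)
    result = s.connect_ex((addr, 25187))
    print(addr, "open" if result == 0 else f"refused (errno {result})")
    s.close()
```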
Okay, thank you. There was indeed a problem with accessing localhost. Instead of working around it, I decided to keep things simple and access the domain directly. To test it, I created an HTTP Request node and sent a request; it succeeded. Next, in the OpenAI Model node I created a credential and set the Base URL to the same domain as in the HTTP Request node, only without /chat/completions at the end, since I know the OpenAI Model node appends that automatically when sending a request. When running the OpenAI Model node, I got the following error:
Error in sub-node ‘OpenAI Model’ Connection error.
No further details, unfortunately. What am I doing wrong?
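For reference, this is how I understand the Base URL handling, sketched with the official openai Python client (the domain, key and model name are placeholders for my real setup): the client, like the OpenAI Model node, appends /chat/completions itself, so the Base URL stops at /api/v1.

```python
from openai import OpenAI

# Placeholder domain standing in for my real one. The client appends
# /chat/completions to base_url on its own, which is why I left it out.
client = OpenAI(
    base_url="https://my-llm.example.com/api/v1",
    api_key="placeholder-key",
)

resp = client.chat.completions.create(
    model="qwen",  # placeholder model id
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```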