Hi, I searched the forum and didn't find an answer on this topic.
I work on an offline network, and I want to connect to a language model through an HTTP gateway that I created, so that I can communicate with the model.
Is this possible? At the moment I only see the built-in model options, and they all require an API key (and again, I'm on an offline network).
Hey @naor117 - if your model adheres to the OpenAI API standard, you could just use the OpenAI model node and change the base_url, model, and API key. Let me know if that works for you.
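For reference, here is a sketch of what that OpenAI-style configuration amounts to on the wire. The base URL and model name below are placeholders for whatever your own gateway uses, not real values:

```python
import json

# Hypothetical base URL for your offline gateway; "/v1" is the OpenAI
# convention that clients append the endpoint paths to.
BASE_URL = "http://my-gateway.local:8000/v1"

# This is the JSON body an OpenAI-compatible client POSTs to
# BASE_URL + "/chat/completions". "my-local-model" is a placeholder name.
payload = {
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
# The client also sends an "Authorization: Bearer <api key>" header; if your
# gateway ignores auth, any non-empty placeholder key should satisfy the client.
```

If your gateway can accept this request shape and answer in the matching response shape, the OpenAI node should be able to talk to it.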
Hi jksr, thank you very much for the quick response.
I don't have an api_key, because I'm working with a model that I host myself and expose over HTTP, and I just want to access it.
When I enter a fake api_key such as XXXXXXXXXXXXXX and the HTTP URL of my model (which responds fine to HTTP triggers), I get "Couldn't connect with these settings".
Hi, yes - my model sits on a server I built, and when I create an HTTP Request node I'm able to contact it and get a response.
I would like to connect the model to the LLM Model or AI Agent node.
Regarding the api_key: there's no need for one, because the model sits on my own server and only I access it.
Is it possible to link the model to the AI Agent or LLM Model node?
Please help me understand.
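One way to make a self-hosted model linkable is to put a thin OpenAI-compatible shim in front of it. The sketch below is an assumption about what a minimal shim could look like, using only Python's standard library: `call_my_model` is a placeholder for however you invoke your real model, and it exposes `/v1/models` (many OpenAI-style clients verify a connection by listing models first, which may be why a bare model URL fails the connection test) plus `/v1/chat/completions`:

```python
# Minimal sketch of an OpenAI-compatible shim. Assumptions: call_my_model()
# stands in for your real HTTP-triggered model, and the two paths below
# follow the OpenAI wire format.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def call_my_model(prompt: str) -> str:
    # Placeholder: forward the prompt to your own model here.
    return "echo: " + prompt


class OpenAIShim(BaseHTTPRequestHandler):
    def _send_json(self, obj):
        body = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/v1/models":
            # Clients often list models as a connectivity/credential check.
            self._send_json({
                "object": "list",
                "data": [{"id": "my-local-model", "object": "model"}],
            })
        else:
            self.send_error(404)

    def do_POST(self):
        if self.path == "/v1/chat/completions":
            length = int(self.headers.get("Content-Length", 0))
            req = json.loads(self.rfile.read(length))
            prompt = req["messages"][-1]["content"]
            self._send_json({
                "id": "chatcmpl-local",
                "object": "chat.completion",
                "model": req.get("model", "my-local-model"),
                "choices": [{
                    "index": 0,
                    "message": {"role": "assistant",
                                "content": call_my_model(prompt)},
                    "finish_reason": "stop",
                }],
            })
        else:
            self.send_error(404)


def run(host="0.0.0.0", port=8000):
    # Call run() to serve the shim; blocks until interrupted.
    HTTPServer((host, port), OpenAIShim).serve_forever()
```

With something like this in place, you would point the OpenAI node's base_url at `http://<your-server>:8000/v1` and supply any placeholder key. This is a sketch of the idea, not a drop-in for your setup.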
Your LLM works by sending an API call and getting a response? Why do you need an AI node?
Maybe it's easier to create a sub-workflow with an input parameter, a Set node for the prompt (this could also be a parameter), and an output webhook to get the reply?
Yes, I made an HTTP Request node, but I suppose there are advantages to using a Chat Model node, since I assume the communication there is more continuous than a single HTTP call.
I think the communication is the same. The AI node only makes it easier to configure things or add tools, API endpoints, models, etc. I'm using a custom node for Google's experimental models, which lack some customization options for the request body, and it works fine.
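To illustrate the point that the communication is the same: a chat node's "continuous" conversation is still a series of independent HTTP calls, and the continuity is simulated client-side by resending the whole message history on every turn. A minimal sketch (the model name is a placeholder; the roles follow the OpenAI convention, and nothing here is node-specific):

```python
# Sketch: each chat turn appends to a local history and re-sends the whole
# list in a fresh, stateless request body.
history = [{"role": "system", "content": "You are helpful."}]


def next_request(user_text, assistant_reply=None):
    # Record the previous assistant answer (if any), then the new user turn,
    # and build the full request body for the next stateless call.
    if assistant_reply is not None:
        history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": user_text})
    return {"model": "my-local-model", "messages": list(history)}


r1 = next_request("Hello")                               # system + user
r2 = next_request("And again?", assistant_reply="Hi!")   # full 4-message history
```

So whether you use an HTTP Request node or a Chat Model node, the server sees one self-contained request per turn; the chat node just manages this history bookkeeping for you.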