I’m looking for a way to connect an HTTP Request node to the Basic LLM Chain as the chat model, but it does not allow it: the connection between the Basic LLM Chain and the HTTP Request node cannot be linked.
I need to use an HTTP Request node because I have a local LLM that is called with a client_id, a client_secret, and a local URL. The HTTP request works perfectly fine, but I cannot link it into the Basic LLM Chain to set up a basic chat app and later connect additional information (RAG-like) to the existing configuration.
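For context, my working HTTP Request node does roughly the equivalent of this (the URL, header names, and payload shape are simplified placeholders for my setup):

```typescript
// Rough equivalent of my working HTTP Request node call.
// URL, header names, and payload shape are simplified placeholders.
const response = await fetch("http://localhost:8080/llm/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    client_id: "<CLIENT_ID>",         // credentials my gateway expects
    client_secret: "<CLIENT_SECRET>",
  },
  body: JSON.stringify({ prompt: "Hello" }),
});
const data = await response.json();
```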
Any workaround? If not, how could I take the OpenAI Chat Model node code as an example and modify it to my needs? Is the code of the OpenAI node open source?
**Information on your n8n setup**
**n8n version:** latest
**Database (default: SQLite):** Postgres
**n8n EXECUTIONS_PROCESS setting (default: own, main):** own
**Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
Hi Aya, thanks for answering. The main problem is that I cannot attach an HTTP Request node to a Basic LLM Chain; I can only attach compatible model nodes. You can see in the screenshot: the red connection line means it cannot be linked, while the green one, of course, can.
I need to use an HTTP request as the model because I’m running the model through a custom gateway (MuleSoft). How can I do this without modifying the code? Or how can I change the Basic LLM Chain to accept any type of connection?
Yes @aya, that’s exactly what I need to do. The Basic LLM Chain, imho, should not limit the model type connection; it should allow both a model node OR an HTTP request. Is that possible?
That’s not quite possible, at least not in an easy way. The model connection expects a BaseChatModel, which does more than call HTTP endpoints; it also handles retry logic, response parsing, message chunking, and so on. Theoretically, you could implement support for a new model provider using the LangChain Code node, but you would need to re-implement all of those methods. You could take a look at LangChain’s Bedrock Chat Model code as an example of how this could be implemented; a rough sketch of the idea is below.
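To illustrate what that involves, here is a minimal, untested sketch of a custom chat model wrapping an HTTP gateway, following the LangChain JS custom chat model pattern. The endpoint URL, credential header names, and response field (`data.reply`) are assumptions about the gateway, not anything MuleSoft- or n8n-specific, and a production implementation would still need streaming, retries, and token accounting:

```typescript
import {
  SimpleChatModel,
  type BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { BaseMessage } from "@langchain/core/messages";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

interface GatewayFields extends BaseChatModelParams {
  url: string;          // hypothetical gateway endpoint
  clientId: string;     // hypothetical credential header value
  clientSecret: string; // hypothetical credential header value
}

// Minimal custom chat model: implements only the bare _call path.
// Retries, streaming, and chunking would still have to be added.
class GatewayChatModel extends SimpleChatModel {
  private url: string;
  private clientId: string;
  private clientSecret: string;

  constructor(fields: GatewayFields) {
    super(fields);
    this.url = fields.url;
    this.clientId = fields.clientId;
    this.clientSecret = fields.clientSecret;
  }

  _llmType(): string {
    return "custom-http-gateway";
  }

  async _call(
    messages: BaseMessage[],
    _options: this["ParsedCallOptions"],
    _runManager?: CallbackManagerForLLMRun
  ): Promise<string> {
    const response = await fetch(this.url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        client_id: this.clientId,         // assumed header names
        client_secret: this.clientSecret,
      },
      body: JSON.stringify({
        // Flatten LangChain messages into a generic chat payload.
        messages: messages.map((m) => ({
          role: m._getType(),
          content: m.content,
        })),
      }),
    });
    if (!response.ok) {
      throw new Error(`Gateway returned HTTP ${response.status}`);
    }
    const data = await response.json();
    return data.reply; // assumed response field
  }
}
```

If the LangChain Code node is configured with a Language Model output, returning an instance of a class like this is, in principle, what the Basic LLM Chain would consume on its model connection.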