How to connect an HTTP request as a chat model?

I’m looking for a way to connect an HTTP Request node to the Basic LLM Chain as the chat model, but it is not allowed: the connection between the Basic LLM Chain and the HTTP Request node cannot be made.

I need to use an HTTP request because I have a local LLM that requires a client_id, a client_secret, and a local URL. The HTTP request works perfectly fine, but I cannot link it into the Basic LLM Chain to set up a basic chat app and later connect additional information (RAG-like) to the existing configuration.

Any workaround? If not, how can I take the OpenAI Chat Model code as an example and modify it to my needs? Is the code of the OpenAI node open sourced?

Information on your n8n setup

  • **n8n version:** latest
  • **Database (default: SQLite):** PostgreSQL
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** own
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
  • **Operating system:** macOS

Welcome to the community @Edu_Arana :tada: !

^ What do you mean by this exactly? Could you share your workflow JSON by copy & pasting it here, or share a screenshot?

^ Even though I feel like your issue is solvable without modifying an existing node, you can see the code for the OpenAI node here: n8n/packages/nodes-base/nodes/OpenAi at master · n8n-io/n8n · GitHub. If you want to modify the node, you can definitely do that; see Building community nodes | n8n Docs for more info.

Hi Aya, thanks for answering. The main problem is that I cannot attach an HTTP Request node to a Basic LLM Chain; I can only attach compatible models. As you can see in the screenshot, the red line means the connection cannot be made, while the green one of course can.

I need to use an HTTP request as the model because I’m running the model through a custom gateway (MuleSoft). How can I do this without modifying the code? Or how can I change the Basic LLM Chain to accept any type of connection?

Thanks.
Edu

I see, so you want to use your own custom LLM as the model connected to your Basic LLM Chain. Maybe @oleg has ideas?

Yes @aya, that’s exactly what I need to do. The Basic LLM Chain, IMHO, should not limit the model connection type; it should allow both a model node OR an HTTP request. Is it possible?

Hi @Edu_Arana,

that’s not quite possible, at least not in an easy way. The model connection expects a BaseChatModel, which does more than call HTTP endpoints; it also handles retry logic, response parsing, message chunking, and so on. Theoretically, you’d be able to implement support for a new model provider by using the LangChain Code node, but you would need to re-implement all the required methods. You could take a look at LangChain’s Bedrock Chat Model code as an example of how this could be implemented.
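To give a rough idea, here is a minimal sketch of what such a custom chat model wrapping an HTTP call could look like (this is untested and not specific to your setup). The gateway URL, the client_id/client_secret header names, and the `{ output: "..." }` response shape are placeholders you would need to adapt to your MuleSoft gateway:

```ts
// Minimal sketch of a custom LangChain chat model that forwards messages
// to an HTTP gateway. All gateway-specific details below are assumptions.
import {
  SimpleChatModel,
  type BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import type { BaseMessage } from "@langchain/core/messages";

interface GatewayChatModelInput extends BaseChatModelParams {
  baseUrl: string;      // local gateway URL (placeholder)
  clientId: string;     // client_id credential (placeholder)
  clientSecret: string; // client_secret credential (placeholder)
}

class GatewayChatModel extends SimpleChatModel {
  baseUrl: string;
  clientId: string;
  clientSecret: string;

  constructor(fields: GatewayChatModelInput) {
    super(fields);
    this.baseUrl = fields.baseUrl;
    this.clientId = fields.clientId;
    this.clientSecret = fields.clientSecret;
  }

  _llmType(): string {
    return "mulesoft-gateway";
  }

  // SimpleChatModel only requires _call; retries, streaming and richer
  // message handling are the parts BaseChatModel normally provides.
  async _call(
    messages: BaseMessage[],
    _options: this["ParsedCallOptions"],
    _runManager?: CallbackManagerForLLMRun
  ): Promise<string> {
    const response = await fetch(this.baseUrl, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Hypothetical header names – replace with what your gateway expects
        client_id: this.clientId,
        client_secret: this.clientSecret,
      },
      body: JSON.stringify({
        messages: messages.map((m) => ({
          role: m._getType(),
          content: m.content,
        })),
      }),
    });
    if (!response.ok) {
      throw new Error(`Gateway request failed: ${response.status}`);
    }
    const data = await response.json();
    // Assumes the gateway returns { output: "..." } – adjust to the real shape
    return data.output;
  }
}
```

Keep in mind this only covers the bare `_call` method; streaming, token usage, and retry behavior would still be missing, which is why connecting a proper chat model node is usually the easier path.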
