Creating a custom LLM chat model node

Is there a guide to creating a custom LLM chat model node? I have seen questions here asking for help with specific errors, but it would be helpful to know how to get started in the first place. The documentation on this is confusing, and some of the previous questions about it on this forum were left unanswered. Thank you.

It looks like your topic is missing some important information. Could you provide the following, if applicable:

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Can you please share more details on what functionality you need that the existing chat model node doesn't offer?

I would like to use my own LLM

Is your LLM exposed over an OpenAI-compatible API? If so, you can just change the base URL in the credentials.
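To illustrate what that base URL change amounts to: the OpenAI-style client simply sends its chat completions requests to your endpoint instead of api.openai.com. Here is a minimal sketch using LangChain's ChatOpenAI; the base URL, environment variable, and model name below are placeholder assumptions, not values from your setup.

import { ChatOpenAI } from "@langchain/openai";

// Sketch only: point an OpenAI-compatible client at a custom endpoint.
// The base URL, API key variable, and model name are illustrative placeholders.
const model = new ChatOpenAI({
  model: "my-fine-tuned-model",
  apiKey: process.env.MY_LLM_API_KEY,
  configuration: {
    baseURL: "https://my-llm.example.com/v1", // endpoint must accept OpenAI-style /chat/completions requests
  },
});

const reply = await model.invoke("What is a cat?");
console.log(reply.content);

If your endpoint does not accept OpenAI-style requests, this shortcut won't work and a custom node (or a direct HTTP call) becomes necessary.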

I am trying to use a fine-tuned version of Google's Gemini.

The issue is that I can't authenticate. I have a curl command that I can use to authenticate against my LLM. How do I use that?

Hi @Dan2

Can you share a redacted version of this curl command so we can understand what the authentication entails exactly?
Thanks 🙂

Sure, here is the curl from the POST request:
curl --location '<URL>' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <token>' \
  --data '{
    "adversarial_input_check": false,
    "adversarial_output_check": false,
    "settings": {
      "model_name": "<model>",
      "cache": false,
      "project": "<id>",
      "streaming": false
    },
    "model_name": "<model>",
    "content": {
      "messages": [
        {
          "speaker": "user",
          "content": "What is a cat?"
        }
      ]
    }
  }'
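Note that this request body is not OpenAI-compatible (the messages are nested under content and use speaker rather than role), so the base URL trick above doesn't apply here; the authentication itself is just the Authorization: Bearer <token> header. As a rough sketch, the same call could be reproduced in plain TypeScript (Node 18+ for the global fetch), which is essentially what a custom node would have to do internally. The <URL>, <token>, <model>, and <id> placeholders are the same redacted values as in the curl above.

// Sketch: reproduce the curl request above in TypeScript.
// <URL>, <token>, <model>, and <id> remain redacted placeholders.
async function askModel(question: string): Promise<unknown> {
  const response = await fetch("<URL>", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <token>",
    },
    body: JSON.stringify({
      adversarial_input_check: false,
      adversarial_output_check: false,
      settings: {
        model_name: "<model>",
        cache: false,
        project: "<id>",
        streaming: false,
      },
      model_name: "<model>",
      content: {
        messages: [{ speaker: "user", content: question }],
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${await response.text()}`);
  }
  return response.json();
}

Whichever way this ends up wired into n8n, the only authentication step in the curl is attaching that Bearer header to each request; there is no separate handshake involved.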