Creating a custom LLM chat model node

Is there a guideline for creating a custom LLM chat node? I have seen questions asking for help with specific errors on this, but it would be helpful to know how to start. The documentation for this is very confusing, and some of the previously asked questions on this forum were left unanswered. Thank you.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Can you please share more details on what functionality you need that the existing chat node doesn’t offer?

I would like to use my own LLM

Is your LLM exposed over an OpenAI-compatible API? If so, you can change the Base URL in the credentials.
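
To illustrate, “OpenAI-compatible” just means the server accepts the standard chat-completions request shape at `<base URL>/chat/completions`. Here is a minimal sketch of that request; the base URL, API key, and model name below are placeholders, not real endpoints:

```javascript
// Sketch of the request an OpenAI-compatible server must accept.
// n8n's OpenAI credential lets you override the base URL, so any
// server speaking this shape will work. All values here are placeholders.
function buildChatRequest(baseUrl, apiKey, model, userMessage) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: userMessage }],
      }),
    },
  };
}

// Example: point at a hypothetical self-hosted server instead of api.openai.com
const example = buildChatRequest('http://localhost:8000/v1', 'sk-local', 'my-model', 'Hello');
console.log(example.url); // http://localhost:8000/v1/chat/completions
```

If your server answers this request with the usual `choices[0].message.content` response shape, the standard chat model node should work against it unchanged.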

I am trying to use a fine-tuned version of Google’s Gemini.

The issue is that I can’t authenticate. I have a curl command that I can use to authenticate against my LLM. How do I use that?

Hi @Dan2

Can you share a redacted version of this curl command, so we can understand what the authentication entails exactly?
Thanks :slight_smile:

Sure, here is the curl from the POST request:
curl --location '<URL>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <token>' \
--data '{
  "adversarial_input_check": false,
  "adversarial_output_check": false,
  "settings": {
    "model_name": "<model>",
    "cache": false,
    "project": "<id>",
    "streaming": false
  },
  "model_name": "<model>",
  "content": {
    "messages": [
      {
        "speaker": "user",
        "content": "What is a cat?"
      }
    ]
  }
}'

Hi @Dan2

Sorry for the late reply!

What you could do is use the LangChain Code node twice: once as a root node that implements a tool-calling agent, accepting tool and language model inputs, and a second time to supply the Gemini model itself.

See below:
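
To give an idea of what the model-side LangChain Code node has to provide: a real implementation would extend a chat model class such as `SimpleChatModel` from `@langchain/core`, but the plain-object stand-in below shows only the minimal contract, an `invoke()` that maps LangChain-style `{ role, content }` messages to the internal API’s `speaker`/`content` shape from the curl earlier in the thread. The endpoint, token, and response field name are assumptions, and `fetchFn` is injectable so the sketch can run without a live server:

```javascript
// Hedged sketch of a custom chat model wrapping an internal HTTP LLM API.
// The request body mirrors the speaker/content shape of the curl above;
// endpoint, token, model, and the `answer` response field are placeholders.
function makeCustomChatModel({ endpoint, token, model, fetchFn = fetch }) {
  return {
    async invoke(messages) {
      const body = {
        model_name: model,
        content: {
          // Map { role, content } messages to the API's speaker/content shape
          messages: messages.map((msg) => ({ speaker: msg.role, content: msg.content })),
        },
      };
      const res = await fetchFn(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${token}`,
        },
        body: JSON.stringify(body),
      });
      const data = await res.json();
      // Response field name is an assumption; adjust it to your API
      return data.answer ?? JSON.stringify(data);
    },
  };
}
```

Inside the actual LangChain Code node you would return the model object so the connected agent can call it; the sketch above is only meant to show where the authentication header and request mapping go.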

Btw, have you tried setting the fine-tuned model in the Gemini node itself (via an expression)? We would be curious to know if there is any particular reason why that may not be working for you.

Thanks for letting us know :slight_smile:

I tried this with the expression (see below), but it’s not working. Any idea?
I select Google Gemini Chat Model and do the following:


Then this:


The error I’m getting is:

So it seems to go through, but I have the wrong expression method.

Here is my curl:
curl -X POST -H "api-key: " -H "Content-Type: application/json" https://mywebsite/app-gateway/api/v2/chat -d '{"userId": "String", "model": "String", "prompt": "Text"}'


Hi @Byron_Newberry

The field to select the model only accepts text. So in your case, this would have to be the VertexGeminiPro

Any other parameters need to be set through the Agent or LLM Chain node.

@ria I have the same use case, where I have to call our internal server with an open-source LLM deployed on it. I followed your example and created a root LangChain Code node (Custom Tool Agent) and a second LangChain Code node that acts as the language model. I am trying to connect some tools. The Custom Tool Agent seems to work when there is just one tool, but when I connect multiple code tools, it doesn’t work.

Here is my code in the Custom Tool Agent:


const { ChatPromptTemplate } = require("@langchain/core/prompts");

// Get all tools from individual inputs
const tools = [];
try {
  // Try to get color tool
  const colorTool = await this.getInputConnectionData('ai_tool', 0);
  if (colorTool) {
    colorTool.schema = {
      name: "color_selector",
      description: "Returns a random color, excluding colors you specify in a comma-separated list"
    };
    tools.push(colorTool);
  }
  
  // Try to get calculator tool
  const calculatorTool = await this.getInputConnectionData('ai_tool', 1);
  if (calculatorTool) {
    calculatorTool.schema = {
      name: "calculator",
      description: "Performs basic arithmetic calculations (add, subtract, multiply, divide)"
    };
    tools.push(calculatorTool);
  }
  
  // Try to get weather tool
  const weatherTool = await this.getInputConnectionData('ai_tool', 2);
  if (weatherTool) {
    weatherTool.schema = {
      name: "weather",
      description: "Returns the current weather for a given location"
    };
    tools.push(weatherTool);
  }
  
  // Try to get quote tool
  const quoteTool = await this.getInputConnectionData('ai_tool', 3);
  if (quoteTool) {
    quoteTool.schema = {
      name: "motivational_quote",
      description: "Returns an inspiring motivational quote"
    };
    tools.push(quoteTool);
  }
} catch (e) {
  console.error("Error getting tools:", e);
}

// Get the model
const model = await this.getInputConnectionData('ai_languageModel', 0);

// Debug info about tools
console.log("Tools found:", tools.length);
console.log("Tool info:", tools.map(t => ({
  name: t.schema?.name || "unnamed",
  desc: t.schema?.description || "no description"
})));

// Get user input
const userInput = $input.item.json?.message || 
                 $input.item.json?.text || 
                 $input.item.json?.query || 
                 "What's the weather in New York?";

// Create tool description list
const toolsList = tools.map(t => 
  `- ${t.schema?.name}: ${t.schema?.description}`
).join("\n");

const systemPrompt = `You are a helpful assistant that can use tools to answer user questions.

Available tools:
${toolsList}

IMPORTANT: When you need to use a tool, use this exact format in your response:
ACTION: tool_name
ACTION_INPUT: parameters for the tool

Examples:
1. For calculator: "ACTION: calculator" followed by "ACTION_INPUT: 5 * 7"
2. For weather: "ACTION: weather" followed by "ACTION_INPUT: New York"
3. For color_selector: "ACTION: color_selector" followed by "ACTION_INPUT: red, blue"
4. For motivational_quote: "ACTION: motivational_quote" followed by "ACTION_INPUT: "

Always select the most appropriate tool based on the user's query. If no tool is needed, just respond normally.`;

// Create prompt
const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt],
  ["human", "{input}"]
]);

try {
  // Get response from model
  const response = await prompt.pipe(model).invoke({ input: userInput });
  
  // Convert response to string
  let responseText = "";
  if (typeof response === 'string') {
    responseText = response;
  } else if (response && response.content) {
    responseText = response.content;
  } else {
    responseText = JSON.stringify(response);
  }
  
  console.log("Model response:", responseText);
  
  // Look for tool usage patterns
  const actionMatch = responseText.match(/ACTION: (\w+)/);
  const actionInputMatch = responseText.match(/ACTION_INPUT: (.*?)(\n|$)/);
  
  if (actionMatch && actionInputMatch) {
    const toolName = actionMatch[1];
    const toolInput = actionInputMatch[1].trim();
    
    console.log(`Tool detected: ${toolName}, Input: ${toolInput}`);
    
    // Find the matching tool
    const tool = tools.find(t => 
      (t.schema?.name === toolName) || 
      (t.schema?.name?.toLowerCase() === toolName.toLowerCase())
    );
    
    if (tool && typeof tool.invoke === 'function') {
      // Execute the tool
      console.log("Executing tool with input:", toolInput);
      const toolResult = await tool.invoke({ query: toolInput });
      console.log("Tool result:", toolResult);
      
      // Extract the text before the ACTION command
      const beforeAction = responseText.split(/ACTION:/)[0].trim();
      
      return [{ json: { 
        response: `${beforeAction}\n\n${toolResult}`,
        tool_used: toolName,
        tool_input: toolInput,
        tool_result: toolResult,
        original_question: userInput
      }}];
    } else {
      return [{ json: { 
        response: `I tried to use the ${toolName} tool, but I couldn't find it. Available tools: ${tools.map(t => t.schema?.name).join(", ")}`,
        original_response: responseText,
        original_question: userInput
      }}];
    }
  } else {
    // No tool was used
    return [{ json: { 
      response: responseText,
      note: "No tool was used in this response.",
      original_question: userInput
    }}];
  }
} catch (error) {
  console.error("Error:", error);
  return [{ json: { 
    error: error.message,
    stack: error.stack,
    original_question: userInput
  }}];
}
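
The ACTION/ACTION_INPUT convention above can be exercised in isolation; here is a standalone version of the same parsing logic, so the regexes can be checked outside n8n:

```javascript
// Standalone version of the ACTION/ACTION_INPUT parsing used in the
// Custom Tool Agent above, for testing the regexes outside n8n.
function parseAction(responseText) {
  const actionMatch = responseText.match(/ACTION: (\w+)/);
  const actionInputMatch = responseText.match(/ACTION_INPUT: (.*)/);
  if (!actionMatch || !actionInputMatch) return null;
  return { tool: actionMatch[1], input: actionInputMatch[1].trim() };
}

console.log(parseAction('Sure, let me calculate.\nACTION: calculator\nACTION_INPUT: 5 * 7'));
// { tool: 'calculator', input: '5 * 7' }
```

A response with no ACTION line yields `null`, which maps to the “No tool was used” branch in the agent code.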

In the Custom Tool Agent, I have set one input parameter of type tool for each tool; for example, there are 4 inputs of type tool, each with max connections of 1.

Well, this is wrong. You should write it like this:
const weatherTool = await this.getInputConnectionData('ai_tool', 0, 2);