Request for Custom Subnodes Feature

My manager asked me to check whether it was possible to create a subnode for an LLM that isn't integrated into n8n. Since custom nodes exist, I believed it was possible, and I asked for a few days to test my hypothesis. Today, while reading the interfaces in the n8n-workflow package, I found that none of them allow for custom subnode creation.

That is unfortunate for two reasons: 1) we had a potential client who wanted us to build a workflow around a private LLM, and 2) my company uses n8n Cloud, so it would be impossible to modify n8n's source code to accept whatever interface I created; that renders any attempt at creating a chat model subnode worthless. We may have already lost the client as far as I'm concerned, but I do believe others could benefit from the idea.

If there are other ways to achieve what I just described, I'd be glad if you could reply with them so I can show my manager!

Since it's part of my job, I'm more than willing to work on building the feature myself! Just be aware that I'm an intern with four months of experience, so you'd practically be working with a nobody.

You can call the model from an HTTP Request node, but you can't add it directly as a subnode.

If you could explain the purpose of doing this, I could see if there are more options.

See, the purpose would be to replace the OpenAI chat model subnode that you attach to an AI Agent. You know the chat model that is required to make an AI Agent work? Yeah, this hypothetical subnode would replace it. I'm aware of the HTTP Request solution; in fact, it was the first one the other intern and I suggested to our manager. It's still our placeholder solution if we can't find anything else, but it becomes a problem if we need to work with tools and memory, you know?

Which LLM are you going to use?

I wasn't told. All I know is that it's a private model owned by a local company. I told my manager that creating a custom node would mean the code would have to be public, and he said that was no problem; but the exact model, I do not know. The main idea was for the subnode to support any model, but I'm aware that isn't possible. All I wanted was a template I could adapt, but for that to exist, I think a new interface would be needed in the source code.

I am aware it is possible to "trick" the workflow into thinking it is calling OpenAI if the LLM is OpenAI-compatible: all you need to do is change the Base URL. But that isn't enough, as we need to ensure it would work no matter what the model is based on.
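For anyone following along, here's a minimal sketch of what the Base URL trick amounts to: if the private server speaks the OpenAI chat-completions format, the only thing that changes is the host. The base URL and model name below are made up, and this only builds the request instead of sending it:

```python
import json

# Hypothetical base URL for a private, OpenAI-compatible model server.
PRIVATE_BASE_URL = "https://llm.internal.example.com/v1"

def build_chat_request(base_url: str, model: str, messages: list) -> tuple:
    """Build the URL and JSON body for an OpenAI-style chat completion call."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body

url, body = build_chat_request(
    PRIVATE_BASE_URL,
    "private-model",  # whatever name the private server expects
    [{"role": "user", "content": "Hello"}],
)
print(url)  # https://llm.internal.example.com/v1/chat/completions
```

In n8n itself this corresponds to pointing the OpenAI credential's Base URL at the private server, which, as noted above, only works when the model's API is OpenAI-compatible.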

You could serve it through Ollama and use the Ollama Chat Model node to connect it to an AI Agent in n8n,

or maybe Hugging Face?

Hope this helps
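In case it's useful: if the weights of the private model are available in a format Ollama can load (e.g. GGUF), importing it is, as far as I understand, just a short Modelfile. The file path and parameter here are placeholders:

```
# Hypothetical Modelfile; ./private-model.gguf is a placeholder path
FROM ./private-model.gguf
PARAMETER temperature 0.7
```

followed by `ollama create private-model -f Modelfile`, after which it should show up like any other local model.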

Oh, interesting! I had forgotten there are programs where you can deploy LLMs and call them later. I'll tell him about that possibility today to see how he thinks it might work in the long run. Thanks a lot!

Yeah, and the best one I found is probably https://replicate.com/, check it out too 🙂 Hoping he invests 🙂


Just came back from talking to my manager. Another possible solution would be to create a proxy that presents the private LLM as a known model. I'll see what I can do with the responses so far, but please do consider adding this feature if possible, as it would be of great help. Thanks, everyone!
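To make the proxy idea concrete, here's a rough sketch of the translation layer such a proxy would need: it accepts OpenAI-style chat requests and maps them to whatever the private model actually expects, then wraps the reply back into OpenAI's response shape. Every URL and field name below is invented, since we don't know the real API:

```python
import json

# Hypothetical endpoint of the private model; the proxy would POST to it.
UPSTREAM_URL = "https://llm.internal.example.com/generate"

def translate_openai_request(openai_body: dict) -> dict:
    """Map an OpenAI chat-completions body to the private model's (made-up) schema."""
    prompt = "\n".join(m["content"] for m in openai_body["messages"])
    return {"prompt": prompt, "max_tokens": openai_body.get("max_tokens", 256)}

def translate_upstream_response(upstream: dict, model: str) -> dict:
    """Wrap the private model's (made-up) reply in an OpenAI-style response."""
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": upstream["text"]},
            "finish_reason": "stop",
        }],
    }

# The actual proxy would receive the OpenAI request over HTTP, POST the
# translated body to UPSTREAM_URL, and return the translated response.
req = translate_openai_request({"messages": [{"role": "user", "content": "Hello"}]})
resp = translate_upstream_response({"text": "Hi there!"}, "private-model")
print(resp["choices"][0]["message"]["content"])  # Hi there!
```

With such a proxy in place, the existing OpenAI subnode with a changed Base URL would work regardless of what the model is based on, which is exactly the gap the Base URL trick alone leaves open.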
