How to use any custom fine-tuned models with n8n?

I can’t find a way, using any AI Chat Model node, to reference one of my own fine-tuned LLMs. This is currently a major limitation for me in using n8n, because I need training options beyond RAG and prompt engineering. I need to be able to train a model on large training datasets and then use that model in my workflows.
I’m open to using any chat model that I can fine-tune.

You can connect any Ollama model or OpenAI-compatible model to n8n. Just change the base URL when setting up your credentials.

Which node are you using? Are you saying I can pass a HuggingFace URL in an OpenAI Chat Model node?

When using the “Basic LLM Chain” node instead of AI Agent, you’ll have more model options available, including HuggingFace. But generally speaking: as soon as you have an OpenAI-compatible endpoint, you could theoretically use it with the OpenAI node and just change the base URL.
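To illustrate what "OpenAI-compatible" means here: the request path and JSON payload stay in the OpenAI format, and only the base URL changes. A minimal Python sketch (the base URLs and model name are placeholders; Ollama's local OpenAI-compatible endpoint is assumed to be at its default port 11434):

```python
import json
from urllib import request

# Assumed base URLs -- only this part changes between providers;
# the /chat/completions path and payload stay in the OpenAI format.
OPENAI_BASE = "https://api.openai.com/v1"
OLLAMA_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API (default port)

def chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style chat completion request against any base URL."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "my-finetuned-model" is a hypothetical fine-tuned model name.
req = chat_request(OLLAMA_BASE, "my-finetuned-model", "Hello")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Swapping the credential's base URL in n8n does essentially this: the node keeps sending OpenAI-shaped requests, but to your own server.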


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.