Cannot use custom model with Google Vertex Chat Model

I have fine-tuned a model in Google Vertex AI. The model is called quotes-test2. It was fine-tuned using gemini-1.5-flash as the base.

The message I’m getting is ‘Unsupported Model’.

If I change the model to gemini-1.5-flash in my node, it works fine.

How can I use a fine-tuned model in n8n? I’m open to using platforms other than Vertex if any of them work.

Information on your n8n setup

  • n8n version: 1.80.3
  • Running n8n via n8n cloud

I’m not certain, as I haven’t personally run into or tested this, but it could be because fine-tuned models from Google do not support JSON mode.

Tuned models
Tuned models have the following limitations:

  • The input limit of a tuned Gemini 1.5 Flash model is 40,000 characters.
  • JSON mode is not supported with tuned models.
  • Only text input is supported.

Thanks for the reply @ThinkBot. Do you have any ideas on how to use custom fine-tuned models with n8n?

This is currently a major limitation of n8n for me, because I need training options beyond RAG and prompt engineering.

I know you can for sure by just using HTTP requests or some of the base AI nodes; I’m unsure whether it’s possible with the AI Agent node.
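For reference, here is a rough sketch of what that direct HTTP call could look like (the same request an HTTP Request node would send). It assumes the tuned model has been deployed to a Vertex AI endpoint and that `PROJECT_ID`, `REGION`, and `ENDPOINT_ID` are placeholders for real values; I haven’t tested this against my quotes-test2 model yet.

```python
# Sketch: calling a Vertex AI tuned-model endpoint directly over HTTP.
# Assumption: the tuned model is deployed to an endpoint and is reachable
# via the endpoints/...:generateContent path. PROJECT_ID, REGION and
# ENDPOINT_ID below are placeholders, not real values.
import subprocess
import requests

PROJECT_ID = "my-project"     # placeholder
REGION = "us-central1"        # placeholder
ENDPOINT_ID = "1234567890"    # placeholder: endpoint hosting the tuned model

# Get an OAuth access token from gcloud for this sketch; inside n8n the
# token would come from a Google credential instead.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/locations/{REGION}/"
    f"endpoints/{ENDPOINT_ID}:generateContent"
)

payload = {
    "contents": [
        {"role": "user",
         "parts": [{"text": "Give me a short quote about persistence."}]}
    ]
}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```

In n8n itself the auth token would be handled by a Google service-account credential rather than gcloud, but the URL and request body should be the same shape.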

I was reading the release notes. This could be a bug, as they did fix OpenAI fine-tuned models so that they now appear in the model list for AI Agent nodes.
