Add Google Gemini as LLM Subnode Option for AI Agent [GOT CREATED]

The idea is:

Add Google Gemini as available LLM subnode option for AI Agent node.

My use case:

The latest Google Gemini models (1.5) do support function calling, as documented in "Intro to Function Calling with Gemini API" on Google for Developers, but they are not available as options for the AI Agent node.
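For context, Gemini's function calling works by passing the model a JSON-style function declaration and then executing whatever call the model returns. The sketch below is purely illustrative (plain Python, no SDK): the function name, schema, and dispatcher are made-up examples of the general shape, not n8n or Gemini SDK code.

```python
# Illustrative sketch: the JSON-style function declaration shape used by
# function-calling APIs like Gemini's. The function name and schema below
# are hypothetical examples, not real API definitions.
get_weather_declaration = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'London'"},
        },
        "required": ["city"],
    },
}

def dispatch_function_call(call: dict) -> str:
    """Simulate executing a function call the model asked for."""
    handlers = {
        "get_current_weather": lambda args: f"Weather in {args['city']}: (lookup here)",
    }
    return handlers[call["name"]](call["args"])

# A model with function-calling support returns something shaped like this,
# which the agent then dispatches to real code:
model_call = {"name": "get_current_weather", "args": {"city": "London"}}
print(dispatch_function_call(model_call))
```

This request-then-dispatch loop is what the AI Agent node drives, which is why the underlying model must support function calling at all.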

I think it would be beneficial to add this because:

At the time of writing, I’ve found Google Gemini 1.5 Flash to be 10x cheaper than GPT-4o while performing just as well. For example, a job that costs $1 on GPT-4o cost me about $0.10 on Gemini; for a batch of 500 jobs, that is the difference between $500 and $50.
This could be a key consideration when productionising AI workflows.
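The batch savings above are simple multiplication; a back-of-envelope sketch using the per-job figures quoted in this post:

```python
# Back-of-envelope cost comparison, using the per-job costs observed above.
GPT4O_COST_PER_JOB = 1.00    # USD per job (figure from the post)
GEMINI_COST_PER_JOB = 0.10   # USD per job, roughly 10x cheaper (figure from the post)

def batch_cost(per_job: float, jobs: int) -> float:
    """Total cost of running `jobs` identical jobs."""
    return per_job * jobs

jobs = 500
print(round(batch_cost(GPT4O_COST_PER_JOB, jobs), 2))   # 500.0
print(round(batch_cost(GEMINI_COST_PER_JOB, jobs), 2))  # 50.0
```

Actual billing is per-token rather than per-job, so treat these as rough observed averages, not published pricing.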

Any resources to support this?

The option is missing from the “Model” subnode options for the AI Agent node.

Are you willing to work on this?

n/a

Hello! This feature would indeed be useful to me as well.

I’d also propose the same compatibility for the Hugging Face Inference subnode:

Is there no alternative way to do this right now?

Hey @Jim_Le,

Can you try updating? We now have support for Google Vertex, which brings Gemini with it.

Yes, I can confirm this is working :+1:
Cheers @Jon and much appreciation to the n8n team for making this a reality!

Just some quick notes if anyone is also interested in switching:

1. Credentials are not as straightforward, as you have to use a service account.
The roles I used were Vertex AI Service Agent and Vertex AI User. I’m not sure whether both are actually needed; that needs further experimentation.

2. I hit the default quota limits very quickly with Agent Tools.
Unfortunately, my excitement died about 10 minutes after switching some of my workflows over to Gemini via this node. Is it me, or are the defaults set very low?
I’m told my billing profile doesn’t qualify for a quota limit increase, so I’m kinda stuck I guess :woman_shrugging:
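One generic mitigation while stuck on low quotas is to retry with exponential backoff when the API returns a quota error (HTTP 429 / RESOURCE_EXHAUSTED). This is a plain-Python sketch of the pattern, not n8n code; `QuotaExceededError` and `flaky_call` are stand-ins for the real API error and call.

```python
import random
import time

class QuotaExceededError(Exception):
    """Stand-in for a 429 / RESOURCE_EXHAUSTED response from the API."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() on quota errors, doubling the wait each time plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except QuotaExceededError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# Demo: a fake API call that hits the quota twice before succeeding.
state = {"calls": 0}
def flaky_call():
    state["calls"] += 1
    if state["calls"] < 3:
        raise QuotaExceededError
    return "ok"

print(call_with_backoff(flaky_call, base_delay=0.01))
```

Backoff only smooths over per-minute rate limits, of course; it won’t help if a daily quota is exhausted, and in n8n itself the closest equivalent is the node’s retry/wait settings.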

Will revisit in the next few weeks. If anyone else can give advice, that would be greatly appreciated!
