I’m trying to make Gemini thinking work with AI agents but I can’t seem to figure out how. I’m aware that the parameter isn’t present in the Gemini node model and that I can’t use classic HTTP requests as models for AI agents. What I’m trying to figure out is whether there is any way to do this. My aim is to ask the first AI agent for a thought-out response, pass that response to another AI agent which should act as a revisor, and then return the answer to me if it meets certain criteria. Is there a way I can do this? I managed to make the correct HTTP request to Gemini with thinking enabled, but I can’t manage to figure out whether this is possible.
Information on your n8n setup
n8n version: 1.103.2
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Thanks for your answer Abrar, what I was wondering about was how to connect the nodes.
I can’t seem to use an agent node without a model. Should I use the Code node as a model? From what you say, I figure I can’t be using an HTTP Request node, am I right?
AFAIK, Gemini has a built-in thinking mode (in the 2.5 Pro model), which is activated automatically when the task needs complex solutions. You don’t need to use a ‘Thinking’ tool for that.
Yes, I also read that in the docs, but from what I’m seeing, thinking doesn’t seem to trigger for my prompts without explicitly passing the thinkingBudget parameter. That’s why I have trouble understanding the best approach to ensure Gemini is thinking, and whether it can be done with a model node or I should use something else (and in that case, how to connect it to other nodes to obtain agent-like behavior).
Ah. I got what you mean.
The Gemini model in n8n still doesn’t have a configuration option for setting thinkingBudget or for checking whether Gemini is thinking.
The best approach is to use an HTTP Request node and monitor the request in the Google Cloud Platform console.
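For reference, a minimal sketch of the request body such an HTTP Request node would send to the Gemini `generateContent` REST endpoint, with an explicit thinking budget. The field names follow the public Gemini API (`generationConfig.thinkingConfig.thinkingBudget`), but the model name, prompt, and budget value here are just illustrative examples:

```python
import json

# Body for: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent
# (model name and budget value are illustrative; check the current Gemini API docs)
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Plan a migration strategy for our database."}]}
    ],
    "generationConfig": {
        "thinkingConfig": {
            # Explicitly reserve tokens for the model's internal reasoning;
            # as discussed above, thinking may not trigger without this field.
            "thinkingBudget": 1024
        }
    },
}

print(json.dumps(payload, indent=2))
```

In n8n you would paste the equivalent JSON into the HTTP Request node’s body and add your API key as a header or query parameter.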
The newest stable version of n8n has added a Gemini node (not a model or a tool), so you can use that and try to make a Custom API Call with it.
I don’t know if this comes from n8n issues or LangChain.js issues.
Thanks! Since, as I showed in the concept workflow earlier, it should have memory, is there any way I can implement that with a direct call, without an agent? I don’t need the agent-like node to activate any tools; it’s enough for it to have context memory.
Thanks, I didn’t know the Chat Memory Manager node was a thing; I’m still getting familiar with n8n. I took a look at the Message a Model node, but it seems that a Custom API Call still needs the HTTP Request node, and I still can’t find the thinkingBudget parameter sadly. I’ll do some more digging.
Thanks for the answer. I didn’t mark it as a solution because sadly it didn’t solve my problem. With the node you provided, if we try to do a Custom API Call, we’re redirected to the HTTP Request node, which unfortunately doesn’t seem to work with the memory manager, since the output of the HTTP request isn’t saved there. The main issue is that the models should communicate with each other, and the document agent should iterate on what the reviewer agent says to improve the documents.
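One workaround for the missing memory integration is to carry the conversation history yourself: the Gemini `generateContent` API is stateless, so “memory” is just the full `contents` array resent on every call, which an HTTP Request node (or Code node) can build up per agent. A rough sketch of that writer/reviewer loop, where `call_gemini` is a hypothetical stand-in for the real HTTP call:

```python
# Manual context memory: keep one growing list of {"role", "parts"} turns
# per "agent" and resend it on every request.
# call_gemini is a placeholder for the real HTTP request to
# models/<model>:generateContent with {"contents": contents, ...}.

def call_gemini(contents):
    # Stub so the sketch runs offline; replace with your HTTP Request logic.
    last_user_text = contents[-1]["parts"][0]["text"]
    return f"stub reply to: {last_user_text}"

def ask(history, user_text):
    """Append the user turn, call the model, append and return its reply."""
    history.append({"role": "user", "parts": [{"text": user_text}]})
    reply = call_gemini(history)
    history.append({"role": "model", "parts": [{"text": reply}]})
    return reply

writer_history = []    # independent history per agent
reviewer_history = []

draft = ask(writer_history, "Write a short summary of the report.")
review = ask(reviewer_history, f"Review this draft:\n{draft}")
# Iterate: feed the reviewer's feedback back to the writer, whose
# history still contains the original draft as context.
revision = ask(writer_history, f"Revise the draft using this feedback:\n{review}")
```

This sidesteps both the agent node and the memory manager: each iteration of the review loop just replays the relevant history, so the document agent always sees its earlier drafts and the reviewer’s comments.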