Hi n8n Team,
I am currently testing the new Gemini 3.0 Pro Preview (gemini-3-pro-preview) inside the AI Agent node (using the Google Gemini Chat Model node).
While simple chat works fine, the agent crashes immediately when attempting to use Tools (Function Calling).
The Behavior:
- The Agent correctly identifies the tool and sends the input.
- The Tool (a sub-workflow) executes successfully and returns data (I can see the correct output in the execution log).
- The Crash: When the Agent tries to process the tool output to generate the final answer, it throws a 400 Bad Request error from the Google API.
The Error Message:
[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-preview:streamGenerateContent?alt=sse:
[400 Bad Request] Function call is missing a thought_signature in functionCall parts.
This is required for tools to work correctly, and missing thought_signature may lead to degraded model performance.
My Analysis: It seems that Gemini 3.0 (and likely the 2.0 Flash Thinking models) enforces a "Chain of Thought" process: the API returns thought_signature tokens with the tool call request. It appears the current n8n node implementation does not store this thought_signature and pass it back to Google when sending the tool result (function response), so Google rejects the context as invalid because the "thought chain" is broken.
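To illustrate the suspected mechanism, here is a minimal sketch of how the conversation history would need to be rebuilt so the signature survives the round trip. This is an assumption based on the error message, not n8n's actual code: the field name (thoughtSignature, camelCase in the REST JSON), the helper function, and the example tool name are all hypothetical.

```javascript
// Sketch: when replaying history to the Gemini REST API after a tool call,
// the model's functionCall part must keep the thoughtSignature it came with.
// Field name and payload shape are assumptions from the API error message.

function buildToolResultTurns(modelResponsePart, toolResult) {
  // 1. Echo the model's functionCall part verbatim, including the signature.
  //    Dropping thoughtSignature here is presumably what triggers the
  //    "Function call is missing a thought_signature" 400 error.
  const modelTurn = {
    role: "model",
    parts: [
      {
        functionCall: modelResponsePart.functionCall,
        ...(modelResponsePart.thoughtSignature && {
          thoughtSignature: modelResponsePart.thoughtSignature,
        }),
      },
    ],
  };

  // 2. Attach the tool output as a functionResponse part in the next turn.
  const userTurn = {
    role: "user",
    parts: [
      {
        functionResponse: {
          name: modelResponsePart.functionCall.name,
          response: { result: toolResult },
        },
      },
    ],
  };

  return [modelTurn, userTurn];
}

// Hypothetical part, as it might arrive from streamGenerateContent:
const part = {
  functionCall: { name: "lookupOrder", args: { id: 42 } },
  thoughtSignature: "opaque-token-from-api", // placeholder value
};
const history = buildToolResultTurns(part, { status: "shipped" });
```

If this is roughly what is happening, the fix on the node side would just be to persist the extra field alongside the function call instead of discarding it.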
My Question: Is there already a fix or a workaround for this? Or can we expect an update to the Google Gemini node soon to support the new “Thinking/Reasoning” requirements of Gemini 3?
Currently, I have to revert to gemini-1.5-pro, which works flawlessly with the exact same setup.
Thanks for your hard work!