I am trying to replicate the behavior of the native Google Gemini “Analyze Audio” node using a manual HTTP Request node that calls Vertex AI directly (https://aiplatform.googleapis.com/v1/publishers/google/models/gemini-2.5-flash:generateContent), but I’m seeing significant differences in results and token usage. I have a few questions about how the native node is configured “under the hood”:
What are the default Temperature, Top-P, and Top-K values used by the native node?
Does the native node include any hidden System Instructions or “System Prompt” wrappers that aren’t visible in the UI?
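In case it helps isolate the difference, here’s a minimal sketch of the request body I’m sending to that endpoint, with `generationConfig` set explicitly so the API defaults don’t come into play. The field names are from the Vertex AI `generateContent` API; the specific temperature/Top-P/Top-K values, the MIME type, and the base64 payload are just placeholders, not claims about what the native node uses:

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "Describe this audio clip." },
        {
          "inlineData": {
            "mimeType": "audio/mp3",
            "data": "<base64-encoded audio>"
          }
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7,
    "topP": 0.95,
    "topK": 40,
    "maxOutputTokens": 1024
  }
}
```

If I knew the native node’s values for these fields, I could pin them here and see whether the outputs converge.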
Information on your n8n setup
n8n version: 1.123.18
Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
I also did some research and checked the n8n documentation for the node, and I couldn’t find any details on Temperature, Top-P, or Top-K either. This is just a guess, but if the node doesn’t set those parameters explicitly, the defaults may simply be whatever the Gemini API itself uses for that model at that endpoint, though I’m not sure.
I didn’t see any system instructions, but you can use the text input to describe the audio, which helps produce a better response. You can also set the maximum number of tokens returned for the analysis description.
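One way to test for a hidden system prompt yourself is to compare token counts: send the same audio through the manual HTTP Request with and without an explicit `systemInstruction`, and check which request’s `promptTokenCount` (in the response’s `usageMetadata`) matches what the native node reports. The field names below are from the Gemini `generateContent` API; the instruction text is only an example, not the node’s actual wording:

```json
{
  "systemInstruction": {
    "parts": [
      { "text": "You are an expert audio analyst." }
    ]
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "Describe this audio clip." }
      ]
    }
  ]
}
```

If the native node’s prompt token usage is consistently higher than a bare manual request for the same audio, that gap would suggest some wrapper text is being added.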