Describe the problem/error/question
I’m trying to send text to the models/gemini-2.5-pro-preview-tts model using the Basic LLM Chain node in n8n, connected to the Google Gemini Chat Model node.
There is no way to configure the responseModalities parameter that this model requires, so every request fails with a 400 error.
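For context, this is roughly what the raw generateContent request seems to need — a minimal Python sketch based on my reading of the Gemini API docs (the GEMINI_API_KEY environment variable and the "Kore" voice name are just placeholders):

```python
import base64
import os
import requests

# Same endpoint the n8n node calls for this model.
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-2.5-pro-preview-tts:generateContent"
)

payload = {
    "contents": [{"parts": [{"text": "Hello from n8n!"}]}],
    "generationConfig": {
        # The parameter the Basic LLM Chain node does not expose:
        # this model only accepts AUDIO as a response modality.
        "responseModalities": ["AUDIO"],
        "speechConfig": {
            "voiceConfig": {
                # Example voice name only.
                "prebuiltVoiceConfig": {"voiceName": "Kore"}
            }
        },
    },
}

resp = requests.post(
    URL,
    params={"key": os.environ["GEMINI_API_KEY"]},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# The audio comes back base64-encoded in the first candidate part.
audio_b64 = resp.json()["candidates"][0]["content"]["parts"][0]["inlineData"]["data"]
with open("output.pcm", "wb") as f:
    f.write(base64.b64decode(audio_b64))
```

Is there a way to pass generationConfig like this through the Basic LLM Chain / Google Gemini Chat Model nodes, or do I have to fall back to an HTTP Request node?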
What is the error message (if any)?
[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro-preview-tts:generateContent: [400 Bad Request] The requested combination of response modalities is not supported by the model. models/gemini-2.5-pro-preview-tts accepts the following combination of response modalities: * AUDIO
Please share your workflow
Share the output returned by the last node
[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro-preview-tts:generateContent: [400 Bad Request] The requested combination of response modalities is not supported by the model. models/gemini-2.5-pro-preview-tts accepts the following combination of response modalities: * AUDIO
Information on your n8n setup
- n8n version: 1.93.0
- Database (default: SQLite): SQLite
- n8n EXECUTIONS_PROCESS setting (default: own, main): not sure (how do I check this?)
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Ubuntu 24.04.2 LTS