Describe the problem/error/question
Hi everyone,
I’m currently trying to build an HTTP Request node in n8n to send a prompt to the Gemini image generation model.
The problem is that no matter what I do, the node always returns json: false. Because of this, it seems like the Gemini API server can't properly interpret my request.
(The API key in this image has already been deleted!)
I've already confirmed that I have access to the gemini-2.0-flash-preview-image-generation model by sending the following request:
GET https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_API_KEY
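Each model entry in that response includes a supportedGenerationMethods field, which is what the error message below suggests checking. The entry I'm looking for shows up roughly like this (abridged and from memory, so treat it as a sketch rather than the exact payload):

{
  "models": [
    {
      "name": "models/gemini-2.0-flash-preview-image-generation",
      "supportedGenerationMethods": ["generateContent"]
    }
  ]
}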
Also, when I send the exact same generateContent request from Postman, I get a valid base64 image result. This tells me that the issue is specific to n8n.
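That working request looks roughly like this (the prompt text is just an example; the body follows the standard generateContent shape, with responseModalities set the way the image generation preview model expects):

POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-preview-image-generation:generateContent?key=YOUR_API_KEY
Content-Type: application/json

{
  "contents": [
    { "parts": [ { "text": "Generate an image of a cat wearing a hat" } ] }
  ],
  "generationConfig": {
    "responseModalities": ["TEXT", "IMAGE"]
  }
}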
Here’s what I’ve already tried in n8n:
- Set the Body Content Type to RAW and pasted in the JSON body (the same one that works in Postman).
- Turned the Send Headers toggle on and manually added Content-Type: application/json.
Nothing works: the output always shows json: false.
I've also gone through this in detail with ChatGPT and tried many different setups, and even there the conclusion was that this might be an internal issue with how n8n handles the request formatting.
Could anyone help me out here?
Thanks in advance!
What is the error message (if any)?
The resource you are requesting could not be found
models/gemini-2.0-flash-preview-image-generation is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.
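For completeness, the raw response from the API is presumably the standard Google API error envelope, reconstructed here from the message above:

{
  "error": {
    "code": 404,
    "message": "models/gemini-2.0-flash-preview-image-generation is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.",
    "status": "NOT_FOUND"
  }
}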
Please share your workflow
(The API key in this node has already been deleted!)
Information on your n8n setup
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: Windows 11