Gemini API - Imagen3

Describe the problem/error/question

Two days ago, Google released Imagen 3 via the Gemini API. I tried to get it up and running via the Basic LLM Chain node with the Google Gemini Chat Model attached, but it doesn't work. I also tried the HTTP Request node, with no success either.

Question: has anybody got it up and running? I'm curious.

What is the error message (if any)?

Error in sub-node ‘Google Gemini Chat Model’

[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/imagen-3.0-generate-002:generateContent: [404 Not Found] models/imagen-3.0-generate-002 is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.
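For context, the error suggests calling ListModels to see which models and methods the key can actually use. A quick way to check that (a sketch in Python, assuming the key is exported as GEMINI_API_KEY) is:

# Sketch: list the models visible to this API key and the methods they support,
# as the error message suggests. Assumes the key is set in GEMINI_API_KEY.
import os
import requests

resp = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": os.environ["GEMINI_API_KEY"]},
    timeout=30,
)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"], model.get("supportedGenerationMethods", []))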

Please share your workflow

{
  "nodes": [
    {
      "parameters": {},
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [-120, -100],
      "id": "1e839a3a-cbca-4e0a-bc59-7c9f6f520e55",
      "name": "When clicking ‘Test workflow’"
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://generativelanguage.googleapis.com/v1beta/models/imagen-3.0-generate-002",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            {
              "name": "key",
              "value": "API_KEY_HERE"
            }
          ]
        },
        "sendHeaders": true,
        "specifyHeaders": "json",
        "jsonHeaders": "{\n \"Content-Type\": \"application/json\"\n}",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "{\n \"prompt\": \"Fuzzy bunnies in my kitchen\",\n \"generationConfig\": {\n \"numberOfImages\": 1\n }\n}",
        "options": {}
      },
      "id": "88996d04-92c6-4270-a8d1-cb497b42df18",
      "name": "HTTP Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.1,
      "position": [80, -100]
    }
  ],
  "connections": {
    "When clicking ‘Test workflow’": {
      "main": [
        [
          {
            "node": "HTTP Request",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "298862c074987b1d179f9acc3db122c9137fb21c956dbda9ee10caa63ecf373d"
  }
}

Share the output returned by the last node

Information on your n8n setup

  • n8n version: latest
  • Database (default: SQLite): standard
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted
  • Operating system: macos

Created a simple FastAPI service for it and then called it locally. Problem solved.
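For anyone curious, a minimal sketch of what such a proxy could look like (not the exact service, just an assumed approach: it forwards a prompt to the imagen-3.0-generate-002:predict endpoint described in the Gemini API docs, with GEMINI_API_KEY set in the environment):

# Sketch of a small FastAPI proxy for Imagen 3 via the Gemini API.
# Endpoint and payload follow the public "imagen-3.0-generate-002:predict"
# docs; GEMINI_API_KEY is assumed to be set in the environment.
import os
import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/imagen-3.0-generate-002:predict"
)

app = FastAPI()


class ImageRequest(BaseModel):
    prompt: str
    number_of_images: int = 1


@app.post("/generate")
async def generate(req: ImageRequest):
    payload = {
        "instances": [{"prompt": req.prompt}],
        "parameters": {"sampleCount": req.number_of_images},
    }
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(
            GEMINI_URL,
            params={"key": os.environ["GEMINI_API_KEY"]},
            json=payload,
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=resp.status_code, detail=resp.text)
    # Each prediction carries the image as base64 in "bytesBase64Encoded".
    return {
        "images": [
            p.get("bytesBase64Encoded")
            for p in resp.json().get("predictions", [])
        ]
    }

n8n then only needs a plain HTTP Request node pointed at the proxy (for example http://localhost:8000/generate when run locally with uvicorn) with a JSON body like {"prompt": "Fuzzy bunnies in my kitchen"}.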


Hi @Thomas_B

Nice that you found a way. Could you share some info? I don't think I could create a simple FastAPI service myself, but I would like to know what your approach was.

Thanks

The best option we have here using only n8n is the HTTP Request node. @Sebastian2, a basic HTTP Request node would do the job (with less flexibility); a sketch of the underlying request is below.
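To make that concrete: at the time of writing, the Gemini API docs expose Imagen 3 through the :predict method with an instances/parameters body, not :generateContent. A sketch of the raw call the HTTP Request node would need to reproduce (API_KEY_HERE is a placeholder):

# Sketch of the raw call an HTTP Request node would need to reproduce.
# Assumes the ":predict" method and "instances"/"parameters" body from the
# Gemini API docs; replace API_KEY_HERE with a real key.
import base64
import requests

resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/imagen-3.0-generate-002:predict",
    params={"key": "API_KEY_HERE"},
    headers={"Content-Type": "application/json"},
    json={
        "instances": [{"prompt": "Fuzzy bunnies in my kitchen"}],
        "parameters": {"sampleCount": 1},
    },
    timeout=60,
)
resp.raise_for_status()

# Save each returned image (base64-encoded bytes) to disk.
for i, pred in enumerate(resp.json().get("predictions", [])):
    with open(f"image_{i}.png", "wb") as f:
        f.write(base64.b64decode(pred["bytesBase64Encoded"]))

In the node, that maps to the same URL (note the :predict suffix), the key as a query parameter, and the instances/parameters structure as the JSON body instead of prompt/generationConfig.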

Could you explain this a little better? Imagen is getting more and more powerful for free, so I think implementing this could be a pretty big help to a lot of people.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.