Google Vertex Chat Model Doesn't Support Imagen Models

Describe the problem/error/question

I’m trying to use an AI Agent Node with the Google Vertex AI Chat Model.
I want to generate an image based on parameters in my workflow.

Here’s the Google Cloud Platform Documentation for the image generation models: Google models  |  Generative AI on Vertex AI  |  Google Cloud

I’ve tried specifying the available Imagen 3 and Imagen 2 models, but the workflow execution fails: all of the Imagen family models appear to be unsupported.

What is the error message (if any)?

Error in sub-node ‘imagen-3.0-generate-002’: Unsupported model

Please share your workflow

Share the output returned by the last node

{
  "errorMessage": "Error in sub-node ‘imagen-3.0-generate-002’",
  "errorDescription": "Unsupported model",
  "errorDetails": {},
  "n8nDetails": {
    "nodeName": "imagen-3.0-generate-002",
    "nodeType": "@n8n/n8n-nodes-langchain.lmChatGoogleVertex",
    "nodeVersion": 1,
    "itemIndex": 0,
    "time": "2/20/2025, 6:28:29 AM",
    "n8nVersion": "1.78.1 (Cloud)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeOperationError: Error in sub-node imagen-3.0-generate-002",
      "    at ExecuteContext.getInputConnectionData (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/utils/get-input-connection-data.js:87:23)",
      "    at ExecuteContext.getInputConnectionData (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/execute-context.js:36:16)",
      "    at ExecuteContext.toolsAgentExecute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/agents/Agent/agents/ToolsAgent/execute.js:67:19)",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/agents/Agent/Agent.node.js:383:20)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:633:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:882:51",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:1216:20"
    ]
  }
}

Information on your n8n setup

  • n8n version: 1.78.1
  • Database (default: SQLite): None
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: Mac OS

Hi! Currently the Agent node does not allow you to output anything other than text. To work around that, here are some options:

  1. Use a different node: we offer integrations with other image generation services. For example, you could use the OpenAI node to generate images using DALL-E / GPT-image-1.
  2. Custom HTTP Request: If you specifically need to use Google’s Imagen models, you can call the Google API directly with an HTTP Request node. This requires more setup, since you’ll need to handle authentication and construct the API call yourself.
  3. Feature Request: You could submit a feature request to add support for Google’s image generation models in future updates. (It might already be on there, because we have flagged this before…)
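For the HTTP Request route, here is a minimal sketch of what a call to the Vertex AI `predict` endpoint for Imagen might look like. The project ID and location are placeholders, and the endpoint path and payload shape are based on the public Vertex AI REST reference, so please verify them against the current Google docs before relying on this:

```python
import json

# Placeholder values -- substitute your own GCP project and region.
PROJECT_ID = "my-gcp-project"
LOCATION = "us-central1"
MODEL_ID = "imagen-3.0-generate-002"

def build_imagen_request(prompt: str, sample_count: int = 1):
    """Build the URL and JSON body for a Vertex AI Imagen predict call."""
    url = (
        f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
        f"projects/{PROJECT_ID}/locations/{LOCATION}/"
        f"publishers/google/models/{MODEL_ID}:predict"
    )
    body = {
        "instances": [{"prompt": prompt}],
        "parameters": {"sampleCount": sample_count},
    }
    return url, json.dumps(body)

url, body = build_imagen_request("A watercolor fox in a forest")
# POST this with the HTTP Request node, adding an
# "Authorization: Bearer <access token>" header; the response
# contains the generated image(s) as base64-encoded bytes.
```

In n8n you would paste the URL into an HTTP Request node set to POST, use the JSON body above, and supply a Google access token (for example via a Google service-account credential or `gcloud auth print-access-token`).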

Sorry I don’t have better news about using the Google models.