❗ Difficulty Using Vertex AI (Gemini) with n8n Self-Hosted – Streaming Response Breaks Workflow

Hello n8n Community,

I’m currently building a smart automation platform called NextGen Varejo: Control and Intelligence, designed for Brazilian retailers such as supermarkets and pharmacies. The platform integrates directly with the ERP system and is powered by multiple specialized AI Agents (Inventory, Finance, Marketing, Customer, Social Media), all orchestrated by a central module called the MCP – Model Context Protocol.

At the heart of the AI engine, we use Vertex AI (Gemini) from Google Cloud for natural language understanding and generation, embedded directly into “Entry Agents” – smart gateways that route contextual requests to the appropriate AI agents. All of this is visually orchestrated in n8n self-hosted (v1.101.2).

:bullseye: The Problem

When using the Google Gemini and AI Agent nodes from the @n8n/n8n-nodes-langchain package, we encounter this error:

"Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')"

This happens when the Gemini model tries to return a streaming response using the streamGenerateContent method – which appears to be improperly handled by the current version of the node.
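To make the failure mode concrete, here is a minimal standalone reproduction (this is not n8n’s actual source, just a sketch of the pattern): the node apparently expects the SDK to hand back an object with an async-iterable `stream` property, and when that property comes back `undefined`, merely touching `Symbol.asyncIterator` on it produces exactly the error message above.

```javascript
// Minimal reproduction of the TypeError (NOT n8n's actual code).
// Assumption: the node consumes a response shaped like { stream: <async iterable> }.
async function consumeStream(response) {
  // Equivalent to `for await (const chunk of response.stream)`:
  // accessing Symbol.asyncIterator on undefined throws the reported error.
  const iterator = response.stream[Symbol.asyncIterator]();
  const chunks = [];
  for (let r = await iterator.next(); !r.done; r = await iterator.next()) {
    chunks.push(r.value);
  }
  return chunks.join("");
}

async function main() {
  // A well-formed streaming response works fine:
  const ok = await consumeStream({
    stream: (async function* () { yield "Hello, "; yield "world"; })(),
  });
  console.log(ok); // "Hello, world"

  // A response with no `stream` property reproduces the reported error:
  try {
    await consumeStream({});
  } catch (err) {
    console.log(err.message);
    // "Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')"
  }
}

main();
```

In other words, the Vertex AI response reaches the node, but whatever field the node iterates over is never populated for this model/endpoint combination.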

:white_check_mark: What We’ve Verified

  • Authentication with Vertex AI via Service Account is fully working;
  • The Gemini model is active in the us-central1 region;
  • The Google API is being called successfully (this is not a Google Cloud error);
  • The correct endpoint (generateContent or streamGenerateContent) is being used;
  • The error happens only when the n8n node tries to process the streaming response.

:hammer_and_wrench: Workarounds in Progress

  • We’re building a dedicated sub-workflow using the HTTP Request node, manually calling the Gemini REST API as a fallback;
  • This sub-workflow will be connected via Execute Workflow to various AI Agents, until the native node handles streaming properly.
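For anyone building a similar fallback, this is roughly the request our HTTP Request sub-workflow sends. It targets the non-streaming `generateContent` endpoint, which sidesteps the broken stream handling entirely. The project ID and model name below are placeholders, not values from our setup, and the endpoint shape is the publicly documented Vertex AI REST path:

```javascript
// Sketch of the REST fallback. Assumes the public Vertex AI endpoint shape;
// projectId and model below are placeholders.
function buildGeminiRequest({ projectId, region, model, prompt }) {
  // Non-streaming endpoint: generateContent instead of streamGenerateContent.
  const url =
    `https://${region}-aiplatform.googleapis.com/v1` +
    `/projects/${projectId}/locations/${region}` +
    `/publishers/google/models/${model}:generateContent`;

  const body = {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
  };

  return { url, body };
}

// In an n8n Code node, the request could then be sent with fetch or
// this.helpers, passing a Service Account access token in the
// Authorization header:
const req = buildGeminiRequest({
  projectId: "my-gcp-project",   // placeholder
  region: "us-central1",
  model: "gemini-1.5-pro",       // placeholder model name
  prompt: "Suggest a promotion for slow-moving stock",
});
console.log(req.url);
```

The generated text is then typically found under `candidates[0].content.parts[0].text` in the JSON response, which is easy to map back into the workflow with a Set node.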

:light_bulb: Why This Matters

This is part of a live retail automation system aimed at optimizing stock, marketing, communication and pricing decisions. Vertex AI is essential to generate real-time insights, customer content and campaign ideas. For this to work in production, reliable streaming response support is crucial.


If anyone in the community has experienced this issue or found a more efficient workaround (without losing the benefits of streaming), your help would be greatly appreciated!

Thanks in advance to the amazing n8n community – your platform is incredibly powerful when paired with Google Cloud.

Best regards,
Elton Nunes dos Santos
NextGen Varejo | Belo Horizonte, Brazil
LinkedIn


Ahh, I keep experiencing the same issue. Any update on this?

I’ve had the same issue. I assume you have a Structured Output Parser node connected to your Basic LLM Chain node with Auto-Fix enabled.
The fix was extremely easy for me: delete the Structured Output Parser node, create a new one, and connect it to the Basic LLM Chain with the same structured JSON schema. This fixed it for me. ¯\_(ツ)_/¯

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.