I am consistently running into a "Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')" error when trying to use the AI Agent node with the Google Vertex Chat Model.
This error persists even in a minimal test case, which suggests a deeper issue or bug.
My Setup:
n8n Version: 1.102.3
Hosting: Self-Hosted on Railway
Node: AI Agent
Chat Model: Google Vertex Chat Model using gemini-1.5-pro
What I have tried so far (extensive debugging):
Simplified the workflow to a single, standalone AI Agent node.
Used a simple, hardcoded prompt ("Schrijf een kort gedicht over een fiets" — "Write a short poem about a bicycle") with no variables. The error still occurs.
Checked Google Cloud Permissions:
The Vertex AI API is enabled.
The Cloud Resource Manager API is enabled.
The Service Account has been granted both the Vertex AI User and Service Account Token Creator roles.
Checked Node Configuration:
The correct Google Cloud Project ID is selected.
Toggling the "Require Specific Output Format" option on and off makes no difference.
This leads me to believe it's not a configuration error on my end, but potentially a bug.
I came across this same issue today when doing Vertex testing. I found that in the Google Vertex Chat Model, the default Model Name n8n populates is "gemini-1.5-flash". When I change it to "gemini-2.5-flash" it works as expected. I suspect the issue is likely with your model name not matching what Vertex expects.
Iâm running a very simple test workflow with basic nodes: a Google Vertex node connected to a chat input. However, even though my Google Vertex API is configured correctly, I keep getting the following error:
Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')
The thing is, Vertex AI has specific model names that you need to use, and "gemini-2.5-flash" isn't one of them. Instead, you can use models like "gemini-1.5-flash-001" or "gemini-1.5-pro-001".
I tried the AI Agent and the Basic LLM Chain, and I tried different versions of Gemini 1.5 and 2.5, and I get the same "asyncIterator" error every time. HTTP Requests to my Vertex AI environment do work, but that is definitely a lot more work than the Vertex AI node. Anybody found a fix yet? Hopefully this gets fixed in the next releases.
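For anyone going the HTTP Request route: a minimal sketch of the Vertex AI `generateContent` REST endpoint such a node would call. The project ID, region, and model below are placeholder values, and you would still need to supply a bearer token (e.g. from `gcloud auth print-access-token`) in the Authorization header.

```javascript
// Build the Vertex AI generateContent URL for a given project/region/model.
// All three arguments are placeholders here -- substitute your own values.
function vertexGenerateContentUrl(projectId, region, model) {
  return (
    `https://${region}-aiplatform.googleapis.com/v1` +
    `/projects/${projectId}/locations/${region}` +
    `/publishers/google/models/${model}:generateContent`
  );
}

// Example request body for a simple one-turn chat prompt:
const body = {
  contents: [
    { role: 'user', parts: [{ text: 'Write a short poem about a bicycle' }] },
  ],
};

console.log(vertexGenerateContentUrl('my-project', 'europe-west4', 'gemini-1.5-pro'));
```

In the HTTP Request node you would POST `body` as JSON to that URL with an `Authorization: Bearer <token>` header.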
I had the same issue and tested different model names (all names tested are valid in the official Google Python SDK, google-genai). Some models work but some don't.
Please upvote this bug report on Github to possibly get it fixed sooner.
I don't know if this helps anymore, but for other readers: if the model is not hosted in the specified region (for example, you can't use 2.5 Pro in europe-west3 / Frankfurt), then you will get the same error. Sometimes it helped to reassign the correct project (by ID instead of picking it from your list).
I guess @jensus is right.
There is already an open GitHub issue. In the code, n8n sets the region via the credentials used. As stated in the Google docs, model availability for e.g. europe-west3 (Frankfurt) is very restricted. Therefore you can set the region in the Google Vertex credentials used for the model (e.g. to "europe-west4").
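To illustrate the region issue: since n8n takes the region from the credentials, a model that is not hosted in that region fails with the misleading asyncIterator error rather than a clear "model not available" message. The availability map below is purely illustrative (the entries are made up for the example — always check Google's region availability page), but a check like this shows the kind of validation that would catch the problem up front.

```javascript
// Illustrative (made-up) map of which models are hosted in which region.
// Real availability must be taken from Google's Vertex AI region docs.
const MODELS_BY_REGION = {
  'europe-west3': ['gemini-1.5-flash-001'],                        // restricted region
  'europe-west4': ['gemini-1.5-flash-001', 'gemini-1.5-pro-001'],  // broader availability
};

// Returns true only if the model is listed for the given region.
function isModelAvailable(region, model) {
  return (MODELS_BY_REGION[region] ?? []).includes(model);
}
```

With a map like this, `isModelAvailable('europe-west3', 'gemini-1.5-pro-001')` returns false, which is exactly the region/model mismatch that triggers the error in this thread.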
In case this is helpful for anyone else who comes across this thread: I was having the same issue, and it turned out that the Service Account required an associated role (I used "Viewer") for whatever reason.
I am having a similar issue, but only when I have a Postgres Chat Memory node connected to the same AI Agent as the Vertex AI model. When I disconnect the Postgres Chat Memory node, it works fine.
You may or may not have the same root cause, but most likely "something connected" is returning a normal error code, and the n8n node can't handle normal error codes, so it bugs out and produces its own unrelated error, which tricks you into troubleshooting unrelated issues.
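That theory matches the error text exactly. A minimal sketch (not n8n's actual code) of how this happens: if an upstream call fails and the stream object ends up `undefined`, reading its `Symbol.asyncIterator` property throws precisely this TypeError, swallowing whatever the real error was.

```javascript
// Minimal reproduction: reading Symbol.asyncIterator off undefined
// produces the exact error message seen throughout this thread.
const stream = undefined; // stands in for a failed/missing API response

try {
  void stream[Symbol.asyncIterator]; // what a `for await` loop does internally
} catch (err) {
  console.log(err.message);
  // -> Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')
}
```

So the asyncIterator message is a symptom of the response being missing, not the underlying cause — which is why it shows up for bad model names, wrong regions, and permission problems alike.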