Gemini Error

I'm experiencing an error with the Gemini 2.5 Pro model: there is no output.

Describe the problem/error/question

The Gemini 2.5 Pro model suddenly returns no output. I already checked my daily quota and still have plenty left, so it's not a quota problem.

What is the error message (if any)?

There is no error message, but this is the output:

{
  "response": {
    "generations": [
      [
        {
          "text": "",
          "generationInfo": {
            "finishReason": "STOP",
            "index": 0
          }
        }
      ]
    ]
  },
  "tokenUsageEstimate": {
    "completionTokens": 0,
    "promptTokens": 15154,
    "totalTokens": 15154
  }
}
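For reference, this failure mode is detectable in code: a `finishReason` of "STOP" combined with an empty `text` field. A minimal Python sketch against the payload shape shown above (the structure is taken directly from the JSON in this post; nothing else is assumed):

```python
def is_silent_failure(payload):
    """True when the model finished 'normally' (finishReason STOP)
    but produced no text - the empty-output case shown above."""
    generations = payload.get("response", {}).get("generations", [])
    for batch in generations:
        for gen in batch:
            info = gen.get("generationInfo", {})
            if gen.get("text", "") == "" and info.get("finishReason") == "STOP":
                return True
    return False

# The payload from this post:
payload = {
    "response": {
        "generations": [[
            {"text": "", "generationInfo": {"finishReason": "STOP", "index": 0}}
        ]]
    },
    "tokenUsageEstimate": {
        "completionTokens": 0, "promptTokens": 15154, "totalTokens": 15154
    },
}
```

A check like this could gate an IF node so the workflow can branch or retry instead of silently passing an empty string downstream.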

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.106.3
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: windows

Hi FB,

Difficult to judge without seeing the workflow, but the error shown in the screenshot may be because you have “Require specific output format” set. I would suggest testing with that turned off to make sure the model is generating some output from your transcript, and then take it from there.

I have tried specific output formats for a few workflows. When the objective is a specific JSON output format, I have needed to specify the format in the prompt - with an example - as well as attaching a structured output parser.
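For illustration, specifying the JSON format in the prompt itself might look something like this (the schema and the `transcript` value are hypothetical placeholders, not anything from the workflow in question):

```python
# Hypothetical example: describing the required JSON format directly
# in the prompt, alongside an example, before attaching a parser.
transcript = "...your transcript text here..."

schema_example = '{"title": "string", "summary": "string", "key_points": ["string"]}'

prompt = (
    "Summarise the transcript below.\n"
    "Respond ONLY with valid JSON matching this format, with no extra prose:\n"
    f"{schema_example}\n\n"
    "Transcript:\n"
    f"{transcript}"
)
```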

Hope that helps.

Simon

I don’t think it’s because of “Require specific output format”. Here is a screenshot without “Require specific output format”: the output is still empty. It happens occasionally, without any warning. I think this may be another Gemini bug in 2.5 Pro.

Yeah, I suppose that’s possible.

Are you able to try it with another model like 2.5-flash (should still give decent results for a summarisation task).

S

The fact that you get a “STOP” finish reason tells me the workflow stopped because of an error. What the error is, though, is not immediately evident.

My first hunch went to the Require Specific Output setting, but it looks like you’ve eliminated that.

Not sure if this is happening in your case, but in mine, I got shut down because I was sending “too many” requests. Mind you, this is not too many tokens; the cloud provider simply thought I was sending too many requests.

Typically, between the client and server there are many routers that check whether a denial-of-service attack is happening. Not that I was attempting one, but I was shut down anyway. That’s where my hunch was: when I waited a while and tried again, it worked.

You may be able to run this again tomorrow if it is a DoS filter or a spam filter rejecting the source IP.

I can’t rule out Gemini, but it could also be a transient issue that disappears over time and reappears depending on your testing volumes.

Not sure this helps, but just sharing my experience and intuition. Good luck.

Turns out it’s just a Gemini 2.5 Pro bug. Not fixed by Google until now, I guess.

Sorry to hear about your experience. I’m not sure this is fixed. I’ve had the same troubles with my current workflow/s: they regularly – and seemingly randomly – return empty or zero-result outputs without throwing explicit errors.

I have two Gemini 2.5 Pro nodes across two connected workflows, and they ‘fail silently’ as often as they succeed. During one run, one will fail and not the other. Or vice versa. Or both will pass. Sometimes both fail.

I’m curious as to whether you rebuilt the workflow in question to resolve the issue – or did it just ‘come right’ of its own accord? I’ve been through so many attempted fixes – from relaxing safety settings to reducing input token count – but nothing appears to work. Google Cloud Console throws up no useful reporting on this… Thanks in advance.

Well, I need Gemini Pro for creative use. Right now I’m using 2.5 Flash. I saw on the community that you can use a unique prefix, but for me it’s still a 30/70 chance (70% fail). If I really need 2.5 Pro, I usually build a loop workflow that executes again on error, until it succeeds. But right now 2.5 Flash is still good enough for me personally.
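The loop-until-success approach described above can be sketched in plain Python too. Here `call_gemini` is a hypothetical stand-in for whatever model call the workflow makes; it is assumed to return the generated text, which may be empty:

```python
import time

def call_with_retry(call_gemini, prompt, max_attempts=5, base_delay=2.0):
    """Retry a model call until it returns non-empty text,
    with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        text = call_gemini(prompt)  # hypothetical model call; may return ""
        if text:
            return text
        # Empty output with no error reported: wait, then try again.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"Model returned empty output {max_attempts} times in a row")
```

The backoff also plays nicely with the rate-limiting hunch mentioned earlier in the thread: if the empty responses are throttling-related, spacing out retries gives them a better chance of succeeding.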

Thanks for that – I’ve switched to 2.5 Flash. For the time being, it’s better than no output. :grinning_face:

I commented on this earlier from a network perspective, but through my own experimentation I have run into this problem as well.

The solution for me was:

  1. Ensure you are on the correct “billable” Google Cloud project, else this won’t work.
  2. Ensure you have granted the “Vertex User” and/or “Vertex Admin” permissions on your project - Gemini apparently uses these, and if this fails, it just doesn’t respond. That’s what happened to me.
  3. Ensure the API permissions are enabled on the cloud project you are using Gemini on.

It looks like this is still an intermittent problem for you. At some context length, or when attachments etc. are added to the chat, Vertex kicks in; simple chats may work. Perhaps that explains the transient nature of this.

Let me know if any of the above helps. I can try to share the exact steps if needed. If it’s working, good luck.