Problem using Gemini Chat Model in an agent with Output Parser

Describe the problem/error/question

I’m sending the text of a book to an agent to analyse and split into chapters. With the OpenAI node it works, but with the Gemini node I get a parsing error.

I need to use Gemini because if the book is too big it exceeds the OpenAI token limit.

What is the error message (if any)?

[
  {
    "code": "invalid_type",
    "expected": "object",
    "received": "string",
    "path": [ "output" ],
    "message": "Expected object, received string"
  }
]

My workflow

Share the output returned by the last node

When using Gemini, the Auto-fixing Output Parser returns this:

[
  {
    "action": "parse",
    "text": "{\"output\":\"unknown\"}"
  }
]
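This output explains the zod-style validation error: Gemini returns its answer as a JSON-encoded string, so after one parse the `output` field is still a string, not an object. A minimal sketch of the mismatch (values taken from the output above):

```javascript
// Gemini's "text" field holds JSON as a string:
const geminiText = '{"output":"unknown"}';

// One parse yields an object, but its `output` field is still a string:
const parsed = JSON.parse(geminiText);
console.log(typeof parsed.output); // "string" -> "Expected object, received string"
```

The schema expects `output` to be an object (e.g. one containing `capitulos`), so validation fails before the Split Out node ever runs.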

When using OpenAI, it returns this:

[
  {
    "action": "parse",
    "response": {
      "output": {
        "capitulos": [
          { "numero": 1, "titulo": "El principito" },
          { "numero": 2, "titulo": "Índice" },
          ...
        ]
      }
    }
  }
]

Information on your n8n setup

  • n8n version: 1.78.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: Windows

You can give this revised workflow structure a try.

  1. Keep the initial steps (Google Drive, Extract from File, Clean Text) as they are.
  2. Modify the connections after the AI Agent node:
  • Connect the AI Agent node’s “main” output ONLY to the Auto-fixing Output Parser node.
  • Connect the Auto-fixing Output Parser node’s “ai_outputParser” output to the Split Out node. Configure the Split Out node to split $.capitulos. The Auto-fixing parser should ideally output an object where the “capitulos” array is directly at the top level of the json data.
  • You can optionally keep the Structured Output Parser and the second Google Gemini Chat Model (Google Gemini Chat Model5) in a separate branch for more robust error handling, or for cases where the auto-fixing parser isn’t successful. If needed, a “Merge” node could combine the output of the auto-fixing branch with a corrected output from the second Gemini model.
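As a fallback to the wiring above, you could also normalise the agent’s output in a Code node placed before Split Out. This is only a sketch; the function name is illustrative, and the item shapes mirror the outputs shown earlier in this thread:

```javascript
// Hypothetical helper for an n8n Code node: unwrap the agent output so that
// Split Out always receives a plain array of chapters.
function extractCapitulos(item) {
  let data = item.output ?? item;
  // If the model returned a JSON string instead of an object, parse it once.
  if (typeof data === "string") {
    data = JSON.parse(data);
  }
  // Accept both shapes: { capitulos: [...] } and { output: { capitulos: [...] } }
  return data.capitulos ?? data.output?.capitulos ?? [];
}

// Object-shaped output (OpenAI-style) and string-shaped output (Gemini-style):
const fromObject = extractCapitulos({ output: { capitulos: [{ numero: 1, titulo: "El principito" }] } });
const fromString = extractCapitulos({ output: '{"capitulos":[{"numero":2,"titulo":"Índice"}]}' });
console.log(fromObject.length, fromString.length); // 1 1
```

With something like this in place, the Split Out field can simply be the returned array, regardless of which model produced the item.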

Reasoning for the Fix:

  • By directing the AI Agent’s output to the Auto-fixing Output Parser, you are explicitly trying to ensure the output is a valid JSON object that conforms (or is corrected to conform) to the expected structure.

  • Connecting the parsed output of the Auto-fixing Output Parser to the Split Out node means the Split Out node should receive an object (with the “capitulos” array at the top level, or accessible via $.capitulos) rather than a raw string from the LLM.

Considerations:

  • Output of Auto-fixing Parser: Examine the output structure of the Auto-fixing Output Parser. It might place the “capitulos” array directly in the json property of the output items, in which case the “Field to Split Out” in the Split Out node should be $.capitulos. If it’s nested under an “output” property, then it would be $.output.capitulos.
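To decide between the two Split Out paths, you can inspect the shape of a parser output item. A small illustrative check (the sample item here is an assumption modelled on the OpenAI output shown above):

```javascript
// Sample item shaped like the parser output shown earlier (an assumption):
const parserItem = { output: { capitulos: [{ numero: 1, titulo: "El principito" }] } };

const topLevel = Array.isArray(parserItem.capitulos);        // true would mean use $.capitulos
const nested = Array.isArray(parserItem.output?.capitulos);  // true would mean use $.output.capitulos
console.log(topLevel, nested); // false true
```

For an item shaped like this, $.output.capitulos would be the correct “Field to Split Out”.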