Does the Structured Output Parser use LLM API schema field?

Hey everyone, I was running a simple prompt with a basic LLM chain and a structured output parser. After 10-15 runs, one failed because it produced invalid JSON. My JSON schema is straightforward:

{
  "type": "object",
  "properties": {
    "schedule": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "hour": {
            "type": "integer",
            "minimum": 0,
            "maximum": 23,
            "description": "uses 24 hour format"
          },
          "minute": {
            "type": "integer",
            "minimum": 0,
            "maximum": 59
          },
          "second": {
            "type": "integer",
            "minimum": 0,
            "maximum": 59
          },
          "task": {
            "type": "string",
            "enum": ["search_people", "view_person", "connect_person"]
          }
        },
        "required": ["hour", "minute", "second", "task"],
        "additionalProperties": false
      }
    }
  },
  "required": ["schedule"],
  "additionalProperties": false
}
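For reference, a value that conforms to this schema would look like the following (the times and tasks here are made up):

```json
{
  "schedule": [
    { "hour": 9,  "minute": 30, "second": 0, "task": "search_people" },
    { "hour": 14, "minute": 0,  "second": 0, "task": "connect_person" }
  ]
}
```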

I checked the logs and noticed my prompt had this appended:

You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

...

Does this mean that n8n is not sending the JSON schema through the specific schema field in my LLM provider’s API? For example, OpenRouter’s REST API has a response_format field like this:

{
  "messages": [
    { "role": "user", "content": "What's the weather like in London?" }
  ],
  "response_format": { <----------- specific schema field
    "type": "json_schema",
    "json_schema": {
      "name": "weather",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "temperature": {
            "type": "number",
            "description": "Temperature in Celsius"
          }
        },
        "required": ["temperature"],
        "additionalProperties": false
      }
    }
  }
}
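To make the comparison concrete, here is a sketch of what a request that passes the schedule schema through `response_format` (instead of appending it to the prompt) might look like. This is an illustration, not what n8n actually sends; the model name is an assumption, and the payload is only constructed, not sent.

```python
import json

# Hypothetical sketch: build an OpenRouter-style chat-completions payload
# that submits the schedule schema via response_format. The model name is
# an assumption; this does not reflect n8n's actual request.
schedule_schema = {
    "type": "object",
    "properties": {
        "schedule": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "hour": {"type": "integer", "minimum": 0, "maximum": 23},
                    "minute": {"type": "integer", "minimum": 0, "maximum": 59},
                    "second": {"type": "integer", "minimum": 0, "maximum": 59},
                    "task": {
                        "type": "string",
                        "enum": ["search_people", "view_person", "connect_person"],
                    },
                },
                "required": ["hour", "minute", "second", "task"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["schedule"],
    "additionalProperties": False,
}

payload = {
    "model": "openai/gpt-4o-mini",  # assumption: any model with structured-output support
    "messages": [
        {"role": "user", "content": "Plan three tasks for tomorrow morning."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "schedule",
            "strict": True,
            "schema": schedule_schema,
        },
    },
}

# The payload serializes to plain JSON, ready for a POST to the provider.
body = json.dumps(payload)
```

With `strict: true`, providers that support this field constrain decoding so the model cannot emit JSON outside the schema, which is exactly the guarantee a prompt-only instruction cannot give.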

If so, does that make the structured output parser unreliable?

Don’t rely entirely on the Structured Output Parser for critical flows.

Design additional verification flows: use a Function node or Code node to manually parse and validate the output, or use a second LLM call (for example, with a hardened prompt) to confirm that the structure is valid.
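As a sketch of that manual-validation step, the following stdlib-only Python function (a hypothetical helper, e.g. for an n8n Code node running Python) rejects any output that deviates from the schedule schema instead of passing bad data downstream:

```python
import json

def validate_schedule(raw: str) -> list:
    """Parse and validate LLM output against the schedule schema.

    Raises ValueError (or json.JSONDecodeError) on any deviation.
    """
    allowed_tasks = {"search_people", "view_person", "connect_person"}
    data = json.loads(raw)  # invalid JSON raises here
    if not isinstance(data, dict) or set(data) != {"schedule"}:
        raise ValueError("top level must be an object with only a 'schedule' key")
    schedule = data["schedule"]
    if not isinstance(schedule, list):
        raise ValueError("'schedule' must be an array")
    bounds = {"hour": 23, "minute": 59, "second": 59}
    for entry in schedule:
        if not isinstance(entry, dict) or set(entry) != {"hour", "minute", "second", "task"}:
            raise ValueError(f"unexpected entry shape: {entry!r}")
        for field, upper in bounds.items():
            value = entry[field]
            # bool is a subclass of int in Python, so exclude it explicitly
            if not isinstance(value, int) or isinstance(value, bool) or not 0 <= value <= upper:
                raise ValueError(f"{field} out of range: {value!r}")
        if entry["task"] not in allowed_tasks:
            raise ValueError(f"unknown task: {entry['task']!r}")
    return schedule

ok = validate_schedule(
    '{"schedule": [{"hour": 9, "minute": 30, "second": 0, "task": "view_person"}]}'
)
```

The same idea works in a JavaScript Code node; the point is that validation happens in your workflow, not in the model.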

Stay tuned for future updates: there are open feature requests for n8n to use LangChain's with_structured_output and the providers' native structured-output APIs, which would let the schema be enforced at the API level rather than only instructed in the prompt.
