Hey everyone, I was running a simple prompt through a basic LLM chain with a structured output parser. After 10-15 runs, one failed because the model produced invalid JSON. My JSON schema is straightforward:
{
  "type": "object",
  "properties": {
    "schedule": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "hour": {
            "type": "integer",
            "minimum": 0,
            "maximum": 23,
            "description": "uses 24 hour format"
          },
          "minute": {
            "type": "integer",
            "minimum": 0,
            "maximum": 59
          },
          "second": {
            "type": "integer",
            "minimum": 0,
            "maximum": 59
          },
          "task": {
            "type": "string",
            "enum": ["search_people", "view_person", "connect_person"]
          }
        },
        "required": ["hour", "minute", "second", "task"],
        "additionalProperties": false
      }
    }
  },
  "required": ["schedule"],
  "additionalProperties": false
}
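While debugging, I bolted on a quick stdlib-only check that mirrors the schema's constraints, so I can spot bad runs without relying on the parser. This is my own lightweight stand-in for a real JSON Schema validator, not anything n8n does:

```python
import json

TASKS = {"search_people", "view_person", "connect_person"}

def validate_schedule(raw: str) -> bool:
    """Parse the model output and verify it against the schema's rules:
    a top-level "schedule" array whose entries have exactly the four
    required keys, in-range integers, and a task from the enum."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    entries = data.get("schedule")
    if not isinstance(entries, list):
        return False
    for e in entries:
        # required + additionalProperties: false => exactly these keys
        if not isinstance(e, dict) or set(e) != {"hour", "minute", "second", "task"}:
            return False
        if not (isinstance(e["hour"], int) and 0 <= e["hour"] <= 23):
            return False
        if not all(isinstance(e[k], int) and 0 <= e[k] <= 59 for k in ("minute", "second")):
            return False
        if e["task"] not in TASKS:
            return False
    return True

print(validate_schedule('{"schedule": [{"hour": 9, "minute": 30, "second": 0, "task": "view_person"}]}'))  # True
print(validate_schedule('{"schedule": [{"hour": 9, "minute": 30'))  # False (truncated JSON, like my failing run)
```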
I checked the logs and noticed my prompt had this appended:
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.
"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.
...
Does this mean that n8n is not sending the JSON schema through the specific schema field in my LLM provider’s API? For example, OpenRouter’s REST API has a response_format field like this:
{
  "messages": [
    { "role": "user", "content": "What's the weather like in London?" }
  ],
  "response_format": {                <----------- specific schema field
    "type": "json_schema",
    "json_schema": {
      "name": "weather",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "temperature": {
            "type": "number",
            "description": "Temperature in Celsius"
          }
        },
        "required": ["temperature"],
        "additionalProperties": false
      }
    }
  }
}
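To check whether n8n is the culprit, my plan is to reproduce the call outside n8n with response_format populated and see if the failure rate changes. A rough sketch of what I'd send (the endpoint path, model name, and prompt here are just my assumptions; adjust for your setup):

```python
import json
import urllib.request

# Assumed OpenRouter chat-completions endpoint; double-check against their docs.
URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, schema: dict) -> urllib.request.Request:
    """Build a request that passes the JSON schema through the dedicated
    response_format field instead of appending it to the prompt text."""
    payload = {
        "model": "openai/gpt-4o-mini",  # placeholder model
        "messages": [{"role": "user", "content": "Generate a schedule."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "schedule", "strict": True, "schema": schema},
        },
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    )

# req = build_request("sk-or-...", my_schema)
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```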
If so, does that make the structured output parser unreliable?
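As a stopgap I'm considering wrapping the chain call in a parse-and-retry loop. `call_llm` below is a placeholder for whatever actually invokes the model, not an n8n or provider API:

```python
import json

def generate_json(call_llm, prompt: str, max_retries: int = 3) -> dict:
    """Call the model and re-prompt on invalid JSON, up to max_retries."""
    last_error = None
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc
            # Feed the parse error back so the model can correct itself.
            prompt = (
                f"{prompt}\n\nYour previous reply was not valid JSON "
                f"({exc}). Reply with JSON only, no extra text."
            )
    raise ValueError(f"no valid JSON after {max_retries} attempts: {last_error}")
```

This doesn't fix the root cause, but it turns a rare hard failure into an extra round trip.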