Structured output parser not parsing simple JSON

Describe the problem/error/question

I have a simple workflow where I check for a very simple JSON expression. You can see on the screenshot that the parser simply does not parse the output that the LLM clearly returns (bottom left). It always returns an empty object.

What is the error message (if any)?

None

Please share your workflow

{
  "nodes": [
    {
      "parameters": {
        "promptType": "define",
        "text": "=Think step by step. First state yourself as a expert on the discussed matter. Reply on this request, solve it completely. THe request is: {{ $json.chatInput }}. ",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [-360, 240],
      "id": "87c3d1c9-8d0c-48f3-9bf6-4fe72e618266",
      "name": "Primary research"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "=We have a invalid reply to user request {{ $('When chat message received').item.json.chatInput }} has been answered with this answer {{ $('Primary research').item.json.output }}. Think through again trying to correct the answer using the knowledge of the incorrect answer and provide correct one.",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [0, 0],
      "id": "bc4b88db-a4ab-4099-9ab4-549a680f7e3f",
      "name": "Secondary research agent"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "=REQUEST:\nThink step by step validate if question \"{{ $('When chat message received').item.json.chatInput }}\" has been answered correctly in this satement: \n\"\n{{ $json.output }}\n\". \n\nRETURN FORMAT:\nAlways answer and always return a JSON.\n\nYour final output will then be:\n\n- if the question has been answered correctly in the statement return decision as \"true\"\n{\"decision\": true}\n\n- if the statement is not corretly answering the question or you are not sure or cannot say. Return decision as \"false\"\n{\"decision\": false}\n\nYou must always return a decision. Don't use \\n in the JSON output.",
        "hasOutputParser": true,
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [300, 240],
      "id": "8423c114-8702-4b41-86b0-f49f590314c8",
      "name": "LLM if decisision1"
    },
    {
      "parameters": {
        "jsonSchemaExample": "{\n  \"type\": \"object\",\n  \"properties\": {\n    \"decision\": {\n      \"type\": \"boolean\"\n    }\n  },\n  \"required\": [\"decision\"]\n}"
      },
      "type": "@n8n/n8n-nodes-langchain.outputParserStructured",
      "typeVersion": 1.2,
      "position": [760, 680],
      "id": "9924f587-1d1f-402f-8745-f88b33dedb11",
      "name": "Structured Output Parser1"
    },
    {
      "parameters": {
        "rules": {
          "values": [
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "loose",
                  "version": 2
                },
                "conditions": [
                  {
                    "leftValue": "={{ $json.output.properties.decision }}",
                    "rightValue": "false",
                    "operator": {
                      "type": "boolean",
                      "operation": "false",
                      "singleValue": true
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "False"
            },
            {
              "conditions": {
                "options": {
                  "caseSensitive": true,
                  "leftValue": "",
                  "typeValidation": "loose",
                  "version": 2
                },
                "conditions": [
                  {
                    "id": "a63d1fbe-554a-4f1e-b91f-0a4741d735ce",
                    "leftValue": "={{ $json.output.properties.decision }}",
                    "rightValue": "true",
                    "operator": {
                      "type": "boolean",
                      "operation": "true",
                      "singleValue": true
                    }
                  }
                ],
                "combinator": "and"
              },
              "renameOutput": true,
              "outputKey": "True"
            }
          ]
        },
        "looseTypeValidation": true,
        "options": {}
      },
      "type": "n8n-nodes-base.switch",
      "typeVersion": 3.2,
      "position": [820, 400],
      "id": "aeac6181-d136-4f1c-bc23-45a7406b1295",
      "name": "Switch1"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [1040, 380],
      "id": "3ff9a2e5-46d8-4841-a6ef-9e8aef24dfff",
      "name": "Loop Over Items1"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [-560, 240],
      "id": "c60c5982-41d3-4c4b-b8cf-36e9d30fc8e0",
      "name": "When chat message received",
      "webhookId": "c94f0a0e-f388-4606-b2eb-87c7982c477f"
    },
    {
      "parameters": {
        "modelName": "models/gemini-2.0-pro-exp",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
      "typeVersion": 1,
      "position": [140, 680],
      "id": "fa677535-7e2b-4550-9db4-b454a6558def",
      "name": "Google Gemini Chat Model",
      "credentials": {
        "googlePalmApi": {
          "id": "v3KOvROrkW9qul7y",
          "name": "Google Gemini(PaLM) Api account"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.outputParserAutofixing",
      "typeVersion": 1,
      "position": [480, 460],
      "id": "a9801d06-4f66-41f2-9903-3716cb9cb7cf",
      "name": "Auto-fixing Output Parser"
    }
  ],
  "connections": {
    "Primary research": {
      "main": [
        [
          {
            "node": "LLM if decisision1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Secondary research agent": {
      "main": [
        [
          {
            "node": "LLM if decisision1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "LLM if decisision1": {
      "main": [
        [
          {
            "node": "Switch1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Structured Output Parser1": {
      "ai_outputParser": [
        [
          {
            "node": "Auto-fixing Output Parser",
            "type": "ai_outputParser",
            "index": 0
          }
        ]
      ]
    },
    "Switch1": {
      "main": [
        [
          {
            "node": "Loop Over Items1",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Loop Over Items1": {
      "main": [
        null,
        [
          {
            "node": "Secondary research agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "When chat message received": {
      "main": [
        [
          {
            "node": "Primary research",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Google Gemini Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "Primary research",
            "type": "ai_languageModel",
            "index": 0
          },
          {
            "node": "Secondary research agent",
            "type": "ai_languageModel",
            "index": 0
          },
          {
            "node": "LLM if decisision1",
            "type": "ai_languageModel",
            "index": 0
          },
          {
            "node": "Auto-fixing Output Parser",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Auto-fixing Output Parser": {
      "ai_outputParser": [
        [
          {
            "node": "LLM if decisision1",
            "type": "ai_outputParser",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "49830d6f8cf0021425d9fba461b06d88132504abf5fcd92f52fb08503842302d"
  }
}

Share the output returned by the last node

Information on your n8n setup

using the latest cloud version

It looks like your topic is missing some important information. Could you provide the following if applicable.

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

So I know the solution: I used an LLM that did not have tool support (Gemini 2.0 Pro).
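For context on what was failing: the decision agent's reply has to be reduced to `{"decision": true}` or `{"decision": false}` before the Switch can branch on it. A minimal Python sketch of what the structured parsing step is expected to do with the raw model text (the function name and the fence-stripping regex are my own illustration, not n8n's code):

```python
import json
import re


def parse_decision(raw_reply: str) -> bool:
    """Extract the {"decision": bool} object the decision agent is
    prompted to return. Models often wrap JSON in ```json fences or
    surround it with prose, so pull out the first {...} span."""
    match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    data = json.loads(match.group(0))
    decision = data.get("decision")
    if not isinstance(decision, bool):
        raise ValueError("reply is missing a boolean 'decision' key")
    return decision
```

With a tool-capable model the parser gets clean JSON and this succeeds; with a model that ignores the format instructions, the extraction finds nothing to parse, which matches the empty-object symptom.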


Hi @Vaclav_Soukup, great that you found out you can use better models to get proper output. I’d also urge you to look at the output parser: you entered a JSON schema, while the mode is set to “generate from JSON example”.
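For anyone hitting the same mismatch: in “generate from JSON example” mode the parser derives a schema from a sample payload, so pasting a schema into that field makes the derived schema describe the schema text itself. A rough sketch of that inference in Python (deliberately simplified; not n8n’s actual implementation):

```python
def infer_schema(example):
    """Roughly what 'generate from JSON example' mode does: derive a
    JSON schema from a sample payload."""
    if isinstance(example, bool):  # check bool before int/float
        return {"type": "boolean"}
    if isinstance(example, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in example.items()},
        }
    if isinstance(example, (int, float)):
        return {"type": "number"}
    return {"type": "string"}


# Intended usage: give the parser an *example* payload.
infer_schema({"decision": True})
# -> {"type": "object", "properties": {"decision": {"type": "boolean"}}}
```

If you instead feed this a schema, the derived schema expects an object with `type` and `properties` keys, which also explains why the Switch expressions ended up reading `$json.output.properties.decision` instead of `$json.output.decision`.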


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.