Hi, in my case the problem was that if the agent produced wrongly formatted output on the first try, the whole request failed even if it corrected the output on a retry. Any correct retry was ignored; it just kept going until the max number of retries and the whole output failed. They fixed that. There is no guarantee that an agent will produce output in the correct format; that depends on the prompt and the LLM model you use. IMO it's an n8n-independent issue. The best you can do is take it into account and (1) simplify the output so it's easier to produce valid results, (2) experiment with the prompt so the agent has straightforward criteria for building the output, and (3) retry failed attempts and monitor your agent's success ratio so it doesn't burn too many tokens failing.
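The retry-and-monitor idea above can be sketched in a Code node. This is a minimal illustration, not n8n's actual retry implementation; the function names and the `generate` callback are made up for the example:

```javascript
// Hypothetical sketch: retry until the output parses as JSON, and
// report the attempt count so you can track the agent's success ratio.
function tryParseStructured(generate, maxRetries) {
  let lastError = null;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const raw = generate(attempt); // stand-in for one LLM call
    try {
      return { ok: true, data: JSON.parse(raw), attempts: attempt };
    } catch (err) {
      lastError = err.message; // invalid output: record and retry
    }
  }
  return { ok: false, error: lastError, attempts: maxRetries };
}

// Example: first attempt is invalid JSON, second attempt is valid.
const outputs = ['not json', '{"instructions":"use $match"}'];
const result = tryParseStructured((n) => outputs[n - 1], 3);
console.log(result.ok, result.attempts); // true 2
```

Logging `attempts` per run is enough to spot when a prompt change starts wasting tokens on retries.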
That’s weird! In the same workflow, the agent’s structured output throws an error for me, but my colleague gets correct output when they run it. …
I have the same error using a basic LLM chain and a structured output parser (version 1.101.2). My solution is (1) up to 3 retries and (2) continue on error.
The problem I have is that the structured output includes a parent ‘output’ key that makes the entire JSON invalid. There’s a GitHub issue about it.
Continue-on-error was not returning the right results, so I disabled the required structured output, pasted that JSON template into the LLM instructions, and told it to put the output in that format with no other text. This works fine with the GPT-4o model.
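If you hit the parent ‘output’ wrapper problem mentioned above, one option is to unwrap it in a Code node before using the fields downstream. A minimal sketch (the unwrapping rule here is an assumption, not n8n behavior):

```javascript
// Hypothetical sketch: unwrap a parent "output" key if it is the sole
// top-level key, so downstream nodes see the intended object directly.
function unwrapOutput(parsed) {
  if (
    parsed &&
    typeof parsed === 'object' &&
    Object.keys(parsed).length === 1 &&
    'output' in parsed
  ) {
    return parsed.output;
  }
  return parsed; // no wrapper: pass through unchanged
}

const wrapped = JSON.parse('{"output":{"instructions":"use $match"}}');
console.log(unwrapOutput(wrapped).instructions); // use $match
```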
This is really annoying, I have the same issue.
When I look at the error and the input the parser receives, it matches my structure.
What is the point of having the option if it’s not reliable in an AI Agent?
Sorry, it’s a bit frustrating.
Update: OK, I hope this helps figure out whether this is an issue that can be fixed.
I did some tests and was able to pinpoint the issue on my end.
The following works fine when the AI Agent model passes it to the parser.
{
"action": "parse",
"text": "{\"output\":{\"instructions\":\"Use `$match` to find the latest created project from the `projects` collection, sorted by a `createdAt` field.\"}}"
}
But this fails:
{
"action": "parse",
"text": "{\"output\":{\"instructions\":\"Use `$match` to find the latest created project from the `projects` collection, sorted by `createdAt` or a similar field. The MongoDB query to achieve this would typically look like this: \\n\\n```json\\n{ \\\"$match\\\": { \\\"createdAt\\\": { \\\"$exists\\\": true } } }\\n```\\n\\nThis query checks for documents in the `projects` collection where the `createdAt` field exists.\"}}"
}
It only seems to fail when code blocks appear in the text. It could also be the escaping, but it seems to fail every time because of that.
If the model transforms the input and removes them, it passes with no problem.
Update 2: Adding the following to the system message of my AI Agent that has the parser seems to fix it in my use case.
Remove the ``` (code block) markers and incorporate the content as plain text in the input before parsing.
I assume this is something that can be fixed in the parser directly?
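If you’d rather strip the markers in a Code node instead of prompting the model to do it, a minimal sketch (assuming, as in my tests above, that the parser only chokes on the fence markers themselves):

```javascript
// Hypothetical sketch: remove ``` code-fence markers from the model's
// text before it reaches the structured output parser, keeping the
// fenced content itself as plain text.
function stripCodeFences(text) {
  // Matches opening fences like ```json and bare closing fences.
  return text.replace(/```[a-zA-Z]*\n?/g, '');
}

const withFence =
  'Query:\n```json\n{ "$match": { "createdAt": { "$exists": true } } }\n```\nDone.';
console.log(stripCodeFences(withFence));
```

The fenced JSON survives as plain text, which is what my system-message workaround asks the model to do.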
Having a similar issue on a workflow that previously worked fine (for months).
I am running a self-hosted (Railway) n8n instance, v1.112.4.
Error occurs when using an AI Agent Node.
Workaround is to use a Simple LLM Chain instead of the AI Agent Node.
Here’s my required structured output from my output parser (set to Auto-Fix with GPT-5-mini):
```
{
  "weirdFact": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Weird or overlooked facts nobody talks about."
  },
  "shockingTruth": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Counter-intuitive insights that surprise those who think they understand the topic."
  },
  "historicalContrast": {
    "type": "array",
    "items": { "type": "string" },
    "description": "How the story looked roughly 50 years ago—then-versus-now snapshots."
  },
  "insiderJoke": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Humorous tidbits or running jokes insiders share."
  },
  "simpleExplanation": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Plain-language breakdowns that make a complex idea feel obvious."
  },
  "unbelievableTruth": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Facts that sound made-up but are completely true."
  }
}
```
UPDATE: Workaround is to use a simple LLM chain + structured output.
UPDATE 2: Another workflow is giving me the same issue (even with a Simple LLM Chain)
For me it’s working fine with the OpenRouter Chat Model. It didn’t work for me with the Claude or Google Gemini chat models, but when I switched to OpenRouter it worked right away. In OpenRouter you can choose from multiple model options.
For me it works better to delete the ``` characters in the agent prompt, something like:
First part of agent prompt …
```json YOUR_JSON ```
Final part of agent prompt.
and keep only the JSON content in the prompt without blank lines. Also, I’ve added an Auto-Fix model with GPT-3.5 Turbo 0125. And finally I’m using the Response Format option in the model configuration with a JSON Schema matching the expected output JSON. Don’t forget to enable the Strict switch.
I hope that will be useful for you.
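For reference, here is a minimal sketch of what that Response Format setting corresponds to in the underlying OpenAI Chat Completions request body. The schema and the parser name are illustrative only; check your own output parser’s schema:

```javascript
// Hypothetical sketch of a request body with strict JSON Schema output.
// "strict: true" is the "Strict" switch: the model's output must match
// the schema exactly, with no extra keys.
const requestBody = {
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Give me one weird fact.' }],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'format_final_json_response', // illustrative name
      strict: true,
      schema: {
        type: 'object',
        properties: {
          weirdFact: { type: 'array', items: { type: 'string' } },
        },
        required: ['weirdFact'],
        additionalProperties: false,
      },
    },
  },
};

console.log(requestBody.response_format.json_schema.strict); // true
```

With `strict: true`, `additionalProperties: false` and a complete `required` list are needed for the schema to be accepted.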
PS: I don’t know if it’s really useful, but tkris’s answer improved my results. Basically you must change the default structured output parser name to “format_final_json_response_<ID, NAME….>”.
And if it still doesn’t work, just add more to the system message. I just pasted the output format into the system message again (but for some reason I think it runs slower…). I also changed the name to “format_final_json_response_<ID, NAME….>” like tkris. I don’t know if it really works.
And thanks also to Kyovint: I changed my Auto-Fix model to GPT-3.5 Turbo 0125, although it never runs (it was ChatGPT-4 latest before). But I didn’t enable the Strict switch (actually I don’t know what that is).
Update: I also changed the AI Agent node to an LLM Chain like Valentin_Ffrench did. It finally works stably.