Inconsistent output from OpenAI agent in Instagram reply workflow

Hi everyone, I built a workflow using n8n and ManyChat to generate Instagram replies. The workflow uses an OpenAI agent with a Structured Output Parser, followed by a “formatter” step (likely a Code/formatter node).

The issue: the workflow works most of the time, but the OpenAI agent sometimes returns incorrect or malformed responses (like “Mapetrex” in the video). These bad outputs are then passed to the formatter, causing the workflow to fail intermittently.

Has anyone faced similar issues with the OpenAI node producing erratic structured outputs? Any advice on how to make the agent’s output more consistent would be greatly appreciated!

Here’s a practical solution based on what others have reported:

- Avoid attaching the Structured Output Parser directly to an AI Agent node, as it can be unreliable ([n8n.io](https://n8n.io/workflows/4316-reliable-ai-agent-output-without-structured-output-parser-w-openai-and-switch)). Instead:
  - Use a Basic LLM Chain followed by a Structured Output Parser (see the community.n8n.io thread “Model output doesn't fit required format - Structured Output Parser”)
  - Or implement manual validation with a Switch node and retry logic ([n8n.io](https://n8n.io/workflows/4316-reliable-ai-agent-output-without-structured-output-parser-w-openai-and-switch))
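To sketch the manual-validation approach: the snippet below is a hypothetical helper you could drop into an n8n Code node placed between the agent and the formatter. The expected schema (`{ reply: string }`) and the function name are assumptions; adapt them to your workflow. The idea is to set a `valid` flag on each item so a downstream Switch node can route invalid items back into a retry branch.

```javascript
// Hypothetical validator for the agent's raw text output.
// Assumption: the agent is supposed to return JSON like {"reply": "..."}.
function validateAgentOutput(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    // Output wasn't JSON at all (e.g. a bare word like "Mapetrex")
    return { valid: false, parsed: null };
  }
  // Check the parsed object matches the expected shape
  const ok = Boolean(
    parsed &&
    typeof parsed.reply === 'string' &&
    parsed.reply.trim().length > 0
  );
  return { valid: ok, parsed: ok ? parsed : null };
}

// Example calls:
validateAgentOutput('{"reply":"Thanks for your comment!"}'); // valid: true
validateAgentOutput('Mapetrex');                             // valid: false
```

In the Code node you would run this over `$input.all()`, attach `valid` to each item's JSON, and have the Switch node send `valid === false` items back to the LLM call (with a bounded retry count so a persistently misbehaving model can't loop forever).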

Also ensure your system prompt clearly defines the expected JSON format.
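For example, a system prompt fragment along these lines (the exact wording and schema are illustrative, not from the original workflow) tends to cut down on malformed replies:

```
You are an Instagram reply assistant.
Respond ONLY with a single JSON object, with no markdown fences and no
extra text before or after it:

{"reply": "<your reply here>"}

If you cannot generate a reply, return exactly: {"reply": ""}
```

Telling the model what to return in the failure case matters: much of the erratic output people see comes from the model improvising when it has nothing sensible to say.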