I’m using the Cluster AI Agent node with a Structured Output Parser to generate technical qualification questions for leads (based on a service name), and I’m getting a persistent parsing error when using more advanced or structured prompts with GPT-4o.
The same setup works fine with simpler prompts or when using GPT-3.5.
What is the error message (if any)?
```
Model output doesn't fit required format
```
When I inspect the raw output, I see this:
```json
{
  "response": {
    "generations": [
      [
        {
          "text": "",
          "generationInfo": {
            "finish_reason": "tool_calls"
          }
        }
      ]
    ]
  }
}
```
The "text" field is empty, and the model instead returns "finish_reason": "tool_calls", which breaks the Structured Output Parser that expects a JSON-formatted string in the text field.
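As a quick guard, this failure mode can be detected before the payload reaches the parser, for example in a Code node. This is a minimal sketch written against the exact structure shown above, not against n8n internals:

```javascript
// Detect the "empty text + tool_calls" failure mode in the raw model
// response. Field names match the raw output payload shown above.
function isToolCallFallout(response) {
  const gen = response?.response?.generations?.[0]?.[0];
  return gen?.text === "" && gen?.generationInfo?.finish_reason === "tool_calls";
}

// Example with the payload from above:
const raw = {
  response: {
    generations: [[{ text: "", generationInfo: { finish_reason: "tool_calls" } }]],
  },
};
console.log(isToolCallFallout(raw)); // true
```

A branch on this check could route the item to an error path or a retry instead of letting the Structured Output Parser fail with an opaque message.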
Please share your workflow
This is the relevant part of the flow (simplified for clarity):
Cluster AI Agent with GPT-4o
- Prompt asks for 4 to 7 technical questions in JSON format
Structured Output Parser with schema:
```json
{
  "type": "object",
  "properties": {
    "perguntas": {
      "type": "array",
      "items": { "type": "string" }
    },
    "justificativa": { "type": "string" }
  },
  "required": ["perguntas", "justificativa"]
}
```
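For reference, a few lines of JavaScript can mirror what this schema enforces (a mandatory array of strings in `perguntas` plus a mandatory `justificativa` string). This is an illustrative check only, not the parser's actual implementation:

```javascript
// Hand-rolled check mirroring the schema's "required" and type constraints.
function matchesSchema(obj) {
  return (
    obj !== null &&
    typeof obj === "object" &&
    Array.isArray(obj.perguntas) &&
    obj.perguntas.every((q) => typeof q === "string") &&
    typeof obj.justificativa === "string"
  );
}

console.log(matchesSchema({ perguntas: ["Qual o prazo?"], justificativa: "Escopo." })); // true
console.log(matchesSchema({ perguntas: ["Qual o prazo?"] })); // false (justificativa missing)
```

An empty `text` field fails this check immediately, which is exactly why the parser rejects the `tool_calls` output above.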
The workflow works perfectly with this prompt:
```
You are a technical analyst. Generate 4 to 7 clear, objective questions that help understand the scope of the following service. Explain briefly why they are relevant. Respond in the following JSON format: { "perguntas": [...], "justificativa": "..." }
```
But fails consistently when I use a more structured prompt (e.g., one using ###, markdown, or system-like instructions).
Share the output returned by the last node
```json
{
  "response": {
    "generations": [
      [
        {
          "text": "",
          "generationInfo": {
            "finish_reason": "tool_calls"
          }
        }
      ]
    ]
  }
}
```
Expected output:
```json
{
  "perguntas": [
    "What is the estimated area for service execution?",
    "Is there access to power on site?",
    "What is the expected deadline?"
  ],
  "justificativa": "These questions help qualify the lead's urgency, technical readiness, and project scope."
}
```
Information on your n8n setup
- n8n version: 1.76.3 (Self-Hosted)
- Database (default: SQLite): Default (SQLite)
- n8n EXECUTIONS_PROCESS setting (default: own, main): own
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Linux
How I fixed it (temporary workaround):
I used two GPT-based assistants:
- N8N Assistant (By Nskha) — helped analyze the issue and prepared a report for my prompt engineer.
- System Prompt Generator (by neural.love) — revised the system prompt to avoid triggering tool_calls.
After rewriting the prompt to remove markdown, formatting, and tool-style instructions — and explicitly asking for “plain JSON text with no function calls” — the issue was resolved.
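Beyond rewriting the prompt, a defensive parse step (again only a sketch, e.g. in a Code node; `parseModelJson` is a hypothetical helper, not part of n8n) can also salvage runs where the model wraps the JSON in markdown fences instead of returning plain text:

```javascript
// Strip optional ```json fences, then parse; returns null if nothing usable.
function parseModelJson(text) {
  if (typeof text !== "string" || text.trim() === "") return null;
  const cleaned = text
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "");
  try {
    return JSON.parse(cleaned);
  } catch {
    return null;
  }
}

const parsed = parseModelJson(
  '```json\n{"perguntas":["Qual o prazo?"],"justificativa":"Escopo."}\n```'
);
console.log(parsed);
```

Returning `null` (rather than throwing) makes it easy to branch into a retry or an error-handling path in the workflow.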