Agent outputs [empty object] even though chat model generates a completion

Describe the problem/error/question

My agent has an output parser that classifies the response as either “idea” or “message”.

Responses parsed as “idea” go through the workflow, while “message” returns [empty object].

It is worth mentioning that sometimes the message is output normally, roughly 1 in 6 times.

What I don’t get is why, since the chat model does generate a completion that is parsed correctly:

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

[empty object]
[ERROR: Cannot read properties of undefined (reading 'replace') [line 1]]

Information on your n8n setup

  • n8n version: 1.89.2
  • Database (default: SQLite): Default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I am not sure where to get that info.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: Windows 11

Hello Luar_AS

Below I address your problem; however, this part of the documentation can also help you better understand it. Here is the link to the documentation: Code node common issues | n8n Docs

Missing an Output Parser
The “Require Specific Output Format” field is enabled, but you do not have an Output Parser connected, as indicated by the orange alert:

Connect an output parser on the canvas to specify the output format you require.

Without an Output Parser, the node requires a response from the agent in a specific format, but there is nothing configured to interpret or validate the format.

Test by Disabling “Require Specific Output Format”
Disable the “Require Specific Output Format” parameter.
This option forces the generated response to follow a specific format. If the agent does not return a response in that format, the node displays no output.
Test:

  • Disable the option.
  • Re-run the node.
  • Check the raw output in the JSON tab.

If it works without the option, the problem is with the Output Parser or how the instructions are configured.

The Prompt is not triggering the Agent correctly
The agent is instructed (via System Message) to expect a “niche” from the user. However:

  • The input sent in the chatInput variable (“I actually want house flipping”) may not be understood as a valid niche.
  • The agent may not be well configured to handle missing or ambiguous input, especially if no additional data was provided as context.

I hope this helps in some way

I do have it here

It’s connected to the agent in this format:

I’ll disconnect the output parser and try it

The current chat was using data stored in memory to answer, and the output parser explains how to format the response.

Thanks, I’ll test it

Hello, could you kindly mark my previous post as the solution (blue box with check mark) so that this ongoing discussion does not distract others who want to find out the answer to the original question? Thanks.

But it was not a solution; marking it would show my post as solved.

Please excuse me; I understood that your case was solved.

Were you able to solve it or can I help?

I did not solve the ORIGINAL issue

I did manage to make the system work by coding a custom JSON parser and using it inside a code node instead of the native n8n parser.

So the agent outputs a string containing the JSON, which is then parsed by the node that follows it.

Although I could not find what was causing the [empty object] error, nor did I solve it (I just found a sneaky way to work around it).
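
For anyone landing here later, here is a minimal sketch of the kind of Code node workaround described above. It is not the exact node from this workflow: it assumes the agent’s text lands in each item’s json.output field and may be wrapped in prose or a markdown fence.

```javascript
// Hypothetical Code node ("Run Once for All Items" mode) that extracts and
// parses the JSON string produced by the agent. The field name (json.output)
// is an assumption based on this thread; adjust it to your workflow.
const results = [];

for (const item of $input.all()) {
  const raw = String(item.json.output ?? '');

  // Strip a possible markdown code fence around the JSON.
  const fenced = raw.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = fenced ? fenced[1] : raw;

  // Fall back to the first {...} block if the model added prose around it.
  const braces = candidate.match(/\{[\s\S]*\}/);

  try {
    results.push({ json: JSON.parse(braces ? braces[0] : candidate) });
  } catch (error) {
    // Fail with a readable message instead of a downstream
    // "Cannot read properties of undefined" error.
    throw new Error(`Could not parse agent output as JSON: ${error.message}`);
  }
}

return results;
```

This only makes the failure visible and recoverable; the cleaner fix is still to get the parser configuration right.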

I’ve implemented improvements to your workflow.

See if it solves your problem.

Remember to check all credentials.

The systemMessage describes the JSON format in natural language, which can confuse the LLM and cause inconsistent output.

The Structured Output Parser has an example that only covers the “idea” type, lacking an explicit definition for “message”.

The flow to handle responses of type “idea” involves a second call to OpenAI and a Code node, adding complexity and points of failure.

If the “AI Agent” fails and returns [empty object], the If Ready node will likely fail, as it won’t find output.response_type (a defensive check is sketched after these notes).

The Sanitize-inator3 node seems to be an attempt to fix formatting issues that should be resolved at the source (in the “AI Agent”).
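
Regarding the output.response_type point above, one way to guard the If Ready node is a small Code node between the agent and the IF check. This is only a sketch under the assumptions of this thread (the agent’s parsed result sits under json.output), not part of the shared workflow:

```javascript
// Hypothetical guard node: normalizes empty agent output so that the
// "If Ready" node never reads response_type from undefined.
const results = [];

for (const item of $input.all()) {
  const output = item.json.output;

  if (output && typeof output.response_type === 'string') {
    results.push({ json: { output } });
  } else {
    // Assumed fallback shape so downstream nodes keep working.
    results.push({ json: { output: { response_type: 'message', content: '' } } });
  }
}

return results;
```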

Suggestions for the submitted flow…

Instruct the LLM explicitly and strictly about the expected JSON format, including clear examples.

Use a complete JSON Schema in the Structured Output Parser that defines both types (message and idea); a sketch of such a schema follows these suggestions.

Remove the second call to OpenAI and the Code node. The final formatting of the idea (if needed in addition to the content) should be the responsibility of the main “AI Agent” or ideas_generator (if possible). For simplicity, this version assumes that the content of the idea is sufficient for now.

Add an IF node right after the “AI Agent” to check if the output is valid before proceeding.

Delete the Sanitize-inator3 node.

Ensure that the LLM (openai_llm) uses a low temperature for greater consistency.
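
As mentioned in the schema suggestion above, here is one possible shape for a schema that covers both response types. The field names (response_type, content) are assumptions based on this thread, so adapt them to your actual output.

```javascript
// Hypothetical JSON Schema for the Structured Output Parser, written here as a
// JavaScript object; paste its JSON form into the parser's schema field.
const responseSchema = {
  type: "object",
  properties: {
    response_type: {
      type: "string",
      enum: ["idea", "message"], // covers both branches of the workflow
    },
    content: {
      type: "string",
      description: "The idea text or the plain chat message",
    },
  },
  required: ["response_type", "content"],
  additionalProperties: false,
};
```

With both types defined, a low-temperature model, and an explicit system message, the parser should no longer receive responses it cannot validate.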

Try this workflow and let me know if it solved your problem.


Thanks @interss, it looks solid! I’ll check it and break it down in the meantime.

Really appreciate it!

I’m glad I could help.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.