I’m building a Sales Chatbot using the AI Agent node (LangChain) with a multi-agent architecture (Orchestrator → Sub-Agents → Tools).
I am facing a specific issue with my Scheduling Agent. It has a tool called get_availability which triggers a workflow to fetch available slots from Cal.com.
The Problem:
The tool executes perfectly. I can see in the execution inspector that the get_availability node returns a valid JSON object containing the available time slots. However, the AI Agent (LLM) seems to “ignore” this data. It either responds that there are no slots available or hallucinates/invents time slots that are not in the JSON.
The Setup:
Model: GPT-4o-mini
Architecture: The Orchestrator calls the “Scheduling Agent,” which calls the get_availability tool.
Tool Output: The tool returns a JSON object structured by date.
What I observe:
The Agent calls the tool correctly.
The tool returns data like this (verified in the Output Inspector):
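Something like this (the exact times and seat counts here are illustrative; the real payload comes from Cal.com):

```json
{
  "data": {
    "2026-01-19": [
      { "start": "2026-01-19T09:00:00.000Z", "seatsRemaining": 3 },
      { "start": "2026-01-19T10:00:00.000Z", "seatsRemaining": 1 }
    ]
  }
}
```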
Immediately after receiving this, the Agent generates a response saying it couldn’t find information or generates a random time, completely disregarding the JSON above.
What I’ve tried:
I removed the “Thinking” tools to reduce noise.
I explicitly added rules in the System Prompt telling the agent to read the “data” property.
I simplified the tool description.
My Question:
Has anyone experienced the AI Agent being “blind” to complex JSON outputs from tools? Is the nested structure of the JSON (where the key is a dynamic date string like “2026-01-19”) confusing the LLM?
Should I flatten the JSON output before sending it back to the agent, or is there a specific prompt technique to force the agent to read this output?
This issue typically occurs when the LLM isn’t properly interpreting the tool’s JSON output. Try these steps:
1. In your tool’s description, explicitly state the expected JSON structure and how to use it, like: “Returns availability slots in {data: {date: [{start, seatsRemaining}]}} format”
2. Add a specific instruction in your agent’s system prompt such as: “Always check the ‘data’ property in the tool response for available slots before responding”
3. Consider simplifying the JSON structure if possible; sometimes nested objects can confuse the model.
If that doesn’t help, share your exact tool description and system prompt so we can spot any inconsistencies.
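If you do go the flattening route, here's a rough sketch of what a Code node between the Cal.com call and the agent could do. The input shape (`{ data: { "<date>": [{ start, seatsRemaining }] } }`) is assumed from your description, so adjust the field names to match your actual payload:

```javascript
// Hypothetical tool response, shaped like the structure described in this thread.
const response = {
  data: {
    "2026-01-19": [{ start: "2026-01-19T09:00:00Z", seatsRemaining: 2 }],
    "2026-01-20": [{ start: "2026-01-20T14:00:00Z", seatsRemaining: 1 }],
  },
};

// Flatten the date-keyed map into a plain array, so the model never has to
// treat a dynamic date string like "2026-01-19" as an object key.
const slots = Object.entries(response.data).flatMap(([date, daySlots]) =>
  daySlots.map((s) => ({ date, start: s.start, seatsRemaining: s.seatsRemaining }))
);
```

In the real node you'd read the incoming item instead of the hard-coded `response`, and return the result as `[{ json: { slots } }]`.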
I’ve had issues like this before. I usually try a different model and refresh the memory, but your agent doesn’t have memory attached, so disregard that. You could try flattening the JSON.
Don’t use agents as tools; they cause a lot of problems.
Modularize everything and call the other agents as sub-workflows. That should solve your problem.
LLMs are all about context; something tells me the information is getting lost in the process. Could you share the JSON output of your get_availability tool?
Totally not backed by research, but I have seen LLMs fall flat with nested JSON data. I’ve had good results with XML and with plain text/Markdown.
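For example, here's a quick sketch of rendering that nested structure as Markdown before handing it to the agent. The field names (`seatsRemaining`, `start`) are assumed from this thread:

```javascript
// Hypothetical availability object, keyed by date as described in the question.
const availability = {
  "2026-01-19": [
    { start: "09:00", seatsRemaining: 2 },
    { start: "10:00", seatsRemaining: 0 },
  ],
};

// Build one Markdown bullet per date, listing only slots with seats left.
const lines = [];
for (const [date, slots] of Object.entries(availability)) {
  const open = slots.filter((s) => s.seatsRemaining > 0);
  if (open.length > 0) {
    lines.push(`- ${date}: ${open.map((s) => s.start).join(", ")}`);
  }
}
const text = lines.length > 0
  ? `Available slots:\n${lines.join("\n")}`
  : "No slots available.";
```

The agent then receives a short, line-by-line text instead of a nested object.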
From the prompts alone, it’s not possible to be absolutely certain what you’re trying to achieve. However, based on the problem you described, what I would do is make the tool’s output more fully resolved.
I would implement the logic “If data has at least one date key with at least one slot where seatsRemaining > 0, then slots exist” inside an IF node, and then format the final user-ready message in a Set node.
With that in place, I would simplify the prompt so it only needs to send the message already prepared by the tool, reducing both complexity and prompt size.
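As a sketch, that IF/Set logic could also live in a single Code node (the data shape here is assumed from the question):

```javascript
// IF-node equivalent: "slots exist" means at least one date key has at least
// one slot with seatsRemaining > 0.
function hasAvailability(data) {
  return Object.values(data).some((slots) =>
    slots.some((s) => s.seatsRemaining > 0)
  );
}

// Set-node equivalent: build the final, user-ready message so the agent
// only has to relay it verbatim.
function buildMessage(data) {
  if (!hasAvailability(data)) {
    return "Sorry, there are no open slots right now.";
  }
  const parts = Object.entries(data)
    .map(([date, slots]) => {
      const open = slots
        .filter((s) => s.seatsRemaining > 0)
        .map((s) => s.start);
      return open.length > 0 ? `${date}: ${open.join(", ")}` : null;
    })
    .filter(Boolean);
  return `Here are the open slots:\n${parts.join("\n")}`;
}
```

Either way, the decision and the formatting happen deterministically in the workflow, not in the LLM.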