The n8n workflow execution shows no error, but in extremely rare cases WhatsApp (my bot frontend) returns a raw JSON body, as shown in the screenshot below.
What is the error message (if any)?
```json
{"route":"billing"}
```
Please note that `route` is the keyword that switches to the agent defined as `billing`.
```
functionrouter_switch
{"route":"billing"}
```
Please note that `function` is the keyword of the `tool_calls` entry that invoked `router_switch`.
I cannot share the detailed workflow, but in short: I use an Agent node with an OpenAI Chat Model node to invoke a self-hosted DeepSeek V3-0324, and everything else is tools that invoke self-defined tool_calls. I've defined three agents across the workflows, and the System Prompt asks the LLM to decide which agent to call based on the user's input.
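Conceptually, the routing boils down to: the LLM's `router_switch` tool emits `{"route":"<agent>"}` and the workflow dispatches to the matching agent. A minimal JavaScript sketch of that idea (only the `billing` route appears in this thread; the other agent names and the `dispatch` function are made up for illustration, not part of the real workflow):

```javascript
// Sketch only: agent names other than "billing" are hypothetical,
// and dispatch() is not an n8n API -- it just illustrates the idea.
const AGENTS = {
  billing: "Billing Agent",
  support: "Support Agent", // assumed
  sales: "Sales Agent",     // assumed
};

function dispatch(toolOutput) {
  const { route } = JSON.parse(toolOutput); // e.g. {"route":"billing"}
  return AGENTS[route] ?? null;             // unknown route -> no agent
}
```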
Hi there, I think you can modify your AI Agent prompt so it never outputs the JSON returned from a tool. Add something like:
never output tool call details or internal routing JSON in the final response. Only return human-readable replies unless explicitly instructed to output tool calls for routing purposes. All routing logic is handled silently.
As for the detailed workflow, I will try to export the full one as JSON, but I cannot share it here publicly. If you can send me your email, that would be appreciated, thanks!
Yes, the Agent node has intermediate steps enabled, like this:
The purpose of intermediate steps is to have the agent receive the user input, pass it to the LLM, ask the LLM to decide which agent it should go to, and then have the Agent node route it to that specific agent accordingly. Enabling this feature is mandatory for us, since we need it for dynamic decision making. Could it be causing this issue?
Yeah, intermediate steps is definitely what causes the AI to output the kind of JSON you are seeing, because that is the whole purpose of intermediate steps: returning every step the AI has taken. I usually only turn it on for testing.
But if you do need it, then we definitely have to exclude it somehow before it reaches the end user. One solution is what I said above: update the system prompt.
The other solution would be to add a Basic LLM Chain node, or another AI Agent node, whose function is basically to read the response and check whether it is human-readable. If it is not, then do something else or just stop, depending on what you're trying to achieve.
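For example, a Code node placed between the Agent and the WhatsApp reply could flag anything that is not human-readable before it goes out. A rough sketch (the function name and patterns here are illustrative, not an n8n API):

```javascript
// Hypothetical guard: detect when the agent's final answer has leaked
// internal tool-call / routing JSON instead of a human-readable reply.
const ROUTE_PATTERN = /\{\s*"route"\s*:\s*"[^"]+"\s*\}/;

function looksLikeRoutingLeak(text) {
  if (text == null || text.trim() === "") return true;     // null/empty is also unexpected
  if (ROUTE_PATTERN.test(text)) return true;               // raw routing JSON
  if (/function\s*router_switch/i.test(text)) return true; // leaked tool-call name
  return false;
}

// In an n8n Code node you might then branch on this flag, e.g.:
// return items.map(i => ({ json: { ...i.json, leaked: looksLikeRoutingLeak(i.json.output) } }));
```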
After adding a System Prompt like this, it still produces the original error. It looks like it only happens on a change of agent, which is decided dynamically through intermediate steps.
Both null and the JSON body are unexpected outputs.
Here is my System Prompt, for reference:
Function Call Handling
When executing function calls, strictly adhere to these protocols:
Function Call Execution Protocol
Always execute function calls programmatically.
Never return raw function names or JSON bodies as text outputs.
Never output tool call details or internal routing JSON in the final response. Only return human-readable replies unless explicitly instructed to output tool calls for routing purposes. All routing logic is handled silently.
Invalid output example (PROHIBITED):
````
"output": "functionrouter_switch\n```json\n{"route":"billing"}\n```\n"
````
Retry Mechanism Requirements
If a function call fails due to network errors, timeouts, or execution failures:
✓ Immediately retry the identical function call
✓ Maintain original arguments and parameters
✓ Continue retrying until successful execution
Do NOT return error messages or intermediate states to the user during retries
Critical Constraints
Never expose internal function signatures or JSON structures
Retry logic must be completely transparent to end-users
Prioritize atomic execution over diagnostic messaging
Compliance Validation
All outputs will be monitored for prohibited patterns. Non-compliant responses will trigger automatic regeneration.
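That compliance validation can also be enforced in the workflow itself rather than relying on the prompt alone, e.g. regenerate the reply whenever it matches a prohibited pattern. A rough sketch (`callModel` is a placeholder for whatever invokes DeepSeek V3 in the workflow; the fallback message is a design choice):

```javascript
// Sketch: regenerate when the model's reply leaks routing internals.
// callModel is a stand-in for the actual LLM invocation, not a real API.
const PROHIBITED = [/router_switch/i, /\{\s*"route"\s*:/];

function replyWithValidation(callModel, userInput, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const reply = callModel(userInput);
    if (reply && !PROHIBITED.some((p) => p.test(reply))) return reply;
  }
  // Fallback after exhausting retries, so raw JSON never reaches the user.
  return "Sorry, something went wrong. Please try again.";
}
```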