Show tool calls that did not match the expected schema in the log

The idea is:

To show agent tool calls that failed tool schema validation in the log. Currently only calls that passed validation are listed; when an agent builds a request that fails schema validation, the only thing I get is: "Received tool input did not match expected schema". There is no log entry representing the failed attempt, so I can't see what parameters the agent tried to pass. The idea is to create a log entry that shows exactly what the call looked like when it failed schema validation.

My use case:

I created a tool to list emails with parameters like emailaccount, folder, filter, select, and top. When I ask it to list emails, the agent fails randomly, about 2 out of 3 tries. I don't know why; these are quite simple parameters. I enabled debug mode, set the agent to return intermediate steps and to continue on error, but requests that fail tool schema validation are not present there. Only this general error message.
I get this on Docker-hosted n8n 1.103.2.

I think it would be beneficial to add this because:

It would make it possible to better instruct the agent on how to interpret user requests in terms of a tool call.

Any resources to support this?

Are you willing to work on this?

Not as a developer, but I can test it :slight_smile:

In this case it looks like the input is displayed in the input section; can you use this to debug?

Hi,

I'm not sure I understand you. In the input section you see the input for the agent. Then, after a call to the LLM, I get the error that tool schema validation failed, but there is no tool call in the input section to preview, only the error. So it's not possible to debug that; one can only guess what the problem was.

Sorry, I thought your screenshot had the input to the tool selected, but it's actually the agent input.

I've been looking into a similar issue. I suspect you're experiencing what I get, where the input block shows "No output data returned", which is odd in itself as it references output data.

I found this thread, which mentions that:

  1. The model sometimes issues empty calls.
    Large‑language‑model tools sometimes send an empty {} as arguments when they don’t intend to run a tool. If your schema contains any required fields, that empty call will fail validation immediately.
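To make the failure mode above concrete, here is a minimal sketch (not n8n's actual validator) of why an empty `{}` call fails immediately when the schema declares required fields. The schema is a hypothetical version of the email-listing tool from this thread:

```python
# Hypothetical tool schema, modeled on the email-listing tool above.
tool_schema = {
    "type": "object",
    "properties": {
        "emailaccount": {"type": "string"},
        "folder": {"type": "string"},
        "filter": {"type": "string"},
        "top": {"type": "integer"},
    },
    "required": ["emailaccount", "folder"],
}

def validate_call(args: dict, schema: dict) -> list[str]:
    """Return validation errors for a tool call (empty list = valid).
    Only checks the 'required' keyword, which is what an empty {} trips."""
    missing = [f for f in schema.get("required", []) if f not in args]
    return [f"missing required field: {f}" for f in missing]

# A deliberate call with the required fields passes:
print(validate_call({"emailaccount": "me@example.com", "folder": "Inbox"}, tool_schema))
# → []

# The empty call a model sometimes emits fails right away:
print(validate_call({}, tool_schema))
# → ['missing required field: emailaccount', 'missing required field: folder']
```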

I wonder if this is what we're both seeing?

I think so. I found another forum topic that advised explicitly instructing the agent in the main prompt to never leave null parameters or empty objects in tool calls, or to use the fromAI() function and always provide a non-null, non-empty default value for all parameters.
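For anyone landing here later, a rough sketch of that second workaround as it might look in a tool parameter field (the parameter name, description, and default here are just examples based on my email tool, not anything canonical):

```
{{ $fromAI('folder', 'Mail folder to list messages from', 'string', 'Inbox') }}
```

The last argument is the default value, so even if the model omits the parameter, the tool call still carries a non-empty value and passes schema validation.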

And it works, but this knowledge was hard to obtain, so there should be an easier way to understand these "tool call did not match expected schema" situations: either a preview of the failed call, or a URL to a KB article in the error trace.

Totally agree.