I regularly find that if an AI Agent gets an error while using a tool, and the problem is with what it is sending to the tool, you never see the input data.
Input data to the tool only seems to show in the log after it runs without an error, which makes troubleshooting very hard!
Or am I looking in the wrong place? I would have thought the log should show the input, then either the output or the error message.
But you only see the input when the tool call succeeds.
Any insights on getting better troubleshooting in these scenarios would be awesome!
If it fails but still gets a valid response from the tool, you do see the input.
What is the error message (if any)?
Example: "Bad request - please check your parameters: invalid data"
This is a frustrating limitation of AI Agent tool execution logging. When a tool errors out before completing, n8n often does not capture the input parameters the agent attempted to send, only the error response.
Your best workaround is adding a manual logging step inside the tool itself. Use a Set node or a Code node at the start of each tool sub-workflow to capture the incoming parameters, or enable detailed workflow execution logging in the settings.
You can also wrap tool calls in error workflows that log inputs before processing, so you always see what was sent even when it fails.
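If it helps, here's a minimal sketch of what that first capture step could look like. In n8n this would be a Code node whose items come from `$input.all()`; the helper name and the `_rawInput` field are my own illustration, not anything n8n provides:

```javascript
// Sketch of an n8n Code node placed first in a tool sub-workflow.
// It passes items through unchanged but stamps each one with a raw
// snapshot and a timestamp, so the agent's payload is visible in this
// node's output even when a downstream node errors out.
// (In n8n the items come from $input.all(); here they are modeled as a
// plain array so the logic can run standalone.)
function captureToolInput(items) {
  return items.map((item) => ({
    json: {
      ...item.json,
      _receivedAt: new Date().toISOString(),
      _rawInput: JSON.stringify(item.json), // snapshot before any cleanup
    },
  }));
}

// In an actual Code node the body would simply be:
//   return captureToolInput($input.all());
```

Because the snapshot is written by the very first node, it survives in that node's execution data regardless of what fails later in the sub-workflow.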
Ah, thanks, but I don't have any issues sending items to the AI agent.
The problem is the AI agent sending the $fromAI items to a tool it has access to.
Like this: it's when the AI calls a tool and sometimes sends data that is invalid for the tool's requirements, but because I cannot see the input, I cannot troubleshoot how to prevent it.
Here’s an example of one of my chat agents, pulled from the same example I used above. In this case, I’m invoking Tavily for Internet pulls. Here’s what I would do: set the workflow to inactive, trigger the workflow, and give it an input that would call the tool. In my case, I asked it for tomorrow’s weather forecast. Let the workflow execute.

Once it completes, double-click the tool node and look at the log. I’ve included snippets of what I see in Schema and JSON. You can see what query was sent to the tool, usually TEXT, and how it responded. The Logs will also include error messages, if any. If you do not see this, it’s an issue with the tool node, and if it wasn’t written by you (in most cases, not), you will need to reach out to the Dev Team that supports that tool.

Remember, all tools are just HTTP Requests with an API call, wrapped up in a nice package. If you understand the HTTP Request node, you should be able to write your own tool. Click on Tool in the AI Agent, and you should see the available options shown in my last screen snippet.
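To illustrate the "tools are just HTTP Requests" point, here's roughly what a search tool reduces to. The endpoint URL and body fields below are assumptions for illustration, not Tavily's documented contract, so check their API reference before relying on them:

```javascript
// Hedged sketch: what a search tool like Tavily boils down to under
// the hood -- an HTTP POST with a JSON body. The URL and body shape
// are illustrative assumptions, not the vendor's documented API.
function buildSearchRequest(apiKey, query) {
  return {
    method: "POST",
    url: "https://api.tavily.com/search", // assumed endpoint
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: apiKey, query }),
  };
}

// A custom tool node would hand an object like this to the HTTP
// Request machinery; nothing more exotic is happening inside.
```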
Thanks for the extra detail. Can you try this?
Put an extra character in to break the JSON going into Tavily.
That is, add some random characters in the tool so what it sends is not valid JSON, so the tool gives an error response rather than a success one. Then you will see what I mean.
If it was a direct item from the user message, then I could see it.
But the tool uses {{ $fromAI('User', 'user alias', 'string') }}, so it depends on the AI agent to put the tool's input into its required field.
And it's when something wrong is sent, like broken JSON or similar (I don't know, because I cannot see what input the agent decided to send), that the problem arises.
I wasn’t able to inject extra characters into the JSON and get it to run without an obvious “error found in…” message. But what I WAS able to do was use the Call n8n Workflow Tool and send the data to that. The first node I used was a NoOp node, which showed me the input string. This will at least show you what is getting sent to the tool.

You can then add logic to the sub-workflow to clean up the input and pass good JSON to the tool. There are several ways to do this: a Set node, a Code node, and there’s even a public workflow that should point you in the right direction.
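For the cleanup step, a Code node in the sub-workflow could keep the raw string for troubleshooting and attempt a repair before parsing. This is only a sketch; the helper name and the specific repairs (stripping markdown fences and trailing commas) are my assumptions based on common model mistakes, so adapt them to whatever malformed input you actually observe:

```javascript
// Sketch of a Code node that records the raw string the agent sent
// and tries to salvage it as JSON before passing it on to the tool.
function sanitizeAgentInput(raw) {
  const result = { raw }; // always keep the original for troubleshooting
  try {
    result.parsed = JSON.parse(raw);
    result.valid = true;
  } catch (err) {
    // Assumed repairs: strip markdown code fences and trailing commas,
    // two things language models commonly add to JSON output.
    const repaired = raw
      .replace(/```(?:json)?/g, "")
      .replace(/,\s*([}\]])/g, "$1")
      .trim();
    try {
      result.parsed = JSON.parse(repaired);
      result.valid = true;
      result.repaired = true;
    } catch (err2) {
      result.valid = false;
      result.error = err2.message; // surfaces WHY it failed, alongside the raw input
    }
  }
  return result;
}
```

Even when the repair fails, you still end up with `raw` and `error` in the node's output, which is exactly the visibility missing from the built-in log.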