Some help understanding how the AI Agent "Custom code tool" needs to be used

@miko

Sure, I can try to explain.

1. Help Agents understand and pick their tools.
I see a common error where we put too much effort into the “how” of the tool but not enough into the “why”. Let’s roleplay for a second and imagine you’re the agent and your equipped tools are the entirety of the AWS services catalog (100+ services!). You’re asked to host a WordPress site… yikes! Where do you start? Well, I’d think you’d want to start with really clear descriptions of what problems each service (tool) solves! You wouldn’t want to bother with how each tool works internally. This is how I think about agent tools - the clearer the purpose of the tool, the more likely the agent will decide it’s the best one for the goal it needs to accomplish.

  • Name of tool should be descriptive.
    • :white_check_mark: "get_mean_median_or_standard_deviation", "simplify_quad_equation", "create_notion_task"
    • :x: "MathStatistics", "myTool", "UploadServicesProcess"
  • Description of tool should describe purpose.
    • :white_check_mark: "Call this tool to calculate the mean", "Call this tool to send a message", "This tool creates group calendar events"
    • :x: "This tool outputs json...", "You must split messages into an array...", "This tool calls 3 functions..."
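To make this concrete, here’s a minimal sketch of purpose-first tool metadata. (Python dicts used purely for illustration - the field names are hypothetical and not n8n’s actual tool format.)

```python
# Hypothetical tool metadata: the agent picks a tool from this alone,
# so the name says what it does and the description says when to call it.
good_tool = {
    "name": "get_mean_median_or_standard_deviation",
    "description": "Call this tool to calculate the mean, median, or "
                   "standard deviation of a list of numbers.",
}

# An agent reading this has no idea what problem the tool solves.
bad_tool = {
    "name": "MathStatistics",
    "description": "This tool calls 3 functions and outputs json.",
}
```

Notice the good description leads with *when* to call the tool, while the bad one describes internals the agent can’t act on.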

2. Agents don’t run the tools, they call them.
Think of tool calling like calling a third-party API, e.g. AWS S3. When you want to put an object in S3, you don’t necessarily care what internal processes are called or which datacenter was used. You just want to know how to give the API the file and have it confirm success. It’s the same with agents; knowing how you’re calculating standard deviation is not useful to them, but knowing what parameters to pass in is!

  • Define what API-like request parameters to pass to the tool.
    • :white_check_mark: Use the Input Schema to define request parameters (1.43+).
    • :white_check_mark: Otherwise, define them in the tool description: { "function_name": { "type": "string" }, "values": { ... } }
    • :x: "Call method sin() to perform sine operation", "if this ..., then do that ..."
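As a sketch of that API-like contract, here’s what the agent-facing side versus the internal side could look like. (The schema shape and `run_tool` dispatcher are illustrative assumptions, not n8n’s actual format.)

```python
import statistics

# Hypothetical JSON-Schema-style contract: this is all the agent needs to
# generate a valid request - the "what parameters to pass in".
input_schema = {
    "type": "object",
    "properties": {
        "operation": {"type": "string", "enum": ["mean", "median", "stdev"]},
        "values": {"type": "array", "items": {"type": "number"}},
    },
    "required": ["operation", "values"],
}

def run_tool(operation: str, values: list) -> float:
    """Internals the agent never sees - it only produces schema-shaped input."""
    ops = {
        "mean": statistics.mean,
        "median": statistics.median,
        "stdev": statistics.stdev,
    }
    return ops[operation](values)

# e.g. run_tool("mean", [2, 4, 6]) returns 4
```

The agent never reads `run_tool`’s body; it only matches the goal to the description and fills in `operation` and `values` from the schema.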

3. If your tools still aren’t being called, it may be your LLM
Finally, the LLM model used for the agent really does matter. I’ve found that the more powerful the model, the better the tools are recognised and parameters generated - the better the “reasoning” that occurs overall. I suspect that the more numerous and/or complex the tools given, the more context is needed to avoid hallucinations.

  • Keep tools simple and sparse.
    • :white_check_mark: Prefer SotA models with larger context windows
    • :white_check_mark: Different models may prefer different style of prompts. Don’t expect to be able to copy and paste the same prompt across models.
    • :x: Too many tools? Try splitting into multiple single domain agents.
    • :x: Too complex? Break down tools into smaller tools.
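In practice, the “break it down” advice could look like replacing one do-everything tool with several single-purpose ones - a sketch (tool names are made up, following the naming advice from point 1):

```python
import statistics

# Instead of one catch-all "MathStatistics" tool, expose several
# single-purpose tools, each with one obvious reason to be called.
def get_mean(values):
    """Call this tool to calculate the mean of a list of numbers."""
    return statistics.mean(values)

def get_median(values):
    """Call this tool to calculate the median of a list of numbers."""
    return statistics.median(values)

def get_standard_deviation(values):
    """Call this tool to calculate the standard deviation of a list of numbers."""
    return statistics.stdev(values)
```

Each tool now has a name and description the model can match to a goal without any extra reasoning about modes or parameters.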

Conclusion

Understanding AI tools is crucial to building great AI Agents, and once you get it, there are a lot of fun things you can achieve with them. n8n makes it incredibly easy for anyone to build and test their own agents and tools, and the quick iteration cycles are why I’m such a big fan!

Cheers,
Jim
Follow me on LinkedIn or Twitter
(Psst, if anyone else found this post useful… I help teams and individuals learn and build AI workflows for :dollar:! Let me know if I can help!)

