Some help to understand how the AI Agent "Custom Code Tool" needs to be used

Describe the problem/error/question

I want to learn how to use AI tools, and I know my limited coding knowledge may be part of my problem, but I need at least one good example to understand how tools work:
I tried doing some basic calculations with the "Calculator" tool, but it seems limited, so I tried to create my own tool. However, the behaviour is different and it doesn't work: when the agent calls the Calculator it sends an expression whenever a calculation is needed, but the custom tool only receives a "query" and doesn't follow any of the instructions in the tool's description.
I tried with (Ollama) llama3:8b, llama3:70b, phi3…
I can't find any example or information about how it is supposed to be used.

Please share your workflow

This is the workflow I'm using to learn and play with AI a little; maybe it will make my problem clearer:

Share the output returned by the last node

Wrong output type returned

The response property should be a string, but it is an object

Information on your n8n setup

  • n8n version: 1.42.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Debian 12

@miko

Sure, I can try to explain.

1. Help Agents understand and pick their tools.
A common mistake I see is that we put too much effort into the "how" of a tool and not enough into the "why". Let's roleplay for a second and imagine you're the agent and your equipped tools are the entirety of the AWS services catalog (100+ services!). You're asked to host a WordPress site… yikes! Where do you start? Well, I'd think you'd want to start with really clear descriptions of what problem each service (tool) solves! You wouldn't want to be bothered with how each service works internally. This is how I think about agent tools: the clearer the purpose of the tool, the more likely the agent will decide it's the best one for the goal it needs to accomplish. (There's an illustrative sketch of this after the list below.)

  • Name of tool should be descriptive.
    • :white_check_mark: "get_mean_median_or_standard_deviation", "simplify_quad_equation", "create_notion_task"
    • :x: "MathStatistics", "myTool", "UploadServicesProcess"
  • Description of tool should describe purpose.
    • :white_check_mark: "Call this tool to calculate the mean", "Call this tool to send a message", "This tool creates group calendar events"
    • :x: "This tool outputs json...", "You must split messages into an array...", "This tool calls 3 functions..."
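
To make this concrete, here is roughly what an agent "sees" when picking a tool: names and purpose-focused descriptions, not internals. This is an illustrative sketch only, not n8n's exact internal representation; it mirrors the common function-calling style, and the tool names are the hypothetical examples from above.

```js
// Illustrative only: the agent chooses a tool based on name + description alone.
const tools = [
  {
    name: 'get_mean_median_or_standard_deviation',
    description:
      'Call this tool to calculate the mean, median or standard deviation of a comma-separated list of numbers.',
  },
  {
    name: 'create_notion_task',
    description:
      'Call this tool to create a task in Notion from a title and an optional due date.',
  },
];
```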

2. Agents don’t run the tools, they call them.
Think of tool calling like calling a third-party API, e.g. AWS S3. When you want to put an object in S3, you don't necessarily care what internal processes are called or which datacenter was used. You just want to know how to give the API the file and have it confirm it was successful. It's the same with agents; knowing how you're calculating standard deviation is not useful to them, but knowing what parameters to pass in is! (There's a minimal Code Tool sketch after the list below.)

  • Define what API-like request parameters to pass to the tool.
    • :white_check_mark: Use the Input Schema to define request parameters (available from 1.43+).
    • :white_check_mark: Otherwise, define them in the tool description, e.g. { "function_name": { "type": "string" }, "values": { ... } }
    • :x: "Call method sin() to perform sine operation", "if this ..., then do that ..."
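
And because the original question hit the "The response property should be a string, but it is an object" error, here is a minimal sketch of what a Custom Code Tool body could look like, assuming the tool exposes the agent's input as the string "query" and must return a string (returning an object is what triggers that error). The tool's purpose ("get_mean"), the comma-separated input format and the parsing are hypothetical examples, not a prescribed pattern.

```js
// Minimal sketch for a hypothetical "get_mean" Code Tool.
// Assumption: the agent's input arrives as the string `query`, e.g. "2, 4, 6, 8".
const numbers = query
  .split(',')
  .map((n) => parseFloat(n.trim()))
  .filter((n) => !Number.isNaN(n));

if (numbers.length === 0) {
  // Return a readable string so the agent can recover and retry.
  return 'Error: no numbers found. Pass a comma-separated list, e.g. "2, 4, 6".';
}

const mean = numbers.reduce((sum, n) => sum + n, 0) / numbers.length;

// Return a string, not an object - otherwise you get
// "The response property should be a string, but it is an object".
// If you have structured data, JSON.stringify(...) it first.
return `The mean of [${numbers.join(', ')}] is ${mean}`;
```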

3. If your tools still aren’t being called, it may be your LLM
Finally, the LLM model used for the agent really does matter. I've found that the more powerful the model, the better the tools are recognised and the parameters generated, and the better the "reasoning" that occurs overall. I suspect the more numerous and/or complex the tools given, the more context will be needed to avoid hallucinations.

  • Keep tools simple and sparse.
    • :white_check_mark: Prefer SotA models with larger context windows.
    • :white_check_mark: Different models may prefer different styles of prompt. Don't expect to be able to copy and paste the same prompt across models.
    • :x: Too many tools? Try splitting into multiple single-domain agents.
    • :x: Too complex? Break tools down into smaller tools.

Conclusion

Understanding AI tools is crucial to building great AI Agents, and once you get it, there are a lot of fun things you can achieve with them. n8n makes it incredibly easy for anyone to build and test their own agents and tools, and the quick iteration cycles are why I'm such a big fan!

Cheers,
Jim
Follow me on LinkedIn or Twitter
(Psst, if anyone else found this post useful… I help teams and individuals learn and build AI workflows for :dollar:! Let me know if I can help!)

Demo