Best practice for building AI agent tools – built-in nodes or Call Workflow?

Hi,
I’m building an AI agent in n8n with tools that the agent can call to perform different tasks (e.g. querying APIs, updating a database). I’m wondering what the best approach is for structuring these tools:

  • Should I build all tools directly as nodes within the same workflow as the agent?
    or
  • Should I use the Call Workflow node and create each tool as a separate sub-workflow?

What are the pros and cons of each method, especially in terms of performance, maintainability, and passing context between tools?

Thanks in advance for any insights or recommendations!

I’m far from being a pro, but personally I make a small workflow for each tool. Then you can reuse it with other agents if needed. On top of that, you’ll be able to upgrade your tools as new nodes or services are developed.

1 Like

Hey @Kuzry,

Personally, I start with all tools in a single workflow; it’s the quickest way to get an agent running, and everything shares the same context.
Once tasks become reusable or start eating RAM, I move them into sub-workflows and call them with `Execute Sub-workflow`.
On n8n Cloud a sub-workflow doesn’t cost an extra execution, so the choice is mostly about organisation and performance.
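
The sub-workflow side of a tool is just a normal workflow that starts with the Execute Sub-workflow Trigger and ends with whatever you want handed back to the agent. As a rough sketch (the `orderId` field and the lookup are made-up placeholders), a Code node inside such a tool could look like this:

```js
// Runs inside the tool sub-workflow, after the Execute Sub-workflow Trigger.
// $input is the standard Code node helper; each item carries the fields the
// agent filled in when it called the tool.
const results = [];

for (const item of $input.all()) {
  // Hypothetical input field defined by the tool's schema.
  const orderId = item.json.orderId;

  // In a real tool you'd call an HTTP Request or database node instead;
  // this placeholder just returns something small and structured.
  results.push({
    json: {
      orderId,
      status: orderId ? 'shipped' : 'unknown',
    },
  });
}

// Whatever the sub-workflow's last node outputs is what the agent gets back
// as the tool result.
return results;
```
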
If you want to combine lots of different tasks, have a look at the Model Context Protocol (MCP). It lets you expose each sub-workflow as a standard tool that the agent can discover on the fly.
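
To make “discover on the fly” concrete: under the hood MCP is JSON-RPC 2.0, and the exchange looks roughly like this. The agent first asks the server which tools it exposes, then calls one by name with arguments matching the advertised schema. The tool name and fields below are invented for illustration:

```js
// Rough shape of an MCP tool exchange (JSON-RPC 2.0), trimmed for readability.

// 1. The agent discovers which tools the server exposes.
const listRequest = { jsonrpc: '2.0', id: 1, method: 'tools/list' };

const listResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    tools: [
      {
        name: 'get_order_status',      // e.g. one of your sub-workflows
        description: 'Look up an order status by ID',
        inputSchema: {                 // JSON Schema describing the tool input
          type: 'object',
          properties: { orderId: { type: 'string' } },
          required: ['orderId'],
        },
      },
    ],
  },
};

// 2. It then calls a tool by name with matching arguments.
const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: { name: 'get_order_status', arguments: { orderId: '12345' } },
};
```

You never write these messages by hand (whatever MCP server sits in front of your sub-workflows produces them), but seeing the shape makes it clear why the agent can pick up new tools without you rewiring the agent node.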

3 Likes

MCPs are all the rage now. The video below explains more, and this method seems to declutter the agent node, which keeps things simple and clean to maintain.

3 Likes

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.