npm: npm install n8n-nodes-utcp-codemode
GitHub: mj-deving/n8n-nodes-utcp-codemode
Hey everyone,
I built a community node that changes how AI Agents handle multi-tool pipelines. Instead of making a separate LLM call for each tool, the agent writes a single TypeScript code block that runs all tools at once in an isolated sandbox.
Why?
Every tool call = another LLM round-trip carrying the full conversation history. With 5 tools, that's 11 LLM calls (one to choose each tool, one to process each result, plus the final answer), each with a growing context. I benchmarked it:
| Metric | Traditional (5 tools) | Code-Mode (1 tool) | Savings |
|---|---|---|---|
| LLM calls | 11 | 1 | 91% |
| Tokens | ~18,000 | ~700 | 96% |
| Execution time | 12.5s | 2.5s | 80% |
At scale with GPT-4o pricing: 1,000 executions/day saves ~$15,800/year.
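Back-of-envelope check of that number, assuming GPT-4o's input rate of roughly $2.50 per 1M tokens and the token counts from the table above:

```typescript
// Savings estimate: tokens saved per execution × executions × price.
// The $2.50/1M-token rate is an assumed GPT-4o input price, not a quote.
const tokensTraditional = 18_000;
const tokensCodeMode = 700;
const execsPerDay = 1_000;
const pricePerMTokens = 2.5; // USD per 1M input tokens (assumed)

const tokensSavedPerYear =
  (tokensTraditional - tokensCodeMode) * execsPerDay * 365;
const dollarsSaved = (tokensSavedPerYear / 1_000_000) * pricePerMTokens;

console.log(Math.round(dollarsSaved)); // ≈ 15786, i.e. ~$15,800/year
```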
How it works
Traditional: Agent → LLM → tool_1 → LLM → tool_2 → LLM → tool_3 → LLM → ...
Code-Mode: Agent → LLM → writes TypeScript → sandbox runs all tools → done
The agent gets a single execute_code_chain tool. It writes the complete pipeline as code, which executes in an isolated-vm V8 sandbox with access to your registered tools.
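To make that concrete, here is a sketch of the kind of code the agent might emit for `execute_code_chain`. The tool names and the inline stubs are hypothetical; in the real sandbox, `tools` is injected with your registered UTCP tools instead:

```typescript
// Hypothetical agent-written pipeline. In the sandbox, `tools` is provided;
// the stubs below only stand in for registered tools so this runs standalone.
type Order = { id: string; total: number };

const tools = {
  // stub: would hit your CRM's HTTP API via UTCP
  getCustomer: async (email: string) => ({ id: "c1", email }),
  // stub: would call an orders MCP server
  listOrders: async (_customerId: string): Promise<Order[]> => [
    { id: "o1", total: 40 },
    { id: "o2", total: 60 },
  ],
  // stub: would call a notification tool
  sendEmail: async (_to: string, _body: string) => ({ sent: true }),
};

// Three tool calls chained in one code block: one LLM round-trip total.
async function pipeline(email: string): Promise<number> {
  const customer = await tools.getCustomer(email);
  const orders = await tools.listOrders(customer.id);
  const total = orders.reduce((sum, o) => sum + o.total, 0);
  await tools.sendEmail(email, `Lifetime spend: $${total}`);
  return total;
}

pipeline("a@b.com").then((t) => console.log(t)); // 100
```

Intermediate results (the customer record, the order list) stay inside the sandbox instead of being round-tripped through the model's context, which is where the token savings come from.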
Install
```
cd ~/.n8n/nodes
npm install n8n-nodes-utcp-codemode
# Restart n8n
```
Then connect Code-Mode Tool (AI > Tools) to any AI Agent node.
Configuration
- Tool Sources — JSON array of UTCP configs (MCP servers, HTTP APIs)
- Timeout — Max execution time (default: 30s)
- Memory Limit — Max sandbox memory (default: 128MB)
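For Tool Sources, something along these lines should work. The field names here are illustrative, not authoritative; check the repo's README for the exact UTCP config schema:

```json
[
  {
    "name": "weather_api",
    "call_template_type": "http",
    "url": "https://api.example.com/utcp"
  },
  {
    "name": "files",
    "call_template_type": "mcp",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
  }
]
```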
Good to know
- Works best with Claude and GPT-4o. Gemini needs more explicit prompting to write code proactively.
- Shines at 3+ tools. Single-tool workflows won’t see a difference.
- Built on UTCP code-mode + isolated-vm for secure execution.
Would love feedback from anyone running multi-tool agents. Especially curious about results on longer pipelines or different LLMs.