Hey everyone! I wanted to share an open-source project I’ve been building called OpenPawz — a local-first AI desktop agent that uses n8n as its integration engine.
The Problem
I love n8n, but I wanted a way for a local AI agent to autonomously use all of n8n’s integrations without hardcoding every single API. If the agent needs to check Slack, create a Trello card, and send an email — it should just do it, using the integrations I’ve already set up.
The Solution: MCP Bridge → n8n
OpenPawz connects to n8n via the Model Context Protocol (MCP) over SSE. Here’s the flow:
- The Architect (your main model: Claude, Gemini, GPT, or a local model via Ollama) decides what to do
- When it needs an external service, it calls an `mcp_*` tool
- The Foreman (a lightweight local model on Ollama, at zero API cost) intercepts the call and routes it through the MCP bridge to your n8n instance
- n8n executes it using its native nodes, with all your existing credentials and auth
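The routing step above can be sketched roughly as follows. This is an illustrative sketch, not OpenPawz's actual internals: the names `routeToolCall` and `BRIDGE_URL` and the Slack tool name are hypothetical.

```typescript
// Sketch of the Architect -> Foreman -> n8n routing decision.
// All identifiers here are illustrative, not OpenPawz's real API.

const BRIDGE_URL = "http://127.0.0.1:5678/mcp/sse"; // local n8n MCP endpoint

type ToolCall = { name: string; args: Record<string, unknown> };
type Route =
  | { kind: "mcp"; endpoint: string; call: ToolCall } // delegate via the bridge
  | { kind: "local"; call: ToolCall };                // handle in-process

function routeToolCall(call: ToolCall): Route {
  // Any tool prefixed mcp_* is an external-service request: the Foreman
  // forwards it over SSE to the n8n MCP server, which runs the matching
  // node with the credentials already stored in n8n.
  if (call.name.startsWith("mcp_")) {
    return { kind: "mcp", endpoint: BRIDGE_URL, call };
  }
  return { kind: "local", call };
}

// Example: the Architect asks for a Slack message to be sent.
const route = routeToolCall({
  name: "mcp_slack_post_message",
  args: { channel: "#general", text: "Build finished" },
});
console.log(route.kind); // route.kind === "mcp"
```

The point of the prefix convention is that the Architect never needs per-service plumbing: anything it cannot do in-process is just another `mcp_*` name in its tool list.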
The agent instantly gets access to every integration n8n supports. No new API keys to configure per-tool. No custom code. Your n8n credentials handle everything.
Why this matters for n8n users
- Your n8n instance becomes an AI-accessible tool library: every workflow, every node, every credential you've set up is now available to an autonomous agent
- Security stays local: keys never leave your n8n vault, the MCP bridge runs on `127.0.0.1`, and the Foreman runs entirely on your machine
- Bidirectional: the agent can read from and write to any connected service, chaining operations across services in a single conversation
- Zero cost for tool execution: the Foreman is a 7B model running locally on Ollama, so delegated MCP calls cost nothing
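The "security stays local" point boils down to never wiring the agent to a non-loopback endpoint. A minimal sanity check, assuming a hypothetical `isLoopbackEndpoint` helper (not part of OpenPawz or n8n), might look like:

```typescript
// Hypothetical guard: refuse any MCP endpoint that is not bound to loopback,
// so credentials and tool traffic never cross the network boundary.

function isLoopbackEndpoint(endpoint: string): boolean {
  const host = new URL(endpoint).hostname;
  // URL normalizes IPv6 literals to bracketed form, e.g. "[::1]".
  return host === "127.0.0.1" || host === "localhost" || host === "[::1]";
}

console.log(isLoopbackEndpoint("http://127.0.0.1:5678/mcp/sse")); // true
console.log(isLoopbackEndpoint("http://0.0.0.0:5678/mcp/sse"));   // false
```

A check like this is cheap insurance: `0.0.0.0` or a LAN address would expose every credentialed n8n node to anything on the network.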
Quick Setup
If you already run n8n, you’re 90% there:
- Enable the MCP Server in your n8n instance (SSE transport at `http://127.0.0.1:5678/mcp/sse`)
- Install OpenPawz and point it at your n8n endpoint
- Your n8n integrations appear as tools the agent can call
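Once connected, each exposed integration shows up to the agent as an MCP tool descriptor. The field names below (`name`, `description`, `inputSchema`) follow the MCP `tools/list` response shape; the specific Slack tool and its schema are made up for illustration:

```typescript
// Hypothetical tool catalogue as the agent might see it after discovery.
// Field layout follows MCP's tools/list; the entries themselves are examples.

interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

const discovered: McpTool[] = [
  {
    name: "mcp_slack_post_message",
    description: "Post a message to a Slack channel via the n8n Slack node",
    inputSchema: {
      type: "object",
      properties: { channel: { type: "string" }, text: { type: "string" } },
      required: ["channel", "text"],
    },
  },
  {
    name: "mcp_trello_create_card",
    description: "Create a Trello card via the n8n Trello node",
    inputSchema: {
      type: "object",
      properties: { list: { type: "string" }, title: { type: "string" } },
      required: ["list", "title"],
    },
  },
];

// The agent can filter the catalogue by capability before planning:
const slackTools = discovered.filter((t) => t.name.includes("slack"));
console.log(slackTools.map((t) => t.name)); // one match: mcp_slack_post_message
```

Because the schemas arrive with the tools, the model knows what arguments each n8n node expects without any per-integration configuration on the OpenPawz side.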
Links
- Website: openpawz.ai
- GitHub: github.com/OpenPawz/openpawz
- License: MIT, fully open source
The app is a Tauri v2 desktop app (Rust + TypeScript) that runs on macOS, Windows, and Linux, and it supports any LLM provider (OpenAI, Anthropic, Google, Ollama, OpenRouter, etc.).
I’d love feedback from n8n power users on the architecture — especially around which n8n nodes/integrations you’d want an AI agent to access most. What workflows would you automate if an agent could just call them?