OpenPawz: Connecting local AI agents to 25k+ tools via n8n's MCP bridge.

Hey everyone! I wanted to share an open-source project I’ve been building called OpenPawz — a local-first AI desktop agent that uses n8n as its integration engine.

The Problem

I love n8n, but I wanted a way for a local AI agent to autonomously use all of n8n’s integrations without hardcoding every single API. If the agent needs to check Slack, create a Trello card, and send an email — it should just do it, using the integrations I’ve already set up.

The Solution: MCP Bridge → n8n

OpenPawz connects to n8n via the Model Context Protocol (MCP) over SSE. Here’s the flow:

  1. The Architect (your main model — Claude, Gemini, GPT, or local via Ollama) decides what to do

  2. When it needs an external service, it calls an mcp_* tool

  3. The Foreman (a lightweight local model on Ollama — zero API cost) intercepts the call and routes it through the MCP bridge to your n8n instance

  4. n8n executes it using its native nodes — with all your existing credentials and auth

The agent instantly gets access to every integration n8n supports. No new API keys to configure per-tool. No custom code. Your n8n credentials handle everything.
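The four-step flow above boils down to a routing decision: anything named `mcp_*` goes to the Foreman, everything else stays with the Architect. A minimal sketch in TypeScript — all names here are illustrative assumptions, not OpenPawz's actual internals:

```typescript
// Sketch of the Architect -> Foreman routing described above.
// Function and type names are hypothetical, not OpenPawz's real API.

type ToolCall = { name: string; args: Record<string, unknown> };

// The Foreman only intercepts mcp_* tools; everything else stays
// with the main model (the Architect).
function routeToolCall(call: ToolCall): "foreman" | "architect" {
  return call.name.startsWith("mcp_") ? "foreman" : "architect";
}

// The local MCP/SSE endpoint from the setup section below.
const MCP_BRIDGE = "http://127.0.0.1:5678/mcp/sse";

function describeRoute(call: ToolCall): string {
  return routeToolCall(call) === "foreman"
    ? `delegate ${call.name} through the MCP bridge at ${MCP_BRIDGE}`
    : `handle ${call.name} in the main model`;
}
```

The key property is that the routing is purely name-based, so the Architect never needs to know which n8n integrations exist behind the bridge.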

Why this matters for n8n users

  • Your n8n instance becomes an AI-accessible tool library — every workflow, every node, every credential you’ve set up is now available to an autonomous agent

  • Security stays local — keys never leave your n8n vault, the MCP bridge runs on 127.0.0.1, and the Foreman runs entirely on your machine

  • Bidirectional — the agent can read from and write to any connected service. Chain operations across services in a single conversation

  • Zero cost for tool execution — the Foreman is a 7B model running locally on Ollama, so delegated MCP calls cost nothing

Quick Setup

If you already run n8n, you’re 90% there:

  1. Enable MCP Server in your n8n instance (SSE transport on http://127.0.0.1:5678/mcp/sse)

  2. Install OpenPawz and point it at your n8n endpoint

  3. Your n8n integrations appear as tools the agent can call

Links

The app is a Tauri v2 desktop app (Rust + TypeScript), runs on Mac/Windows/Linux, and supports any LLM provider (OpenAI, Anthropic, Google, Ollama, OpenRouter, etc.).

I’d love feedback from n8n power users on the architecture — especially around which n8n nodes/integrations you’d want an AI agent to access most. What workflows would you automate if an agent could just call them?

cool project! using n8n as the tool backend for a local agent via mcp makes a lot of sense — basically getting all the integrations for free without building custom connectors. curious about two things: how do you handle auth scoping? like if the agent can hit any workflow, is there a way to restrict which ones are exposed via mcp? and have you seen any stability issues with sse for longer-running operations? i’ve had connections drop on anything taking more than 30-40 seconds.

Nice architecture; using n8n as the MCP backend is a smart move. Gets you all the integrations without reinventing auth for every service.


On Benjamin’s two questions, which are worth addressing directly:

Auth scoping: n8n doesn’t have workflow-level MCP exposure controls built in yet. The practical workaround is to create a dedicated n8n user or API key that only has access to specific workflows, then route your MCP bridge through that credential. You can also add an allow-list check at the workflow trigger level: an early filter node that compares the incoming workflow name/ID against a list you define and returns early if it’s not on the list. Crude, but it works until n8n ships proper scoping.
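The allow-list logic itself is a few lines. A sketch in TypeScript — inside n8n you’d put the equivalent in an early Code node; the workflow names and response shape here are made up for illustration:

```typescript
// Sketch of the allow-list filter described above.
// Workflow names and the guard's response shape are illustrative assumptions.

const ALLOWED_WORKFLOWS = new Set(["slack-notify", "trello-card", "send-email"]);

type IncomingCall = { workflow: string };

// True only if the requested workflow is explicitly MCP-exposed.
function isAllowed(call: IncomingCall): boolean {
  return ALLOWED_WORKFLOWS.has(call.workflow);
}

// Return early with a refusal instead of executing anything off-list.
function guard(call: IncomingCall): { ok: boolean; reason?: string } {
  return isAllowed(call)
    ? { ok: true }
    : { ok: false, reason: `workflow "${call.workflow}" is not MCP-exposed` };
}
```

A deny-by-default set like this is the point: the agent can only reach workflows you’ve opted in, rather than everything the credential can see.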

SSE stability for long-running ops: The drop-at-30-40s thing is almost always a proxy timeout, not n8n itself. If you’re running behind nginx or another reverse proxy, the default proxy_read_timeout is 60s. For SSE you want:

proxy_read_timeout 3600;
proxy_buffering off;
proxy_cache off;

If the agent is connecting directly (no proxy), it’s usually the Node.js keep-alive or a network idle timeout. For operations expected to run longer than 30s, the pattern I’ve found most reliable is: trigger the n8n workflow via webhook, get back a job ID immediately, then poll a separate endpoint for status. Avoids SSE timeout issues entirely and makes the system more resilient to reconnects.
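That trigger-then-poll pattern is easy to sketch. In TypeScript — the endpoint path, job-ID shape, and status response here are assumptions, not n8n’s actual API:

```typescript
// Sketch of the trigger-then-poll pattern for long-running workflows.
// The status URL and response shape are hypothetical, not n8n's real API.

type JobStatus = { done: boolean; result?: unknown };
type Fetcher = (url: string) => Promise<JobStatus>;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// After the webhook trigger returns a job ID, poll a separate status
// endpoint until the workflow reports done (or we give up).
async function pollJob(
  jobId: string,
  fetchStatus: Fetcher,
  { intervalMs = 2000, maxAttempts = 30 } = {}
): Promise<unknown> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus(
      `http://127.0.0.1:5678/webhook/job-status/${jobId}` // hypothetical path
    );
    if (status.done) return status.result;
    await sleep(intervalMs);
  }
  throw new Error(`job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

Because each poll is a short, independent request, nothing has to hold a connection open past any proxy or idle timeout, and a dropped poll just retries on the next interval.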

Cool project, watching this one.