Hey everyone! When I posted n8n-claw about two weeks ago, I honestly wasn’t sure how it would be received. The response has been amazing! Thank you so much for the stars, forks, and feedback. It really motivated me to keep pushing.
So here’s what’s been added since the original post:
What’s new (v0.12.0)
Expert Agents:
The biggest addition so far. n8n-claw can now delegate complex tasks to specialized sub-agents — each with their own persona, tool access, and independent Claude instance:
Expert Agent system for multi-agent task delegation
3 default experts included: Research Expert, Content Creator, Data Analyst
Agent Library Manager: install or remove expert agents from a catalog
MCP Skills:
Install pre-built skills or build new API integrations on demand
Lots of skills in the skills library, like Mail, CalDAV, Notion, Todoist, Transport, etc.
Other features:
Telegram chat: talk to your AI agent directly via Telegram
Long-term memory: remembers conversations and important context with optional semantic search (RAG)
Task management: create, track, and complete tasks with priorities and due dates
Proactive heartbeat: automatically reminds you of overdue/urgent tasks
Morning briefing: daily summary of your tasks at a time you choose
Smart reminders: timed Telegram reminders (“remind me in 2 hours to…”)
Scheduled actions: the agent executes instructions at a set time (“search HN for AI news at 9am”)
Web search: searches the web via built-in SearXNG instance (no API key needed)
Web reader: reads webpages as clean markdown via Crawl4AI (JS rendering, no boilerplate)
Project memory: persistent markdown documents for tracking ongoing work across conversations
There’s still a ton of potential here, and I’m genuinely just one person working on this. If you’ve been thinking about contributing, testing, or just poking around, now is a great time.
The more people experiment with it and share feedback, the better this thing gets. The goal is still the same: an autonomous AI agent built in n8n that even non-programmers can understand, set up, and extend. I think we’re getting closer!
Would love to hear what features you’d want to see next!
Really impressive evolution here! The Expert Agents system is exactly what people building autonomous AI workflows need — delegating to specialized sub-agents instead of monolithic mega-prompts scales so much better. The MCP Skills integration + Telegram chat + long-term memory combination is particularly solid for real production use. How’s the performance on the semantic search for memory retrieval? That’s usually the bottleneck when scaling memory systems.
You’re right that memory retrieval is critical. Currently the semantic search uses pgvector with embeddings stored directly in Postgres, accessed via PostgREST (self-hosted Supabase stack). No external vector DB needed. For a single-user agent that’s plenty fast (sub-100ms), but I could see it becoming a bottleneck with thousands of memory entries or multi-user setups.
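For anyone unfamiliar with how pgvector ranks results: its `<=>` operator is cosine distance, and the query is essentially "order all rows by distance to the query embedding, take the top k". Here is a minimal pure-Python stand-in for that retrieval step; the table name, field names, and tiny 3-dimensional embeddings are illustrative only, not the project's actual schema:

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def top_k(memories, query_embedding, k=3):
    """Stand-in for: SELECT * FROM memory_long ORDER BY embedding <=> $1 LIMIT k."""
    return sorted(
        memories,
        key=lambda m: cosine_distance(m["embedding"], query_embedding),
    )[:k]

# Illustrative entries with 3-dim embeddings (real embeddings are e.g. 1536-dim).
memories = [
    {"text": "user prefers morning briefings at 7am", "embedding": [0.9, 0.1, 0.0]},
    {"text": "project deadline is Friday",            "embedding": [0.0, 0.8, 0.6]},
    {"text": "likes concise answers",                 "embedding": [0.7, 0.3, 0.1]},
]
print(top_k(memories, [1.0, 0.0, 0.0], k=2)[0]["text"])
```

In Postgres this ranking happens index-assisted (e.g. an HNSW or IVFFlat index), which is why it stays sub-100ms at single-user scale.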
The memory system has two layers: memory_daily captures everything from the day, then a nightly consolidation workflow summarizes that into memory_long with category tags and importance scores. That keeps the long-term memory table lean and the search relevant. The agent isn’t digging through raw chat logs.
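The nightly consolidation shape can be sketched roughly like this; the field names, the max-based importance rule, and the join-based "summary" are my own illustrative assumptions (the real workflow would produce an LLM summary), not the project's actual logic:

```python
from collections import defaultdict

def consolidate(daily_entries, min_importance=0.5):
    """Nightly pass: group the day's raw entries by category, collapse each
    group into one long-term record, and keep only records above an
    importance threshold. Field names are illustrative."""
    by_category = defaultdict(list)
    for entry in daily_entries:
        by_category[entry["category"]].append(entry)

    long_term = []
    for category, entries in by_category.items():
        importance = max(e["importance"] for e in entries)
        if importance >= min_importance:
            long_term.append({
                "category": category,
                # A real run would produce an LLM-written summary here.
                "summary": "; ".join(e["text"] for e in entries),
                "importance": importance,
            })
    return long_term

daily = [
    {"category": "tasks", "text": "finish report", "importance": 0.9},
    {"category": "tasks", "text": "email Anna", "importance": 0.6},
    {"category": "smalltalk", "text": "chatted about weather", "importance": 0.1},
]
print(consolidate(daily))
```

The point of the threshold is exactly what the reply describes: low-value chatter never reaches `memory_long`, so the searchable table stays small and relevant.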
For the Expert Agents: the key insight was keeping ONE runner workflow for all personas instead of duplicating workflows per agent. The persona gets loaded from the DB at runtime, so adding a new expert is just an INSERT. No workflow changes needed.
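The data-driven pattern behind "adding an expert is just an INSERT" looks roughly like this in miniature; the dict stands in for the personas table, and all names here are assumptions for illustration, not the project's actual schema:

```python
# Data-driven agent dispatch: one runner, personas stored as data rows.
# In the real system this dict would be a DB table queried at runtime.
PERSONAS = {
    "research_expert": {"system_prompt": "You research topics thoroughly.", "tools": ["web_search"]},
    "data_analyst":    {"system_prompt": "You analyze data and report findings.", "tools": ["sql"]},
}

def run_expert(name, task):
    """Single runner for every persona; the persona is looked up at runtime."""
    persona = PERSONAS.get(name)
    if persona is None:
        raise KeyError(f"unknown expert: {name}")
    # The real workflow would now call Claude with this system prompt and
    # tool list; here we just return the assembled request.
    return {"system": persona["system_prompt"], "tools": persona["tools"], "task": task}

# Adding a new expert needs no workflow change, only new data
# (the equivalent of an INSERT):
PERSONAS["content_creator"] = {"system_prompt": "You write engaging content.", "tools": ["web_reader"]}
print(run_expert("content_creator", "draft a blog post")["tools"])
```

The alternative, one duplicated workflow per agent, means every runner bugfix has to be applied N times; with the lookup approach it is applied once.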
Best, Friedemann
The two-layer design makes a lot of sense — daily capture with nightly consolidation keeps search quality stable as entries accumulate rather than degrading over time. The single runner for all personas is the right call, data-driven beats workflow-per-agent once you have more than a handful of agents. For multi-user down the road, user_id namespacing in the vector queries is probably the first thing to add before that becomes a bottleneck.
I didn’t like the way openclaw works and thought an AI-driven core with deterministic workflows would be way better, so I started building my own claw-like with n8n… AND then discovered that someone else had already beaten me to it weeks ago, with something way more advanced and mature too.
People are all hyping openclaw up, even big tech. But in practice I think we all eventually arrive at deterministic workflows for reliability and efficiency. THIS HERE I think is the better way forward, at least for the time being. Thank you for making this!