Hey everyone!
I've been using n8n for automation projects and kept running into the same problem: AI prompts scattered across dozens of workflows.
Every time I wanted to tweak a prompt, I had to:
- Find which workflow(s) use it
- Update each one manually
- Hope I didn't break anything
- And had no way to A/B test different versions
So I built xR2 — a centralized prompt management platform with a native n8n community node.
How it works with n8n:
1. Create your prompt in xR2 with variables like {customer_name}, {issue}
2. In n8n, add the xR2 node → Get Prompt action
3. Fill in variables right in the node UI (Variable Values section) — supports n8n expressions like {{ $json.customer_name }}
4. Get the fully rendered prompt back — ready to pass to OpenAI/Claude/etc.
No Code node needed for variable substitution — the xR2 node handles it natively. If you skip a variable, it falls back to the default value from your prompt definition.
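For anyone curious what that server-side substitution amounts to, here's a minimal sketch in Python. This is illustrative only, not the actual xR2 implementation: placeholders resolve from supplied values first, then fall back to defaults.

```python
import re

def render_prompt(template: str, variables: dict, defaults: dict) -> str:
    """Replace {name} placeholders; fall back to defaults for missing variables."""
    def sub(match):
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        # Unknown placeholders with no default are left untouched
        return str(defaults.get(name, match.group(0)))
    return re.sub(r"\{(\w+)\}", sub, template)

template = "Hi {customer_name}, we're looking into your issue: {issue}."
print(render_prompt(template, {"customer_name": "Ada"}, {"issue": "general inquiry"}))
# -> Hi Ada, we're looking into your issue: general inquiry.
```

The point being: because this happens on the xR2 side, the workflow only passes values in and receives the finished prompt string back.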
What you get:
- Change prompts instantly without touching workflows
- Version control (draft → testing → production)
- A/B test different prompts to see which converts better
- Track events (did the user complete signup after seeing this prompt?)
- Variables rendered server-side — no extra nodes in your workflow
Example workflow:
Webhook → xR2 (Get Prompt with variables) → OpenAI → Send Email → xR2 (Track Event)
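The pipeline above can be sketched end-to-end like this, with the xR2 and OpenAI calls stubbed out. Everything here (function names, slug, payload shape) is a stand-in for illustration, not the real API:

```python
def get_prompt(slug: str, variables: dict) -> str:
    # Stand-in for the xR2 "Get Prompt" action (template rendered server-side).
    template = "Hi {customer_name}, thanks for reaching out about {issue}."
    return template.format(**variables)

def call_llm(prompt: str) -> str:
    # Stand-in for the OpenAI node.
    return f"[model reply to: {prompt}]"

def track_event(name: str, payload: dict) -> None:
    # Stand-in for the xR2 "Track Event" action.
    print(f"event={name} payload={payload}")

# Webhook delivers the variables, the rest of the chain just passes data along.
webhook_payload = {"customer_name": "Ada", "issue": "billing"}
prompt = get_prompt("support-reply", webhook_payload)
reply = call_llm(prompt)
track_event("email_sent", {"reply_len": len(reply)})
```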
Install via: Settings → Community Nodes → search "n8n-nodes-xr2"
Links:
- Website: https://xr2.uk
- Docs: https://docs.xr2.uk/sdks/n8n/
- n8n node: https://www.npmjs.com/package/n8n-nodes-xr2
Happy to answer any questions.
Pretty cool and looks like a lot of work went into this. Hoped it was a free share, but we all have to eat. Looks good, but you may want to change the pricing a bit. IMO $20/mo is a steep ask.
Thanks for checking it out!
There’s a free tier that covers up to 100 prompt requests/month — enough to get started and test things out. But I hear you on the $20/month for the next step.
Curious — what would feel fair for you? And what’s your use case — how many workflows are you running with AI prompts?
Straight up. I’m not a potential client. I was expecting this to be a free workflow. I can probably build this myself if need be.
I would re-frame how you're going about selling this. Normal non-tech people will never get to the point of needing this. Techy people could build it themselves with time. I would pivot to an enterprise model with integration and support.
I could see this being a tool in some huge ai pipeline or something. An enterprise company running 100s of prompts has a much bigger need than a dev running maybe 5.
Something along the lines of your original pitch but with added support, maintenance, and integration into existing codebases and software. IMO an enterprise company would be willing to pay if they know you will maintain and support it; they'd be much more willing to include it in their pipeline vs a random fly-by-night AI SaaS company. Just my 1c.
Here’s a walkthrough showing how it works in practice
https://www.youtube.com/watch?v=k5eP2R-5T84
the scaling problem with distributed prompts is real — past ~10 AI workflows it turns into maintenance hell. the discussion with nembdev about pricing hits a nerve though: anyone complex enough to need centralized prompt management often ends up building it themselves. what’s your experience with the free tier — how many workflows can you realistically cover with 100 requests/month?
the 100 requests/month figure is outdated — free tier is now 1000/month, which covers most solo builders and small teams comfortably.
on the “build it yourself” point — fair, some people do. but the xR2 approach saves the infra work: versioning, rollback, the n8n node is already there. at some point building your own prompt store stops being free when you factor in the time.
1000/month changes the math significantly — that’s comfortable for most small teams. the self-build argument is real but so is the maintenance point: homebuilt prompt stores rot pretty fast when the person who built it leaves or a “temporary” json file on a server becomes load-bearing infrastructure. what does rollback look like in xR2 when a prompt change breaks something in production — is it instant or is there a propagation delay?
rollback is instant and manual — each prompt has versions (draft → testing → production), you flip which version is active with one click and all workflows picking up that slug get the updated version immediately, no propagation delay.
you won’t get automatic detection if a prompt change breaks something — that part is on you to catch. but the switch itself takes seconds vs having to dig through workflows and undo edits manually.
longer term the plan is to tie version switching to metrics — e.g. if conversion drops below a threshold, auto-rollback to the previous version. not there yet but it’s the obvious next step.
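The pointer-flip model described here can be sketched roughly like this (a toy data model for illustration, not xR2's actual internals):

```python
class PromptStore:
    """Each slug has multiple versions plus one 'active' pointer."""
    def __init__(self):
        self.versions = {}   # slug -> {version_id: prompt text}
        self.active = {}     # slug -> currently active version_id

    def publish(self, slug, version_id, text):
        self.versions.setdefault(slug, {})[version_id] = text
        self.active[slug] = version_id

    def rollback(self, slug, version_id):
        # The one "click": just move the pointer. No propagation step,
        # so the next fetch anywhere picks up the restored version.
        self.active[slug] = version_id

    def get(self, slug):
        return self.versions[slug][self.active[slug]]

store = PromptStore()
store.publish("welcome", "v1", "Hello!")
store.publish("welcome", "v2", "Hi there!!")   # v2 breaks something in prod
store.rollback("welcome", "v1")                # instant flip back
print(store.get("welcome"))  # -> Hello!
```

Because workflows fetch by slug rather than by version, nothing in n8n has to change for the rollback to take effect.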
instant manual rollback makes sense for v1 — you want a human decision when something breaks in prod anyway.
the metric-tied approach is the tricky part: what counts as “degradation” varies a lot by use case. conversion rate works for sales prompts, but for a support agent it might be escalation rate or resolution time. curious whether you’re thinking per-prompt metric configs or something more generic — because a one-size-fits-all threshold would be hard to tune.
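One hypothetical shape for per-prompt metric configs, since a single global threshold won't fit both a sales prompt and a support agent. All names and numbers below are made up for illustration:

```python
# Each prompt declares its own metric, direction, and threshold.
# "min" = metric must stay above the floor; "max" = must stay below the ceiling.
METRIC_CONFIGS = {
    "sales-pitch":   {"metric": "conversion_rate", "direction": "min", "threshold": 0.05},
    "support-agent": {"metric": "escalation_rate", "direction": "max", "threshold": 0.20},
}

def degraded(slug: str, observed: float) -> bool:
    """True if the observed metric crossed this prompt's own threshold."""
    cfg = METRIC_CONFIGS[slug]
    if cfg["direction"] == "min":
        return observed < cfg["threshold"]
    return observed > cfg["threshold"]

print(degraded("sales-pitch", 0.03))    # conversion fell below the floor -> True
print(degraded("support-agent", 0.10))  # escalations under the ceiling -> False
```

An auto-rollback loop would then call `degraded()` on a rolling window and flip the active version when it fires, rather than comparing every prompt against one global number.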
the scaling problem you described hits hard once you pass ~10 AI workflows. the free tier (1000/month) is realistic enough to test, but the real question isn’t pricing — it’s whether your prompts change frequently enough to justify external management. our experience: if you’re A/B testing or iterating on tone/rules regularly, centralized versioning pays for itself in reduced debugging time alone. what’s your main use case — are you testing variants or managing static prompts across many workflows?