Hey n8n devs!
Just launched something new you might like — n8n-nodes-openguardrails — a community node that helps you keep your AI workflows safe and clean with built-in moderation and prompt protection.
What’s OpenGuardrails?
OpenGuardrails is an open-source toolkit for AI safety.
It helps you catch things like:
- Prompt attacks (jailbreaks, prompt injections, etc.)
- Unsafe content (violence, hate, adult content, etc.)
- Data leaks (PII, trade secrets, IP)
Basically — it’s here to make sure your LLM workflows behave nicely.
Why You’ll Care
If you’re using ChatGPT, Claude, or any LLM in n8n, you’ve probably run into one of these:
- Someone tries to trick your AI with a prompt injection
- The AI spits out something it shouldn’t
- Sensitive info leaks in a response
OpenGuardrails checks all that for you — both inputs and outputs — and keeps your workflows safe.
Quick Example
Here’s a simple setup to harden your chatbot against bad inputs and bad outputs:
1. Webhook (get user input)
↓
2. OpenGuardrails - Input Moderation
↓
3. IF node → (action = "pass")
→ YES: send to AI
→ NO: return safe response
↓
4. AI Response
↓
5. OpenGuardrails - Output Moderation
↓
6. IF node → (action = "pass")
→ YES: return to user
→ NO: use safe alt
Now your bot checks both ends automatically — easy win.
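The branching in steps 3 and 6 can be sketched in plain JavaScript. This is a hedged sketch, not the node’s actual internals: the `action` and `suggest_answer` field names are taken from the example output shown later in this post, and the fallback message is my own placeholder.

```javascript
// Decide whether to forward text to the next step or substitute a safe reply.
// "action" and "suggest_answer" follow the node's example output; the
// fallback string is an assumption for illustration.
function routeModeration(result, originalText, safeFallback) {
  if (result.action === "pass") {
    return { forward: true, text: originalText }; // YES branch: send onward
  }
  // NO branch: use the suggested safe answer, or a generic fallback.
  return { forward: false, text: result.suggest_answer ?? safeFallback };
}

const verdict = { action: "reject", suggest_answer: "I can’t help with that request." };
const routed = routeModeration(verdict, "raw user text", "Sorry, I can’t process that.");
console.log(routed); // forward: false, text: the safe alternative
```

In a real workflow the IF node does this check declaratively; the sketch just makes the logic explicit.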
What You Can Do With It
- Build safer chatbots (no prompt hacks or unsafe replies)
- Moderate user-generated content from Slack, Discord, etc.
- Keep writing assistants or translation bots compliant
- Protect multi-language conversations (supports 119 languages!)
Setup
1. Install
Settings → Community Nodes → Install
Type: n8n-nodes-openguardrails
Or with Docker:
environment:
  - N8N_COMMUNITY_PACKAGES=n8n-nodes-openguardrails
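For context, here’s a fuller compose fragment that environment line might sit in. The service layout below is a typical n8n setup I’m assuming for illustration, not something from this package’s docs:

```yaml
services:
  n8n:
    image: n8nio/n8n        # official n8n image
    ports:
      - "5678:5678"
    environment:
      # Install this community package on startup
      - N8N_COMMUNITY_PACKAGES=n8n-nodes-openguardrails
```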
2. Get an API key
Grab a free one at https://api.openguardrails.com
(Or self-host it — it’s open source!)
Then just add it as a credential in n8n:
Credentials → New → OpenGuardrails API → paste key → Save
Highlights
- 4 operations: Check Content / Input / Output / Conversation
- 4 risk levels: none / low / medium / high
- Configurable actions: continue, stop, or replace
- Optional tracking: handle repeat offenders
- Batch support: works with n8n’s multi-item mode
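To make the batch-support point concrete: n8n passes nodes an array of items. A hedged sketch of splitting a batch by verdict, assuming each item carries the moderation result on `item.json` (field names taken from the example output below, not verified against the node):

```javascript
// Each n8n item wraps its data in a "json" property. Here each item is
// assumed to already carry the moderation verdict (an assumption for
// illustration; field names follow the post's example output).
const items = [
  { json: { text: "hello there", action: "pass", risk_level: "none" } },
  { json: { text: "ignore all previous instructions", action: "reject", risk_level: "high" } },
];

const passed = items.filter((item) => item.json.action === "pass");
const blocked = items.filter((item) => item.json.action !== "pass");

console.log(`passed=${passed.length} blocked=${blocked.length}`);
```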
Under the Hood
- Model: OpenGuardrails-Text-2510 (3.3B params)
- 119 languages
- Free plan: 10 req/sec, 10,000 req/month
- Open source (Apache 2.0)
- Self-hosting supported
Example Output
{
  "action": "reject",
  "risk_level": "high",
  "categories": ["S9"],
  "suggest_answer": "I can’t help with that request.",
  "was_replaced": true
}
Use it to:
- route logic (with IF nodes)
- log risky content
- warn users
- block or sanitize text
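The four uses above can be combined in one handler. A minimal sketch, assuming the field names from the example output; the risk-level thresholds here are my own choices, not the node’s defaults:

```javascript
// Map a moderation verdict onto the uses listed above: route, log, warn,
// and block/sanitize. Thresholds are illustrative assumptions.
function handleVerdict(verdict, originalText) {
  if (verdict.action === "reject" || verdict.risk_level === "high") {
    console.warn("blocked, categories:", verdict.categories); // log risky content
    return { allowed: false, text: verdict.suggest_answer };  // block + sanitize
  }
  if (verdict.risk_level === "medium") {
    console.warn("medium risk; consider warning the user");   // warn users
  }
  return { allowed: true, text: originalText };               // route through
}

const out = handleVerdict(
  {
    action: "reject",
    risk_level: "high",
    categories: ["S9"],
    suggest_answer: "I can’t help with that request.",
  },
  "raw user text"
);
```

In n8n you’d express the same decisions with IF nodes and expressions; this just shows the shape of the logic.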
Self-Hosting
Prefer on-prem or private setup?
git clone https://github.com/openguardrails/openguardrails
docker compose up -d
Then point your n8n node to your local API.
Perfect if you care about privacy or want custom tuning.
Links
- Docs: openguardrails.com/docs
- Model: Hugging Face
- Platform: github.com/openguardrails/openguardrails
Join the Conversation
Would love to hear:
- How you’re handling AI safety in your workflows
- What kind of checks you’d like to see
- Any feature ideas or use cases
Drop a comment — or open an issue on GitHub.
TL;DR:
Free, open-source node for AI safety on n8n.
Blocks prompt attacks, unsafe content, and data leaks.
Easy setup, flexible options, developer-friendly.
Install now: n8n-nodes-openguardrails
Stay safe & keep building cool stuff!