One workflow, multiple AI prompts — dynamic prompt switching with xR2

Hey everyone!

A common question I see on the forum: “I have the same workflow but need different system prompts for different clients/scenarios. How do I avoid duplicating the whole workflow?”

Built a template that solves this. One workflow handles unlimited clients — each with their own AI personality, tone, and instructions. No duplicating workflows, no giant Switch nodes with blocks of text.

How it works:

Each prompt lives in xR2 (prompt management platform) with a unique slug. The workflow receives a request with a prompt_slug parameter, fetches the matching prompt from xR2, sends it to OpenAI, and returns the response.

  • support-acme → friendly support agent for ACME Corp
  • support-globex → formal assistant for Globex Industries
  • sales-bot → sales qualification bot

Same workflow. Different AI behavior. Change a prompt in the xR2 dashboard and the workflow picks it up on the next request; no need to touch n8n.
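To make the routing concrete, here's a minimal sketch of the dispatch step as it might look in an n8n Code node. In the real workflow the prompts live in xR2 and are fetched over HTTP by slug; the plain object below just stands in for that lookup, and `resolvePrompt` is a hypothetical name:

```javascript
// Stand-in for the xR2 prompt store: slug -> system prompt.
// In the actual workflow this lookup is an HTTP call to xR2.
const prompts = {
  "support-acme": "You are a friendly support agent for ACME Corp.",
  "support-globex": "You are a formal assistant for Globex Industries.",
  "sales-bot": "You are a sales qualification bot.",
};

// Resolve the system prompt for an incoming webhook payload.
function resolvePrompt(payload) {
  const prompt = prompts[payload.prompt_slug];
  if (!prompt) {
    // Fail loudly on unknown slugs instead of silently falling through.
    throw new Error(`Unknown prompt_slug: ${payload.prompt_slug}`);
  }
  return prompt;
}
```

The point is that the workflow itself never branches on client identity; the slug is the only routing key.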

Workflow JSON (copy and import via Import from File):

One workflow, multiple AI prompts — dynamic prompt switching with xR2.json (6.0 KB)

Setup:

  1. Install xR2 node: Settings → Community Nodes → n8n-nodes-xr2
  2. Get free API key at xr2.uk
  3. Create prompts with slugs matching your use cases
  4. Import this workflow, add your credentials (xR2 + OpenAI)
  5. Test with: POST to webhook URL with body:
    {"prompt_slug": "support-acme", "customer_name": "John", "message": "How to reset password?"}

Video walkthrough: link

Happy to answer questions!

nice template! externalizing prompts instead of hardcoding them in the workflow is one of those patterns that saves so much duplication in practice. one thing i added to a similar setup: a local prompt cache as fallback in case the external platform is unreachable — just a json file on the server that syncs every 30 min. also worth considering: making the model configurable per slug too, not just the prompt. high-priority tickets go through a bigger model, standard stuff through something cheaper. have you published the template on github by any chance?
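A rough sketch of that fallback idea (all names hypothetical): try the external platform first, and fall back to a locally synced copy only when the remote lookup fails. In practice `fetchRemote` would be an async HTTP call to xR2 and `localCache` would be parsed from the synced JSON file; this sync version just shows the control flow:

```javascript
// Try the remote prompt platform; on failure, serve the local cache.
// fetchRemote: (slug) => prompt text, throws if the platform is unreachable.
// localCache: plain object, e.g. parsed from a JSON file synced every 30 min.
function getPrompt(slug, fetchRemote, localCache) {
  try {
    return fetchRemote(slug);
  } catch (err) {
    if (slug in localCache) {
      return localCache[slug]; // stale at worst, but keeps the workflow up
    }
    throw err; // no fallback available either
  }
}
```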


thanks! the local cache fallback is solid, will think about adding that.

model-per-slug is a good idea — right now xR2 stores prompts with slug and version only, model config isn’t a field yet. worth adding to the roadmap though.

no github yet, just the json attached. happy to put it up properly if there’s demand

the github request is real demand — json attachments in forum posts are hard to version and diff. even a simple repo would make it easier to track changes over time. for the model-per-slug question: one lightweight workaround without needing xR2 to store model config is a small mapping in n8n itself — slug prefix → model name. so "premium-" slugs route to the bigger model and "standard-" to something cheaper. not as clean as having it in xR2 but works fine until the feature ships.
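something like this in a Code node would do it (model names here are just examples, pick whatever fits your cost/quality tradeoff):

```javascript
// Slug-prefix -> model routing, kept in n8n until xR2 stores model config.
const MODEL_BY_PREFIX = {
  "premium-": "gpt-4o",       // bigger model for high-priority slugs
  "standard-": "gpt-4o-mini", // cheaper model for routine traffic
};
const DEFAULT_MODEL = "gpt-4o-mini"; // anything without a known prefix

function modelForSlug(slug) {
  for (const [prefix, model] of Object.entries(MODEL_BY_PREFIX)) {
    if (slug.startsWith(prefix)) return model;
  }
  return DEFAULT_MODEL;
}
```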