API Key Proxy for AI Agents (OpenRouter / OpenAI-compatible)

I built a simple 2-node workflow that proxies OpenAI-compatible API requests to OpenRouter, keeping the API key off the calling machine. Sharing it in case others find it useful.

The problem: If you run local AI agents (LangChain, CrewAI, OpenClaw, etc.), the API key for your cloud LLM typically lives on the agent’s machine. If the agent is compromised via prompt injection, that key can be exfiltrated.

The fix: Move the key to your n8n instance on a separate machine. The agent calls a webhook with a proxy token; n8n validates the token and injects the real API key into the upstream call. The agent never has the key: not on disk, not in memory.
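
As a sketch of the agent side (the URL and token below are placeholders, not from the repo), you can point the OpenAI Python SDK straight at the webhook. Two assumptions: the SDK appends /chat/completions to the base URL, so the webhook path has to end that way, and the webhook’s header-auth check has to expect the Authorization header the SDK sends.

```python
# Hypothetical agent-side setup: the SDK talks to the n8n webhook, and the
# "API key" it holds is only the proxy token, never the real OpenRouter key.
from openai import OpenAI

client = OpenAI(
    base_url="https://n8n.example.internal/webhook/llm-proxy",  # placeholder URL
    api_key="proxy-token-abc123",  # proxy auth token, the only secret on the agent
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # any model your OpenRouter account can reach
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(resp.choices[0].message.content)
```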

Workflow: Webhook → HTTP Request. That’s it. The Webhook node validates a proxy auth token, and the HTTP Request node forwards the request body to OpenRouter with the real key. It uses responseMode: lastNode so the upstream response passes straight back to the agent.
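
To make the flow concrete, here is roughly what the two nodes do, written as a tiny Flask app. This is an illustration of the logic, not the actual n8n internals; names like PROXY_TOKEN and the env vars are placeholders.

```python
# Minimal sketch of the proxy flow: validate the agent's token, then
# forward the body upstream with the real key injected on this host.
import os

import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
PROXY_TOKEN = os.environ["PROXY_TOKEN"]      # what the agent presents
UPSTREAM_KEY = os.environ["OPENROUTER_KEY"]  # real key, lives only on this host
UPSTREAM_URL = "https://openrouter.ai/api/v1/chat/completions"

@app.post("/webhook/llm-proxy/chat/completions")
def proxy():
    # Webhook node: reject the request unless the proxy token checks out.
    if request.headers.get("Authorization") != f"Bearer {PROXY_TOKEN}":
        abort(401)
    # HTTP Request node: forward the body upstream with the real key.
    upstream = requests.post(
        UPSTREAM_URL,
        json=request.get_json(),
        headers={"Authorization": f"Bearer {UPSTREAM_KEY}"},
        timeout=120,
    )
    # responseMode: lastNode -- the upstream response passes straight back.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )
```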

Tip: Create each credential from inside the node that uses it (not beforehand); that avoids confusion when both credentials show as “Header Auth account” in the dropdown.

Works with any OpenAI-compatible API; just change the upstream URL in the HTTP Request node.
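
In the Flask sketch above, that swap is literally two values; in the n8n workflow it’s the URL field of the HTTP Request node plus the matching key credential (illustrative values below):

```python
import os

# Pointing the same proxy at OpenAI instead of OpenRouter: only the
# upstream URL and key change (any OpenAI-compatible provider works).
UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"
UPSTREAM_KEY = os.environ["OPENAI_API_KEY"]
```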

GitHub (MIT licensed): https://github.com/KeithBrodie/n8n-openrouter-proxy (n8n workflow that proxies OpenAI-compatible API requests to OpenRouter, keeping the API key off the AI agent host)