Something I figured out the hard way after months of debugging across Lark, OpenAI, and internal APIs.
The problem isn’t hardcoding. It’s the execution boundary.
Most advice about n8n credential safety focuses on “don’t hardcode secrets.” That’s correct but incomplete. The deeper issue: n8n treats credentials as data that flows through the execution graph, which means they surface in:
- Execution logs (every input/output is recorded)
- Exported workflow JSON (n8n’s built-in Credential store helps, but headers set by hand in HTTP Request nodes are exported verbatim)
- Any AI tool you give the workflow to for editing
The pattern I kept hitting: a workflow would be clean at the node level, then a teammate would add a “quick test” HTTP node with a pasted token, check it in, and now the repo had a live credential in JSON.
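For concreteness, here’s roughly what that failure looks like in the export. A hypothetical excerpt of such a node, shown as a TypeScript literal (field names based on n8n’s HTTP Request export format, so treat the exact shape as an assumption):

```typescript
// Hypothetical excerpt of an exported "quick test" HTTP Request node.
// The pasted header is serialized verbatim into the workflow JSON.
const leakyNode = {
  "name": "Quick test",
  "type": "n8n-nodes-base.httpRequest",
  "parameters": {
    "url": "https://open.larksuite.com/open-apis/...",
    "sendHeaders": true,
    "headerParameters": {
      "parameters": [
        // This value is now in the repo, in every execution log,
        // and in anything you paste the workflow into.
        { "name": "Authorization", "value": "Bearer t-live-token-example" }
      ]
    }
  }
};
```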
The structural fix: push credentials behind an execution boundary
Instead of having n8n hold and pass credentials, route all outbound calls through a proxy that holds credentials separately. n8n only needs one rotating token to talk to the proxy. The proxy injects real credentials on the wire.
n8n → proxy (proxy_token) → Lark API (real tenant_access_token)
                          → OpenAI (real sk-...)
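A minimal sketch of such a proxy, assuming Node’s built-in http/https modules; the hosts, env var names, and path-prefix routing are placeholders, not a prescribed layout:

```typescript
// Minimal credential-injecting proxy sketch (Node 18+, TypeScript).
// n8n authenticates with PROXY_TOKEN only; real credentials live in the
// proxy's own environment and are swapped in on the wire.
import http from "node:http";
import https from "node:https";

const PROXY_TOKEN = process.env.PROXY_TOKEN!; // the one rotating token n8n holds

// Hypothetical routing table: path prefix → upstream host + real credential.
const upstreams: Record<string, { host: string; auth: string }> = {
  "/lark":   { host: "open.larksuite.com", auth: `Bearer ${process.env.LARK_TENANT_TOKEN}` },
  "/openai": { host: "api.openai.com",     auth: `Bearer ${process.env.OPENAI_API_KEY}` },
};

http
  .createServer((req, res) => {
    // Reject anything that doesn't carry the proxy token.
    if (req.headers.authorization !== `Bearer ${PROXY_TOKEN}`) {
      res.writeHead(401).end("bad proxy token");
      return;
    }
    const prefix = Object.keys(upstreams).find((p) => req.url?.startsWith(p));
    if (!prefix) {
      res.writeHead(404).end("unknown upstream");
      return;
    }
    const { host, auth } = upstreams[prefix];
    // Forward the request, replacing the proxy token with the real credential.
    const upstream = https.request(
      {
        host,
        path: req.url!.slice(prefix.length) || "/",
        method: req.method,
        headers: { ...req.headers, host, authorization: auth },
      },
      (up) => {
        res.writeHead(up.statusCode ?? 502, up.headers);
        up.pipe(res);
      }
    );
    upstream.on("error", () => res.writeHead(502).end("upstream error"));
    req.pipe(upstream);
  })
  .listen(8080);
```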
What this changes:
- n8n’s execution logs only show the proxy token, never the real credentials
- Exported workflow JSON contains credential id/name references (if you use n8n’s native Credential store for the proxy token), not real secrets
- You can hand the workflow JSON to an AI coding tool safely
- Token refresh, caching, and rotation happen at the proxy layer; n8n doesn’t know or care (see the sketch after this list)
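To make the last point concrete, a sketch of lazy token refresh at the proxy, using Lark’s tenant_access_token endpoint (endpoint and response fields follow Lark’s auth API as I know it; env var names are placeholders):

```typescript
// Cache the Lark tenant_access_token at the proxy and refresh it lazily.
// Lark returns `expire` in seconds (typically 7200); refresh with a safety
// margin so in-flight requests never carry a just-expired token.
let cached: { token: string; expiresAt: number } | null = null;

async function larkTenantToken(): Promise<string> {
  if (cached && Date.now() < cached.expiresAt - 60_000) return cached.token;
  const res = await fetch(
    "https://open.larksuite.com/open-apis/auth/v3/tenant_access_token/internal",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        app_id: process.env.LARK_APP_ID,
        app_secret: process.env.LARK_APP_SECRET,
      }),
    }
  );
  const data = await res.json();
  cached = {
    token: data.tenant_access_token,
    expiresAt: Date.now() + data.expire * 1000,
  };
  return cached.token;
}
```

n8n never sees this token; it only ever holds the proxy token.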
The part people miss
The proxy token itself can still leak if you paste it directly into an HTTP Request node header. The correct pattern is to store it as an n8n Header Auth Credential and reference it by name. Then the JSON only carries the credential’s id, not the value.
This sounds obvious but I’ve seen it missed repeatedly — including by myself on the first attempt.
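For contrast with the leaky node above, here’s what the safe version looks like in the export, again as a hypothetical TypeScript-literal excerpt (field names based on n8n’s export format):

```typescript
// The node references the Header Auth credential by id/name; the token value
// stays in n8n's encrypted credential store and never enters the JSON.
const safeNode = {
  "name": "Call proxy",
  "type": "n8n-nodes-base.httpRequest",
  "parameters": {
    "url": "https://proxy.internal/lark/contacts",
    "authentication": "genericCredentialType",
    "genericAuthType": "httpHeaderAuth"
  },
  "credentials": {
    "httpHeaderAuth": { "id": "42", "name": "proxy-token" }
  }
};
```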
Trade-offs worth knowing
- You’re adding a network hop. For approval-class workflows (latency-insensitive), this is imperceptible. For high-frequency data pipelines, benchmark first.
- You’re centralizing trust. The proxy becomes a single point of failure and a single point of audit. Both can be good or bad depending on your setup.
- The proxy needs monitoring. If it goes down, every workflow that depends on it stops. n8n’s built-in error handling won’t distinguish “API returned 403” from “proxy is unreachable” without explicit checks (a sketch of such a check follows this list).
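One way to make that distinction explicit, e.g. in an n8n Code node, assuming a Node 18+ runtime with global fetch (the proxy URL and error wording are placeholders):

```typescript
// Separate "proxy unreachable" from "upstream rejected the call".
async function callViaProxy(path: string, proxyToken: string): Promise<unknown> {
  let res: Response;
  try {
    res = await fetch(`https://proxy.internal${path}`, {
      headers: { Authorization: `Bearer ${proxyToken}` },
    });
  } catch (err) {
    // fetch rejects only on network-level failure (DNS, TLS, connection
    // refused): this is "proxy is down", not an API verdict.
    throw new Error(`proxy unreachable: ${(err as Error).message}`);
  }
  if (!res.ok) {
    // A status came back, so the proxy was reachable; a 403 here means the
    // upstream API (or the proxy's own auth check) rejected the call.
    throw new Error(`upstream error ${res.status}: ${await res.text()}`);
  }
  return res.json();
}
```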
Running this pattern across finance workflows (invoice OCR, Lark approvals, reconciliation). Happy to compare notes if you’re doing something similar.