Why your n8n workflow JSON is leaking credentials — and the architectural fix

Something I figured out the hard way after months of debugging across Lark, OpenAI, and internal APIs.


The problem isn’t hardcoding. It’s the execution boundary.

Most advice about n8n credential safety focuses on “don’t hardcode secrets.” That’s correct but incomplete. The deeper issue: n8n treats credentials as data that flows through the execution graph, which means they surface in:

  • Execution logs (every input/output is recorded)
  • Exported workflow JSON (n8n’s built-in Credential store keeps values out of the export, but headers set by hand in an HTTP Request node are exported verbatim)
  • Any AI tool you give the workflow to for editing

The pattern I kept hitting: a workflow would be clean at the node level, then a teammate would add a “quick test” HTTP node with a pasted token, check it in, and now the repo had a live credential in JSON.

The structural fix: push credentials behind an execution boundary

Instead of having n8n hold and pass credentials, route all outbound calls through a proxy that holds credentials separately. n8n only needs one rotating token to talk to the proxy. The proxy injects real credentials on the wire.

```
n8n → proxy (proxy_token) → Lark API (real tenant_access_token)
                          → OpenAI  (real sk-...)
```
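
Here’s a minimal sketch of such a proxy, assuming Node 18+ (global fetch) and Express. The env var names, route scheme, and upstream list are all illustrative, and hardening (timeouts, retries, allow-lists) is omitted:

```typescript
// proxy.ts: a minimal credential-injecting proxy. n8n authenticates with
// one rotating token; real upstream credentials never leave this process.
import express from "express";

const app = express();
app.use(express.json());

// The only token n8n ever holds. Rotate it freely; workflows don't change.
const PROXY_TOKEN = process.env.PROXY_TOKEN!;

// Real credentials live here (or in a vault), not in n8n's database.
const UPSTREAMS: Record<string, { base: string; auth: () => string }> = {
  lark:   { base: "https://open.larksuite.com", auth: () => `Bearer ${process.env.LARK_TENANT_TOKEN}` },
  openai: { base: "https://api.openai.com",     auth: () => `Bearer ${process.env.OPENAI_API_KEY}` },
};

app.use(async (req, res) => {
  // 1. Authenticate the caller (n8n) against the proxy token only.
  if (req.headers.authorization !== `Bearer ${PROXY_TOKEN}`) {
    return res.status(401).json({ error: "bad proxy token" });
  }

  // 2. First path segment selects the upstream: /lark/..., /openai/...
  const [, name, ...rest] = req.path.split("/");
  const upstream = UPSTREAMS[name];
  if (!upstream) return res.status(404).json({ error: "unknown upstream" });

  // 3. Inject the real credential on the wire; n8n never sees it.
  const upstreamRes = await fetch(`${upstream.base}/${rest.join("/")}`, {
    method: req.method,
    headers: { authorization: upstream.auth(), "content-type": "application/json" },
    body: ["GET", "HEAD"].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
  res.status(upstreamRes.status).send(await upstreamRes.text());
});

app.listen(8080);
```

Point the workflow’s HTTP Request nodes at the proxy’s URL with the proxy token in a Header Auth credential, and the exported JSON stays clean.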

What this changes:

  • n8n’s execution logs only show the proxy token, never the real credentials
  • Exported workflow JSON contains credential id/name references (if you use n8n’s native Credential store for the proxy token), not real secrets
  • You can hand the workflow JSON to an AI coding tool safely
  • Token refresh, caching, and rotation happen at the proxy layer — n8n doesn’t know or care

The part people miss

The proxy token itself can still leak if you paste it directly into an HTTP Request node header. The correct pattern is to store it as an n8n Header Auth Credential and reference it by name. Then the JSON only carries the credential’s id, not the value.
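
To make that concrete, here’s roughly what the two versions look like in the export, written out as annotated objects (field names approximated from n8n’s export format; the proxy URL, credential id, and token value are made up):

```typescript
// Leaky: the token is literal data in the export, the logs, and the repo.
const leakyNode = {
  type: "n8n-nodes-base.httpRequest",
  parameters: {
    url: "https://proxy.internal/openai/v1/chat/completions",
    sendHeaders: true,
    headerParameters: {
      parameters: [{ name: "Authorization", value: "Bearer prx_live_abc123" }],
    },
  },
};

// Safe: the export carries only an opaque reference into n8n's credential
// store; the actual header value never leaves the n8n database.
const safeNode = {
  type: "n8n-nodes-base.httpRequest",
  parameters: {
    url: "https://proxy.internal/openai/v1/chat/completions",
    authentication: "genericCredentialType",
    genericAuthType: "httpHeaderAuth",
  },
  credentials: {
    httpHeaderAuth: { id: "42", name: "proxy-token" },
  },
};
```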

This sounds obvious but I’ve seen it missed repeatedly — including by myself on the first attempt.

Trade-offs worth knowing

  • You’re adding a network hop. For approval-class workflows (latency-insensitive), this is imperceptible. For high-frequency data pipelines, benchmark first.
  • You’re centralizing trust. The proxy becomes a single point of failure and a single point of audit. Both can be good or bad depending on your setup.
  • The proxy needs monitoring. If it goes down, every workflow that depends on it stops. n8n’s built-in error handling won’t distinguish “API returned 403” from “proxy is unreachable” without explicit checks, such as a pre-flight health check before critical runs or error classification inside the workflow (see the sketch after this list).
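
On that last point, here’s the kind of explicit check I mean, sketched as a plain fetch wrapper rather than anything n8n-specific (the names and error shapes are mine):

```typescript
// Classify a failed call so a workflow can branch on "the proxy is down"
// vs "the upstream rejected us".
type CallResult =
  | { ok: true; status: number; body: string }
  | { ok: false; reason: "proxy-unreachable" | "upstream-error"; detail: string };

async function callViaProxy(url: string, proxyToken: string): Promise<CallResult> {
  let res: Response;
  try {
    res = await fetch(url, { headers: { authorization: `Bearer ${proxyToken}` } });
  } catch (err) {
    // Network-level failure (DNS, refused connection, timeout):
    // the proxy is unreachable. Page someone; retrying won't help.
    return { ok: false, reason: "proxy-unreachable", detail: String(err) };
  }
  if (!res.ok) {
    // HTTP-level failure: the proxy answered, but it (or the upstream)
    // rejected the request. A real 403 lands here.
    return { ok: false, reason: "upstream-error", detail: `HTTP ${res.status}` };
  }
  return { ok: true, status: res.status, body: await res.text() };
}
```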

Running this pattern across finance workflows (invoice OCR, Lark approvals, reconciliation). Happy to compare notes if you’re doing something similar.


I’ve been tackling a similar issue from a different angle, using n8n’s external secrets store (Vault/AWS Secrets Manager) so credentials never even hit the n8n database. It’s a different tradeoff: no extra network hop, but n8n’s uptime becomes tied to your secrets backend. Your proxy approach actually handles that failure isolation better. Hadn’t looked at it from that perspective before.


That makes sense. The external secrets route feels like a very good fit if the goal is to keep things clean while staying inside n8n’s native model.

What pushed me toward the proxy route was slightly different: I wanted n8n to never touch the real credential at all, not just avoid storing it. With an external secrets store, n8n still has to retrieve and use the secret at runtime. With the proxy pattern, that boundary moves fully outside the workflow layer.

A nice side effect for me was that token-lifecycle quirks (like Lark’s 2-hour refresh behavior) also get absorbed by the proxy instead of leaking into workflow logic.
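
For the curious, “absorbed by the proxy” looks something like this: a small in-memory cache inside the proxy that refreshes ahead of expiry. The endpoint is Lark’s internal-app token route as I understand it from their docs; the five-minute margin and the names are my own:

```typescript
// In-memory cache for Lark's tenant_access_token, living inside the proxy.
// Workflows always present the same proxy token; the 2-hour refresh cycle
// happens here and never leaks into workflow logic.
let cached: { token: string; expiresAt: number } | null = null;

async function larkToken(): Promise<string> {
  // Reuse the cached token until 5 minutes before expiry, so a token
  // can't lapse halfway through a long-running request.
  if (cached && Date.now() < cached.expiresAt - 5 * 60_000) return cached.token;

  const res = await fetch(
    "https://open.larksuite.com/open-apis/auth/v3/tenant_access_token/internal",
    {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        app_id: process.env.LARK_APP_ID,
        app_secret: process.env.LARK_APP_SECRET,
      }),
    },
  );
  const data = (await res.json()) as {
    code: number;
    tenant_access_token: string;
    expire: number; // lifetime in seconds, ~7200 for Lark
  };
  if (data.code !== 0) throw new Error(`Lark token refresh failed: code ${data.code}`);
  cached = { token: data.tenant_access_token, expiresAt: Date.now() + data.expire * 1000 };
  return cached.token;
}
```

The upstream table in the earlier sketch would then call larkToken() instead of reading a static env var.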

So yeah — less “one is better,” more “they optimize for different trust boundaries.”

The “single stable token no matter what’s going on underneath” framing is probably the clearest way to explain it. That’s also the part that’s surprisingly hard to get across until someone has actually had a token expire halfway through a workflow.

The pre-flight health check before critical runs also feels like a really solid pattern. I’ve mostly been relying on async monitoring, but checking at invocation time gives you a much cleaner failure mode — you know something is wrong before the workflow starts, instead of finding out in the middle of execution.
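
For anyone reading along, a minimal version of that invocation-time check, building on the proxy sketch upthread (the /health route and its semantics are illustrative):

```typescript
// Proxy side: register before the catch-all handler. A passing check means
// "the proxy can still mint upstream tokens", not just "the process is up".
app.get("/health", async (_req, res) => {
  try {
    await larkToken(); // from the caching sketch above; throws on failure
    res.status(200).json({ ok: true });
  } catch (err) {
    res.status(503).json({ ok: false, detail: String(err) });
  }
});

// Workflow side: the first node calls /health and aborts the run on
// failure, so a dead proxy surfaces before any business step executes.
async function preflight(proxyBase: string, proxyToken: string): Promise<void> {
  const res = await fetch(`${proxyBase}/health`, {
    headers: { authorization: `Bearer ${proxyToken}` },
  });
  if (!res.ok) throw new Error(`proxy unhealthy: HTTP ${res.status}`);
}
```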