I’d like to propose adding mutual TLS (mTLS) client certificate support to n8n’s
existing OpenAI credential type, and I’m volunteering to implement the PR if the
team thinks it’s worth pursuing.
Two concrete use cases driving this
1. OpenAI’s native Enterprise mTLS Beta
OpenAI currently operates a Mutual TLS Beta Program for enterprise API customers.
When enabled, API requests must go to https://mtls.api.openai.com and present
both a valid API key and a signed X.509 client certificate; requests
without the certificate are rejected at the network layer.
Enterprise orgs activate this via the Certificates API
(POST /v1/organization/certificates), and once active it applies to all API
traffic for that project. This is an increasingly common enterprise security
requirement alongside OpenAI’s EU Data Residency offering.
Today, these users cannot use n8n’s OpenAI nodes at all. There is nowhere
in the credential to supply a client certificate.
2. Self-hosted AI behind mTLS auth proxies
Many enterprises run OpenAI-compatible inference (Ollama, vLLM, LiteLLM, Azure
OpenAI via API Management) behind a reverse proxy (nginx, Envoy, Istio) that
enforces mTLS at the perimeter. The AI service is not reachable at all without
presenting a valid client certificate signed by the organisation’s CA.
This pattern is standard in financial services, defence, and healthcare where
network-layer authentication is required in addition to application-layer auth.
What the change would look like
The change is additive — no breaking changes to existing credentials or nodes.
To OpenAiApi.credentials.ts: add optional fields (all password: true):
- CA Certificate: PEM-encoded CA certificate for server verification
- Client Certificate: PEM-encoded client certificate
- Client Key: PEM-encoded client private key
- Passphrase: optional passphrase for the key
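Sketched against the shape of n8n's credential property entries, the additions might look like the following. Field names, display labels, and descriptions are illustrative, not a final API:

```typescript
// Sketch of the four optional fields that could be appended to the
// properties array in OpenAiApi.credentials.ts. All are masked in the UI
// via typeOptions.password, matching how n8n hides sensitive values.
const mtlsFields = [
	{
		displayName: 'CA Certificate',
		name: 'caCertificate',
		type: 'string',
		typeOptions: { password: true },
		default: '',
		description: 'PEM-encoded CA certificate used to verify the server',
	},
	{
		displayName: 'Client Certificate',
		name: 'clientCertificate',
		type: 'string',
		typeOptions: { password: true },
		default: '',
		description: 'PEM-encoded client certificate presented during the TLS handshake',
	},
	{
		displayName: 'Client Key',
		name: 'clientKey',
		type: 'string',
		typeOptions: { password: true },
		default: '',
		description: 'PEM-encoded private key matching the client certificate',
	},
	{
		displayName: 'Passphrase',
		name: 'clientKeyPassphrase',
		type: 'string',
		typeOptions: { password: true },
		default: '',
		description: 'Optional passphrase for an encrypted client key',
	},
];
```

Because every field defaults to empty, existing credentials keep working untouched.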
To the LangChain node layer (ChatOpenAI, OpenAIEmbeddings): when cert
fields are present, construct an https.Agent and pass it via
configuration.httpAgent in ClientOptions. When absent, behaviour is
identical to today.
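A minimal sketch of that gating logic, using hypothetical credential field names (clientCertificate, caCertificate, etc. are my placeholders, not the final names):

```typescript
import { Agent } from 'node:https';

// Hypothetical shape of the decrypted credential values.
interface MtlsCredential {
	caCertificate?: string;
	clientCertificate?: string;
	clientKey?: string;
	clientKeyPassphrase?: string;
}

// Build an https.Agent only when cert material is present; returning
// undefined keeps today's behaviour (the SDK uses its default agent).
function buildHttpsAgent(cred: MtlsCredential): Agent | undefined {
	if (!cred.clientCertificate && !cred.caCertificate) return undefined;
	return new Agent({
		ca: cred.caCertificate || undefined,
		cert: cred.clientCertificate || undefined,
		key: cred.clientKey || undefined,
		passphrase: cred.clientKeyPassphrase || undefined,
	});
}

// Usage sketch at the node layer:
//   new ChatOpenAI({ ..., configuration: { httpAgent: buildHttpsAgent(cred) } })
```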
The credential test would need to use native https.request() with the agent
attached (the standard ICredentialTestRequest cannot attach a custom agent;
this is a known limitation, already handled in the community node reference
implementation).
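A hedged sketch of what that testedBy-style probe could look like; the /v1/models endpoint, status handling, and function signature are illustrative choices, not the final implementation:

```typescript
import * as https from 'node:https';

// Probe the API with the mTLS agent attached. Resolves with the HTTP
// status code; rejects on network-layer failures (e.g. a rejected
// client certificate closing the connection before any HTTP response).
function testCredential(apiKey: string, agent: https.Agent, baseUrl: string): Promise<number> {
	return new Promise((resolve, reject) => {
		const req = https.request(
			`${baseUrl}/v1/models`,
			{ method: 'GET', agent, headers: { Authorization: `Bearer ${apiKey}` } },
			(res) => {
				res.resume(); // drain the body so the socket is released
				resolve(res.statusCode ?? 0);
			},
		);
		req.on('error', reject);
		req.end();
	});
}
```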
Reference implementation
I built and published a working community node that implements exactly this
pattern: n8n-nodes-mtls-openai
(source).
It has been tested end-to-end against:
- An nginx mTLS reverse proxy fronting Ollama (ssl_client_verify="SUCCESS" confirmed in the proxy logs)
- The PEM normalisation edge case (the credentials UI strips newlines from multi-line strings)
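For context, that PEM normalisation workaround can be sketched as follows; the function name and regex details are my own illustration, not the community node's actual code:

```typescript
// The credentials UI can collapse a pasted multi-line PEM into one line,
// which Node's TLS layer then rejects. This restores line breaks around
// the BEGIN/END markers and rewraps the base64 body at 64 characters.
function normalizePem(input: string): string {
	const flat = input.trim();
	if (flat.includes('\n')) return flat; // already multi-line, leave untouched
	return flat.replace(
		/(-----BEGIN [A-Z ]+-----)\s*([A-Za-z0-9+/=\s]+?)\s*(-----END [A-Z ]+-----)/g,
		(_m: string, begin: string, body: string, end: string) => {
			const b64 = body.replace(/\s+/g, '');
			const wrapped = b64.match(/.{1,64}/g)?.join('\n') ?? '';
			return `${begin}\n${wrapped}\n${end}`;
		},
	);
}
```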
The community node cannot be verified under the current policy because it
requires the https module and a peer dependency, which is exactly why this
belongs in core rather than in a community node.
Happy to implement
I’m familiar with n8n’s node development patterns and the specific gotchas
involved (PEM normalisation, testedBy vs ICredentialTestRequest, agent
injection into LangChain client options). If the team thinks this is worth
pursuing, I’m happy to open a PR following whatever guidelines you’d prefer.
Relevant questions I’d want guidance on before starting:
- Should cert fields live on OpenAiApi directly, or on a new credential type that extends it?
- Is there an n8n-side abstraction for custom https.Agent injection that I should use instead of constructing one directly?
- Are there existing tests for credential types I should model against?
Thanks for considering it.