mTLS client certificate support for OpenAI credential (covers OpenAI Enterprise mTLS Beta + self-hosted auth proxies)

I’d like to propose adding mutual TLS (mTLS) client certificate support to n8n’s
existing OpenAI credential type, and I’m volunteering to implement the PR if the
team thinks it’s worth pursuing.

Two concrete use cases driving this

1. OpenAI’s native Enterprise mTLS Beta

OpenAI currently operates a Mutual TLS Beta Program for enterprise API customers.
When enabled, API requests must go to https://mtls.api.openai.com and present
both a valid API key and a signed X.509 client certificate — requests
without the cert are rejected at the network layer.

Enterprise orgs activate this via the Certificates API
(POST /v1/organization/certificates), and once active it applies to all API
traffic for that project. This is an increasingly common enterprise security
requirement alongside OpenAI’s EU Data Residency offering.

Today, these users cannot use n8n’s OpenAI nodes at all. There is nowhere
in the credential to supply a client certificate.

2. Self-hosted AI behind mTLS auth proxies

Many enterprises run OpenAI-compatible inference (Ollama, vLLM, LiteLLM, Azure
OpenAI via API Management) behind a reverse proxy (nginx, Envoy, Istio) that
enforces mTLS at the perimeter. The AI service is not reachable at all without
presenting a valid client certificate signed by the organisation’s CA.

This pattern is standard in financial services, defence, and healthcare where
network-layer authentication is required in addition to application-layer auth.

What the change would look like

The change is additive — no breaking changes to existing credentials or nodes.

To OpenAiApi.credentials.ts: add optional fields (all password: true):

  • CA Certificate — PEM-encoded CA cert for server verification
  • Client Certificate — PEM-encoded client cert
  • Client Key — PEM-encoded client private key
  • Passphrase — optional key passphrase
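As a rough sketch, the four new properties might look like this. The `CredentialField` interface below is a minimal stand-in for n8n's `INodeProperties` type, and the `name` values (`caCert`, `clientCert`, etc.) are illustrative choices, not settled API:

```typescript
// Minimal stand-in for n8n's INodeProperties type -- illustrative only.
interface CredentialField {
  displayName: string;
  name: string;
  type: "string";
  typeOptions: { password: boolean };
  default: string;
  description: string;
}

// Hypothetical field names; the real PR would follow n8n conventions.
const mtlsProperties: CredentialField[] = [
  {
    displayName: "CA Certificate",
    name: "caCert",
    type: "string",
    typeOptions: { password: true },
    default: "",
    description: "PEM-encoded CA certificate used to verify the server",
  },
  {
    displayName: "Client Certificate",
    name: "clientCert",
    type: "string",
    typeOptions: { password: true },
    default: "",
    description: "PEM-encoded client certificate presented in the TLS handshake",
  },
  {
    displayName: "Client Key",
    name: "clientKey",
    type: "string",
    typeOptions: { password: true },
    default: "",
    description: "PEM-encoded private key matching the client certificate",
  },
  {
    displayName: "Passphrase",
    name: "passphrase",
    type: "string",
    typeOptions: { password: true },
    default: "",
    description: "Optional passphrase for the client key",
  },
];
```

All four are optional and marked `password: true`, so existing credentials are unaffected and the values are masked in the UI.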

To the LangChain node layer (ChatOpenAI, OpenAIEmbeddings): when cert
fields are present, construct an https.Agent and pass it via
configuration.httpAgent in ClientOptions. When absent, behaviour is
identical to today.
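A minimal sketch of that agent construction, assuming hypothetical field names (`caCert`, `clientCert`, `clientKey`, `passphrase`) rather than whatever the final credential keys end up being:

```typescript
import { Agent } from "node:https";

// Hypothetical credential field names -- the actual keys are TBD in the PR.
interface MtlsFields {
  caCert?: string;     // PEM CA certificate for server verification
  clientCert?: string; // PEM client certificate
  clientKey?: string;  // PEM client private key
  passphrase?: string; // optional key passphrase
}

// Build an https.Agent only when client cert material is present, so the
// default (no-mTLS) code path is left completely untouched.
export function buildMtlsAgent(fields: MtlsFields): Agent | undefined {
  if (!fields.clientCert || !fields.clientKey) return undefined;
  return new Agent({
    ca: fields.caCert || undefined,
    cert: fields.clientCert,
    key: fields.clientKey,
    passphrase: fields.passphrase || undefined,
    keepAlive: true,
  });
}
```

The resulting agent (when defined) would then be handed to the LangChain client via `configuration: { httpAgent: agent }`, as described above; when `buildMtlsAgent` returns `undefined`, no configuration override is applied.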

The credential test would need to use native https.request() with the agent
attached (standard ICredentialTestRequest cannot attach a custom agent —
this is a known limitation, already handled in the community node reference impl).
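A hedged sketch of what that credential test could look like. The endpoint (`/v1/models`) and bearer-token header follow the public OpenAI API; the function name and shape are my own, not n8n's `testedBy` signature:

```typescript
import { Agent, request } from "node:https";

// Sketch: probe the API with node's https.request directly so the custom
// mTLS agent can be attached (ICredentialTestRequest cannot carry one).
// Resolves true on HTTP 200, false otherwise; rejects on network errors.
export function testOpenAiCredential(
  apiKey: string,
  agent: Agent,
  baseUrl = "https://mtls.api.openai.com",
): Promise<boolean> {
  return new Promise((resolve, reject) => {
    const req = request(
      `${baseUrl}/v1/models`,
      {
        method: "GET",
        agent, // the mTLS agent built from the credential's cert fields
        headers: { Authorization: `Bearer ${apiKey}` },
      },
      (res) => {
        res.resume(); // drain the body; only the status code matters here
        resolve(res.statusCode === 200);
      },
    );
    req.on("error", reject);
    req.end();
  });
}
```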

Reference implementation

I built and published a working community node that implements exactly this
pattern: n8n-nodes-mtls-openai
(source).

It has been tested end-to-end against:

  • nginx mTLS reverse proxy fronting Ollama (ssl_client_verify="SUCCESS"
    confirmed in logs)
  • The PEM normalisation edge case (credentials UI strips newlines from
    multi-line strings)
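The PEM normalisation fix can be sketched roughly as follows. This is my own illustrative helper (the community node's actual code may differ): it re-wraps a PEM blob whose internal newlines were collapsed by the credentials UI back into the line structure `node:tls` expects:

```typescript
// Re-insert the newlines a PEM block needs after the credentials UI has
// collapsed it to a single line. Illustrative helper, not n8n core code.
export function normalizePem(raw: string): string {
  const collapsed = raw.replace(/\r?\n/g, " ").trim();
  // Match each "-----BEGIN X----- <base64> -----END X-----" block.
  const blockRe =
    /-----BEGIN ([A-Z0-9 ]+)-----\s*([A-Za-z0-9+/=\s]+?)\s*-----END \1-----/g;
  return collapsed.replace(blockRe, (_m, label: string, body: string) => {
    const b64 = body.replace(/\s+/g, "");
    // PEM convention: base64 payload wrapped at 64 characters per line.
    const wrapped = b64.match(/.{1,64}/g)?.join("\n") ?? "";
    return `-----BEGIN ${label}-----\n${wrapped}\n-----END ${label}-----`;
  });
}
```

Running it on an already well-formed PEM is a no-op in effect, so it is safe to apply unconditionally before handing the material to `https.Agent`.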

The community node cannot be verified under current policy because it requires
Node's https module and a peer dependency — which is exactly why this belongs
in core rather than in a community node.

Happy to implement

I’m familiar with n8n’s node development patterns and the specific gotchas
involved (PEM normalisation, testedBy vs ICredentialTestRequest, agent
injection into LangChain client options). If the team thinks this is worth
pursuing, I’m happy to open a PR following whatever guidelines you’d like.

Relevant questions I’d want guidance on before starting:

  • Should cert fields live on OpenAiApi directly, or on a new credential
    type that extends it?
  • Is there an n8n-side abstraction for custom https.Agent injection that
    I should use instead of constructing one directly?
  • Are there existing tests for credential types I should model against?

Thanks for considering it.

This is an exceptionally well-structured feature request. The two use cases (OpenAI Enterprise mTLS + self-hosted proxies) cover a real pain point for enterprise users, and your community node reference implementation validates the entire approach. The PEM normalisation edge case handling shows deep attention to detail. Given that you’ve already proven it works and you’re volunteering to implement, this is a strong candidate for a core contribution.

PR is up: https://github.com/n8n-io/n8n/pull/27309

Draft for now while CI settles, but the implementation is complete — all three OpenAI code paths covered (Chat Model, Embeddings, vendor node), tests included. Happy to address any reviewer feedback on the approach.

@Benjamin_Behrens PR #27309 has been open since 20 March with only a bot review, so I wanted to flag it for triage.

The reason this needs to be in core rather than a community node is the peer dependency limitation on community nodes.

The PR is ready, happy to rebase against current master and iterate on any feedback.