Enforce Company-Wide LLM Proxy and Log Prompt/Response for All Users

The idea is:

We’re requesting a set of enterprise features to securely manage LLM usage (OpenAI, Azure, Claude) in n8n.

My use case:

  • Force LLM Proxy via baseURL Override
    Enforce a global baseURL (e.g. https://llmproxy.company.com/v1) for all OpenAI/Azure credentials, and prevent bypass by blocking users from entering their own base URL.
  • Credential Creation Restrictions
    Only allow admins to create LLM credentials. Users must use centrally managed, proxy-enforced shared credentials.
  • Prompt & Response Logging
    Centrally log the prompt and LLM response for every AI call (similar to Lakera Guard or Lasso), including user ID, workflow ID, and timestamp.
  • Force Security Node in Workflows
    Ability to enforce that every workflow using an LLM includes a predefined “Security Check” node before model interaction.
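To make the first and third points concrete, here is a minimal sketch of what proxy-side enforcement and audit logging could look like. This is purely illustrative: `ENFORCED_BASE_URL`, `isAllowedBaseUrl`, `LlmAuditRecord`, and `buildAuditRecord` are hypothetical names, not part of any existing n8n API, and the audit-record fields simply mirror the ones requested above.

```typescript
// Illustrative only -- all names and field choices here are assumptions,
// not an existing n8n interface.

// The single company-wide base URL from the request above.
const ENFORCED_BASE_URL = "https://llmproxy.company.com/v1";

// A credential's baseURL is acceptable only if it targets the company proxy.
function isAllowedBaseUrl(baseUrl: string): boolean {
  return (
    baseUrl === ENFORCED_BASE_URL ||
    baseUrl.startsWith(ENFORCED_BASE_URL + "/")
  );
}

// Shape of one centrally stored audit entry for an LLM call.
interface LlmAuditRecord {
  userId: string;
  workflowId: string;
  timestamp: string; // ISO 8601
  prompt: string;
  response: string;
}

// Build the audit record the proxy would persist for every AI call.
function buildAuditRecord(
  userId: string,
  workflowId: string,
  prompt: string,
  response: string,
): LlmAuditRecord {
  return {
    userId,
    workflowId,
    timestamp: new Date().toISOString(),
    prompt,
    response,
  };
}
```

With something like this in place, a credential pointing at `https://api.openai.com/v1` would be rejected at save time, while every call routed through the proxy produces one `LlmAuditRecord` for the central log.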

I think it would be beneficial to add this because:

These controls are essential for enterprise-grade AI governance, especially in environments where data loss prevention, compliance, and misuse detection are top priorities.
They would allow secure LLM adoption without relying on fragile workarounds.

Any resources to support this?

  • n8n Enterprise Docs (RBAC, Credentials)

Are you willing to work on this?

Yes, open to contributing ideas, test feedback, and design input.