Disable commercial LLM models

Our organization operates in a secured enterprise environment and plans to pilot a self-hosted instance of n8n to orchestrate internal AI agent workflows.
We need to ensure users are restricted from connecting to any external or commercial LLMs (e.g., OpenAI, Anthropic, etc.) and can only connect to our internal Agile-AI LLM hosted within our network.
Is there a supported way to lock down available AI model integrations or restrict outbound API calls so users can only access approved, internally hosted models?

Hi @Chad.Collins

Great question. We faced a similar requirement on a project, and the approach below might help address it.

Restrict Outbound Network Access

  • Block all internet access from the n8n host/container at the firewall or proxy level.

  • Only allow traffic to your internal Agile-AI endpoint.

  • This ensures that even if someone tries to use an HTTP Request node, they can’t reach external APIs.
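As a concrete sketch of that egress lockdown, here is a default-deny iptables policy. The addresses and port are assumptions (10.0.20.15:8443 stands in for your internal Agile-AI endpoint, 10.0.0.53 for an internal resolver); adapt to your firewall of choice.

```shell
# Default-deny all outbound traffic from the n8n host
iptables -P OUTPUT DROP
# Allow loopback and return traffic for established connections
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow internal DNS (assumed resolver: 10.0.0.53)
iptables -A OUTPUT -p udp -d 10.0.0.53 --dport 53 -j ACCEPT
# Allow ONLY the internal Agile-AI API (assumed: 10.0.20.15:8443)
iptables -A OUTPUT -p tcp -d 10.0.20.15 --dport 8443 -j ACCEPT
```

With this in place, any node that tries to call an external API (including a manually added HTTP Request node) simply times out.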

Disable or Limit Use of External LLM Nodes

  • The OpenAI, Anthropic, and similar LLM nodes are thin API clients, so the network-level block above is what actually enforces the restriction: with egress to api.openai.com, anthropic.com, etc. blocked, those nodes' credential tests and calls will simply fail.

  • In our experience this holds even if someone adds an HTTP Request node manually and points it at an external provider.

Use environment variables to control UI and behavior (limited scope)
While there is no single switch that hides every external integration, n8n does support excluding specific built-in nodes at startup, which you could combine with the network block. This only affects what appears in the editor, so treat it as defense in depth, not the primary control:
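One candidate is the NODES_EXCLUDE environment variable, which (per the n8n docs) takes a JSON array of node names to prevent from loading. A docker-based sketch; the node names below are illustrative, so verify the exact names against your instance before relying on this:

```shell
# Start a self-hosted n8n instance with the OpenAI-related nodes excluded.
# NODES_EXCLUDE expects a JSON array of node names; these are examples.
docker run -d --name n8n -p 5678:5678 \
  -e NODES_EXCLUDE='["n8n-nodes-base.openAi","@n8n/n8n-nodes-langchain.lmChatOpenAi"]' \
  docker.n8n.io/n8nio/n8n
```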

Use Custom Nodes for Agile-AI

  • Create a custom n8n node (or use a templated HTTP Request node) that securely connects to your Agile-AI API.

  • This provides a controlled integration point for internal LLM access without exposing flexible endpoints.
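To illustrate what such a templated node would do under the hood, here is the equivalent call as curl. The hostname, path, auth scheme, and payload shape are all assumptions about the Agile-AI API, not its real contract:

```shell
# Hypothetical request the templated HTTP Request node would issue.
# agile-ai.internal, /v1/chat, and the JSON fields are placeholders.
curl -s https://agile-ai.internal/v1/chat \
  -H "Authorization: Bearer $AGILE_AI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize the incident report", "max_tokens": 256}'
```

Baking the base URL and auth into a custom node (rather than leaving users a free-form HTTP Request node) is what makes this a controlled integration point.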

(Optional) Use Local AI Tools Like Ollama

  • For testing or local workflows, tools like Ollama (self-hosted, offline LLMs) can also be deployed inside your network.
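A minimal sketch of running Ollama inside the network, using its published Docker image and default port; the model name is just an example, and note the one-time `pull` needs registry access (or a pre-baked image) before the egress lockdown applies:

```shell
# Run Ollama as an offline, self-hosted model server (default port 11434)
docker run -d --name ollama -p 11434:11434 ollama/ollama
# One-time model download (requires registry access at this point only)
docker exec ollama ollama pull llama3
# n8n's Ollama node or an HTTP Request node can then target it internally
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "ping", "stream": false}'
```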

Hope this helps! Looking forward to hearing how others are approaching this kind of controlled AI integration in enterprise environments.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.