Integrating with Portkey via Chat Model/Language Model

The idea is:

Portkey is an AI gateway that provides enhanced control, observability, and cost optimization for AI models. Currently, Portkey can be used inside n8n by overriding the Base URL in the OpenAI Chat Model node. However, this process is not straightforward, as it involves configuring a Virtual Key, creating a corresponding Config, and linking it to an API Key. Additionally, n8n displays errors during setup because it tries to validate the API Key against OpenAI’s platform, which can be misleading.
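
For context, here is roughly what that workaround amounts to once the Base URL is overridden: the OpenAI Chat Model node ends up sending standard chat-completion requests to Portkey’s gateway with Portkey-specific headers instead of to api.openai.com. A minimal sketch of the equivalent raw call (the `x-portkey-*` header names reflect my reading of Portkey’s docs, and the key/virtual-key values are placeholders):

```typescript
// Rough equivalent of what the overridden OpenAI Chat Model node sends.
// The x-portkey-* header names reflect my understanding of Portkey's gateway
// and are worth double-checking in the Portkey docs; key values are placeholders.
const response = await fetch("https://api.portkey.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-portkey-api-key": process.env.PORTKEY_API_KEY ?? "", // a Portkey key, not an OpenAI key
    "x-portkey-virtual-key": "openai-virtual-key-slug",     // Virtual Key that stores the real provider key
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello via Portkey" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

This is also why n8n’s credential check complains during setup: the key pasted into the node is a Portkey key, which n8n tries to validate against OpenAI’s platform even though requests are actually routed through Portkey.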

The idea is to integrate Portkey as a native option in n8n’s Chat Model or Language Model nodes, making it easier for users to leverage Portkey’s capabilities without these workarounds.

My use case:

I use Portkey to manage AI model requests efficiently, but setting it up inside n8n required extra steps and debugging due to the way n8n handles OpenAI API validation. Having a native integration would simplify the process and make it more accessible to other users facing similar challenges.

I think it would be beneficial to add this because:

  • It eliminates the confusion caused by n8n’s OpenAI validation when setting up Portkey.
  • It makes it easier for users to switch to Portkey without needing workarounds.
  • Portkey provides useful features like fallback models and cost optimization, which could benefit n8n users.

Any resources to support this?

Are you willing to work on this?

I don’t have the knowledge to help, but I believe that Portkey itself is interested in doing this integration.

Wanted to nudge this with a comment: this tool could become instrumental in troubleshooting and safely using any number of LLMs by streamlining commonly used guardrails. It’s like an enhanced version of OpenRouter, in my opinion.

@souzagaabriel it sounds like you were able to get it working by overriding the OpenAI Chat Model node. I’m trying to do the same but struggling. Can you share any details on how you did it?

Hey everyone! :wave:

I’m from the Portkey team.

We recently released our n8n integration with Portkey:

[n8n - Portkey Docs]

Quick setup: Override the OpenAI node’s base URL to https://api.portkey.ai/v1 and add your Portkey config in the headers. The validation errors are annoying but harmless - just ignore them.
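
For anyone wiring this up outside of (or alongside) n8n, here is roughly the same setup expressed with the OpenAI Node SDK, which mirrors what the overridden node does. Treat it as a sketch: the `x-portkey-config` header and the saved-Config ID format are my understanding of Portkey’s docs, and the config ID shown is a placeholder.

```typescript
import OpenAI from "openai";

// Sketch: point the standard OpenAI client at Portkey's gateway, mirroring
// the base-URL override in the n8n OpenAI Chat Model node. The header name
// and config ID format are assumptions based on Portkey's docs.
const client = new OpenAI({
  baseURL: "https://api.portkey.ai/v1",
  apiKey: process.env.PORTKEY_API_KEY,       // Portkey API key goes where the OpenAI key normally would
  defaultHeaders: {
    "x-portkey-config": "pc-your-config-id", // placeholder ID of a saved Portkey Config (routing, fallbacks, etc.)
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello from n8n via Portkey" }],
});
console.log(completion.choices[0].message.content);
```

In the n8n node, the same three pieces map to the Base URL field, the credential’s API key (your Portkey key), and the extra header carrying the config.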

What you’ll get with Portkey:

  • Automatic fallbacks (never get stuck if one model is down; a sample fallback config is sketched after this list)
  • Real cost tracking across all your AI calls
  • Built-in guardrails to catch hallucinations
  • Switch between 1600+ models without changing your workflows
  • Governance controls for your n8n workflows
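
On the fallback bullet above: the routing logic lives in a Portkey Config rather than in the workflow itself. As a rough illustration only (the field names follow my reading of Portkey’s config schema and should be verified against their docs), a fallback config could look something like this, written here as a TypeScript object:

```typescript
// Hypothetical fallback Config: try the primary provider first, fall back if it errors.
// Field names ("strategy", "targets", "virtual_key") and the virtual key slugs are
// assumptions based on my reading of Portkey's config schema; verify in their docs.
const fallbackConfig = {
  strategy: { mode: "fallback" },
  targets: [
    { virtual_key: "openai-primary" },   // placeholder slug for the primary provider
    { virtual_key: "anthropic-backup" }, // placeholder slug used only if the first target fails
  ],
};

// Saving this as a Config in Portkey gives you an ID that n8n references
// (e.g. via the config header), so workflows never hard-code the fallback chain.
console.log(JSON.stringify(fallbackConfig, null, 2));
```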

The native integration will make all this even better with direct observability in n8n.