The idea is:
Portkey is an AI gateway that provides enhanced control, observability, and cost optimization for AI models. Currently, Portkey can be used inside n8n by overriding the Base URL in the OpenAI Chat Model node, but this process is not straightforward: it involves configuring a Virtual Key, creating a corresponding Config, and linking it to an API Key. On top of that, n8n shows errors during setup because it tries to validate the API Key against OpenAI's platform, which can mislead users into thinking their Portkey setup is broken.
The idea is to integrate Portkey as a native option in n8n’s Chat Model or Language Model nodes, making it easier for users to leverage Portkey’s capabilities without these workarounds.
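For context, the current workaround is roughly equivalent to what the sketch below does: pointing an OpenAI-compatible client at Portkey's gateway and passing the Portkey credentials as headers. This is a minimal sketch, assuming Portkey's OpenAI-compatible endpoint at https://api.portkey.ai/v1 and the x-portkey-api-key / x-portkey-virtual-key headers described in Portkey's docs; the environment variable names and the model name are placeholders, not anything n8n or Portkey prescribes.

```typescript
import OpenAI from "openai";

// Placeholder env var names; substitute your own Portkey credentials.
const client = new OpenAI({
  // Portkey's OpenAI-compatible gateway endpoint (assumption based on Portkey's docs).
  baseURL: "https://api.portkey.ai/v1",
  // The OpenAI SDK requires an apiKey; with a Virtual Key the real provider
  // key is stored in Portkey, so a placeholder value is enough here.
  apiKey: "placeholder",
  defaultHeaders: {
    "x-portkey-api-key": process.env.PORTKEY_API_KEY ?? "",
    "x-portkey-virtual-key": process.env.PORTKEY_VIRTUAL_KEY ?? "",
  },
});

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; actual routing is controlled by the Virtual Key / Config
    messages: [{ role: "user", content: "Hello from n8n via Portkey" }],
  });
  console.log(response.choices[0].message.content);
}

main().catch(console.error);
```

A native Portkey option in the Chat Model node could expose the gateway URL and these headers as proper credential fields, instead of relying on the Base URL override and OpenAI's credential validation.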
My use case:
I use Portkey to manage AI model requests efficiently, but setting it up inside n8n required extra steps and debugging due to the way n8n handles OpenAI API validation. Having a native integration would simplify the process and make it more accessible to other users facing similar challenges.
I think it would be beneficial to add this because:
- It eliminates the confusion caused by n8n’s OpenAI validation when setting up Portkey.
- It makes it easier for users to switch to Portkey without needing workarounds.
- Portkey provides useful features like fallback models and cost optimization, which could benefit n8n users.
Any resources to support this?
Are you willing to work on this?
I don’t have the technical knowledge to build this myself, but I believe Portkey itself would be interested in working on this integration.