Chat and LLM Support for OpenAI Proxy Server?

The idea is:

I would love to create an n8n workflow that uses my OpenAI Proxy Server, which exposes the same API as standard OpenAI but at a different URL/endpoint.

In Flowise, this functionality is supported in their ChatLocalAI node type.

My use case:

If I’m going to use n8n workflows that leverage LLMs, I need load-balancing, failover, and recovery support. Rather than bake that logic into every n8n workflow, I’d rather have it centralized in the OpenAI Proxy Server for consistency.

I think it would be beneficial to add this because:

It allows users to create n8n workflows that use LLMs in a more resilient manner.

Any resources to support this?

Are you willing to work on this?

Yes. Happy to contribute testing/verification to this.

Update: This is already supported in n8n. Just use the standard OpenAI Chat Model and point it to your LiteLLM Proxy server.
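For anyone who wants to verify this outside n8n first, here’s a minimal sketch with the OpenAI Python SDK, assuming a LiteLLM proxy on its default port 4000 and a virtual key created on that proxy (both values are placeholders). The same base URL and API key go into the OpenAI Chat Model credential in n8n:

```python
from openai import OpenAI

# Assumptions: a LiteLLM proxy listening on its default port 4000, and a
# virtual key "sk-my-litellm-key" created on that proxy. Both values are
# placeholders for your own deployment.
client = OpenAI(
    base_url="http://localhost:4000/v1",
    api_key="sk-my-litellm-key",
)

response = client.chat.completions.create(
    model="gpt-4o",  # any model name your proxy config routes
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(response.choices[0].message.content)
```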


Hi, I’ve tried to do that, but how should I authenticate? LiteLLM uses an Authorization header, and the OpenAI Chat Model only allows credentials in the OpenAI format.

LiteLLM supports the same credential format as OpenAI: the API key in an OpenAI credential is sent as a Bearer token in the Authorization header, which is exactly what LiteLLM expects.
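Concretely, here’s a raw-HTTP sketch of what the OpenAI credential format produces on the wire (URL and key are placeholders):

```python
import requests

# The OpenAI credential's API key ends up on the wire as a standard
# bearer token, which is the Authorization header LiteLLM checks.
resp = requests.post(
    "http://localhost:4000/v1/chat/completions",
    headers={"Authorization": "Bearer sk-my-litellm-key"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
print(resp.status_code, resp.json())
```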

Although LiteLLM is OpenAI-compatible, I can’t find a way to add metadata to the OpenAI node for LiteLLM and Langfuse logging.

For example, we need a LiteLLM node so that we could pass extra metadata that we want logged to a self-hosted LiteLLM proxy and Langfuse, as in the sketch below.
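Until a dedicated node exists, this is the shape of what we’d want to express. With the OpenAI Python SDK, the `extra_body` parameter injects additional JSON fields into the request body, and LiteLLM forwards a top-level `metadata` object to its logging callbacks such as Langfuse. The metadata field names below are illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-my-litellm-key")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tag this call in Langfuse"}],
    # extra_body merges these fields into the JSON request body; LiteLLM
    # reads "metadata" and passes it along to its configured loggers.
    extra_body={
        "metadata": {
            "generation_name": "n8n-workflow-run",  # illustrative field names
            "trace_user_id": "user-123",
        }
    },
)
```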

Agreed. @bartv, is there any chance the devs working on the LLM nodes could add the ability to pass custom metadata through to the LLM provider? More and more LLM vendors are offering unique features that can only be enabled if we have the ability to pass custom metadata through from n8n.

Any chance we could get an update on whether this will be possible, @bartv? I am using LiteLLM similarly to Google ADK, hosting an OpenAI-style /chat/completions FastAPI service behind Google Apigee. Apigee requires me to pass certain headers for authentication when using the OpenAI SDK to interact with it. Can we add the ability to pass metadata or headers to the OpenAI model node?
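For what it’s worth, at the SDK level this is possible today via `default_headers`, which the OpenAI Python client attaches to every request; this is exactly the knob missing from the n8n OpenAI model node. The gateway URL, key, and header name/value below are placeholders for whatever your Apigee setup requires:

```python
from openai import OpenAI

# Sketch only: gateway URL, key, and header name/value are placeholders.
client = OpenAI(
    base_url="https://my-gateway.example.com/v1",
    api_key="sk-my-litellm-key",
    default_headers={"x-apikey": "my-apigee-credential"},  # Apigee auth header
)
```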