Chat and LLM Support for OpenAI Proxy Server?

The idea is:

I would love to create an n8n workflow that uses my OpenAI Proxy Server, which supports the same API as standard OpenAI but is reachable at a different URL/endpoint.

In Flowise, this functionality is supported through the ChatLocalAI node type.

My use case:

If I’m going to use n8n workflows that leverage LLMs, I need load-balancing, failover, and recovery support. Rather than bake that logic into every n8n workflow, I would prefer to centralize it in the OpenAI Proxy Server for consistency.
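For context, here is a minimal sketch of the kind of routing a proxy like LiteLLM can centralize: one public model name mapped to several backing deployments, with retries handled by the router rather than by each workflow. The model names, endpoints, and keys are placeholders, and this uses the LiteLLM Python Router directly; the proxy server expresses the same model_list idea in its config file.

```python
# Sketch only: placeholder endpoints/keys, not a working deployment.
from litellm import Router

model_list = [
    {
        # Both deployments answer to the same public name, so a caller
        # (e.g. an n8n workflow) never needs to know which backend served it.
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "azure/my-gpt-4o-deployment",                 # placeholder
            "api_base": "https://example-east.openai.azure.com/",  # placeholder
            "api_key": "AZURE_KEY_EAST",                           # placeholder
        },
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "gpt-4o",
            "api_key": "OPENAI_KEY",                               # placeholder
        },
    },
]

# Requests are spread across the deployments, and failed calls are retried.
# This is exactly the failover/recovery logic I want kept out of workflows.
router = Router(model_list=model_list, num_retries=2)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```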

I think it would be beneficial to add this because:

It allows users to create n8n workflows that use LLMs in a more resilient manner.

Any resources to support this?

Are you willing to work on this?

Yes. I’m happy to contribute testing and verification.

Update: This is already supported in n8n. Just use the standard OpenAI Chat Model node and point it at your LiteLLM Proxy server.
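For anyone who wants to verify the proxy outside of n8n first, the node configuration amounts to a standard OpenAI client with its base URL overridden to point at the proxy. The URL, key, and model name below are assumptions (a local LiteLLM proxy commonly listens on port 4000); in n8n, the equivalent is setting the Base URL in the OpenAI credentials used by the Chat Model node.

```python
# Sketch only: URL, key, and model name are assumptions for a local LiteLLM proxy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed LiteLLM proxy endpoint
    api_key="sk-anything",                # proxy key, if one is configured
)

# Same chat.completions call as against api.openai.com; only the endpoint differs.
response = client.chat.completions.create(
    model="gpt-4o",  # must match a model_name exposed by the proxy
    messages=[{"role": "user", "content": "Hello via the proxy"}],
)
print(response.choices[0].message.content)
```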