Made an OpenAI-compatible bridge for n8n workflows

I’ve been using n8n for a while, actually rolling it out at scale at my company, and wanted to use my agents in tools like Open WebUI and LibreChat without rebuilding everything. So I wrote a small bridge that makes n8n workflows look like OpenAI models.

Basically, it sits between any OpenAI-compatible client and your n8n webhooks and translates the API format. It handles streaming and non-streaming responses, tracks sessions so my agents remember conversations, and lets me map multiple n8n workflows as different “models”.
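For anyone curious what that translation roughly involves, here’s a minimal sketch in Python. This is not the bridge’s actual code; the n8n payload field names (`sessionId`, `chatInput`) follow the n8n Chat Trigger convention but should be treated as assumptions:

```python
import time
import uuid


def openai_to_n8n(body: dict, session_id: str) -> dict:
    """Map an OpenAI chat-completions request onto an n8n webhook payload.

    Field names are illustrative assumptions, not the bridge's real schema.
    """
    messages = body.get("messages", [])
    # n8n chat workflows typically want the latest user message as input
    last_user = next(
        (m["content"] for m in reversed(messages) if m["role"] == "user"), ""
    )
    return {"sessionId": session_id, "chatInput": last_user, "messages": messages}


def n8n_to_openai(workflow_output: str, model: str) -> dict:
    """Wrap the workflow's text output in an OpenAI-style completion response."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": workflow_output},
                "finish_reason": "stop",
            }
        ],
    }
```

The real bridge additionally handles streaming (server-sent events) and session bookkeeping, but the request/response mapping above is the core idea.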

Why I built this: instead of building agents and automations from scratch in chat interfaces, I can keep using n8n’s workflow builder for all my logic (agents, tools, memory, whatever) and then just point Open WebUI, LibreChat, or any OpenAI-API-compatible tool at it. My n8n workflow gets the messages, does its thing, and sends back responses.

Setup: pretty straightforward. Map your n8n webhook URLs to model names in a JSON file, set a bearer token for auth, and run docker compose up. An example workflow is included.
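For illustration, the model-mapping file might look something like this (hypothetical model names and webhook URLs; check the repo’s README for the exact schema):

```json
{
  "my-n8n-agent": "https://n8n.example.com/webhook/abc123/chat",
  "support-bot": "https://n8n.example.com/webhook/def456/chat"
}
```

Each key becomes a “model” that OpenAI-compatible clients can select, and the bridge forwards requests for it to the mapped webhook.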

I tested it with:

  • Open WebUI
  • LibreChat
  • OpenAI API curls
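As a stand-in for the curl tests, here is a minimal client call against the bridge sketched in Python with only the standard library. The URL, port, token, and model name are placeholders for your own deployment:

```python
import json
import urllib.request

BRIDGE_URL = "http://localhost:3000/v1/chat/completions"  # placeholder address
TOKEN = "my-bearer-token"                                  # placeholder token

payload = {
    "model": "my-n8n-agent",  # a model name from your mapping file
    "messages": [{"role": "user", "content": "Hello from the bridge"}],
    "stream": False,
}
req = urllib.request.Request(
    BRIDGE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
except OSError:
    # connection refused, timeout, etc.
    print("bridge not reachable; is it running?")
```

Any OpenAI SDK pointed at the bridge’s base URL should work the same way, since it only speaks the standard chat-completions format.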

repo: https://github.com/sveneisenschmidt/n8n-openai-bridge

If you run into issues, enable LOG_REQUESTS=true to see what’s happening. Not trying to replace anything, just found this useful for my homelab and figured others might want it too.

Background: this actually started as a Python function for Open WebUI that I had working, but it felt too cumbersome and wasn’t easy to maintain. The extension approach meant dealing with Open WebUI’s pipeline system and keeping everything in sync. Switching to a standalone bridge made everything simpler: now it’s just a standard API server that works with any OpenAI-compatible client, not just Open WebUI. You can find the Open WebUI pipeline here: https://github.com/sveneisenschmidt/openwebui-n8n-function (a simplified and optimized n8n pipeline for Open WebUI that streams responses from n8n workflows directly into your chats, with session tracking). I prefer the OpenAI bridge.


Hey, thanks a lot for sharing this, really appreciate you putting it out there.

Quick question though: since this is essentially an OpenAI-compatible bridge, did you ever consider building it on top of LiteLLM?

Just wondering if you’ve come across it before, or if there was a specific reason it wouldn’t fit this use case.


Can you elaborate on what “on top of LiteLLM” means? At one point I had the bridge proxied via LiteLLM into Open WebUI:

OpenWebUI → LiteLLM → n8n-openai-bridge → n8n.

Then I thought: why not just set an alternative endpoint on the OpenAI connection in Open WebUI and skip the LiteLLM middleman?

Is that what you were referring to? If not, happy to discuss further.

Update: I never used the advanced features of LiteLLM, just the standard distribution that came with an Open WebUI starter kit I found online. It was the container image spec for Docker Compose bundled with Open WebUI, plus a LiteLLM config with Anthropic and OpenAI examples.


I mean adding the n8n webhook as a LiteLLM custom LLM.

Got it. Tell me if I’m getting this wrong, but a custom integration shouldn’t be needed as long as the OpenAI API specification is sufficient.

Are there LiteLLM features that would require more than what OpenAI-compatible APIs offer and that have matching functionality on the n8n side? If so, I don’t think it would be a challenge to have the bridge support multiple “dialects”.
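For reference, pointing LiteLLM at the bridge as a generic OpenAI-compatible backend is just a config entry. Something like the following should work (model names, URL, port, and token are placeholders for your setup):

```yaml
model_list:
  - model_name: my-n8n-agent
    litellm_params:
      # the openai/ prefix tells LiteLLM to treat this as a generic
      # OpenAI-compatible endpoint rather than the real OpenAI API
      model: openai/my-n8n-agent
      api_base: http://n8n-openai-bridge:3000/v1
      api_key: my-bearer-token
```

No custom LLM class is required for this; LiteLLM’s built-in OpenAI-compatible provider covers it.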

Hey, thank you for sharing. I tried the bridge and it works well, both in Open WebUI and LiteLLM!

I’m wondering if this could be adapted to work directly in n8n rather than as an external bridge.
Maybe through a custom webhook node integration?
Someone had this exact same idea here: OpenAI-Compatible Webhook Node

Hello @ElhiK, thank you for testing it and for the feedback. Is there anything you’re missing or that isn’t working as expected?

I think a feature like this could be part of n8n itself. Essentially it is a dialect for any webhook node, and with the publicly hosted and embedded chat interfaces there are already non-traditional webhook use cases in n8n’s core.

I am skeptical, though, whether the n8n folks want this, because:

  • There are two specs: the Chat Completions API and the Responses API. For proper support you eventually also want the Responses API (my bridge does not support it yet). I think the n8n team already deals with this, as their Agent nodes also support the newer API; it just means more features in n8n need to support multiple approaches, which is maintenance effort.
  • It creates even more expectations for things to work out of the box, which need to be managed so people are well educated and don’t file support issues. Today, OpenAI models behave like single-agent models towards the user; it is a 1:1 relationship. With n8n behind an OpenAI-compatible endpoint, you can put complex multi-agent setups there that have fewer, more, or the same features as what OpenAI supports with their agents. So you really need to know that making your workflow accessible via an OpenAI wrapper does not magically give it all the OpenAI features; it is up to the workflow builder to educate their end users (or themselves).

Having said that, we use it at roadsurfer with a pilot group of 100 people. We use LibreChat and multiple workflows that are multi-agentic. It works well. :slight_smile:

Thanks @sveneisenschmidt for the detailed answer. Maybe this could just be a community node rather than an official n8n one.

Since you asked about feedback, the one issue I have currently is dealing with file attachments in a chat (this is not specific to your bridge). Open WebUI sends the files in a base64-encoded URL field, and in many cases that triggers a token error. You have to convert it back to file binary in your workflow before sending it to an agent.

I had the same issue before with n8n pipe functions. Do you think this could be handled by your bridge?

Hi @ElhiK, I released a new version that includes a proper solution for file uploads. To keep backwards compatibility you need to set a new env var, FILE_UPLOAD_MODE=extract-multipart, but then n8n is able to pick up the binary data by default. No special handling inside the workflow is required.
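For example, an `.env` for the bridge combining this with the logging flag mentioned earlier in the thread (only these two variables appear in this thread; see the configuration docs for the rest):

```shell
# enable the new multipart file-upload handling
FILE_UPLOAD_MODE=extract-multipart
# verbose request logging, useful while debugging
LOG_REQUESTS=true
```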

Documentation is available here: https://github.com/sveneisenschmidt/n8n-openai-bridge/blob/main/docs/CONFIGURATION.md

Let me know if it works for you and feel free to suggest other ideas.


Amazing! Tested it and it works well.
This is really the best integration between n8n and Open WebUI!

@ElhiK Love it. Please spread the news, I am looking for more adopters.