I’ve been using n8n for a while (actually rolling it out at scale at my company) and wanted to use my agents in tools like Open WebUI and LibreChat without rebuilding everything. So I wrote a small bridge that makes n8n workflows look like OpenAI models.
Basically, it sits between any OpenAI-compatible client and your n8n webhooks and translates the API format. It handles streaming and non-streaming responses, tracks sessions so my agents remember conversations, and lets me map multiple n8n workflows as different “models”.
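As a rough sketch of what that translation involves (the function and field names here are illustrative, not the bridge’s actual code; n8n chat workflows commonly expect something like `sessionId`/`chatInput`), the bridge reshapes an OpenAI `/v1/chat/completions` request into a webhook payload and wraps the workflow’s text reply back into a completion object:

```python
import time
import uuid

def to_webhook_payload(chat_request: dict) -> dict:
    """Reshape an OpenAI chat completion request into an n8n webhook payload.
    Field names are illustrative, not the bridge's actual schema."""
    messages = chat_request.get("messages", [])
    # Latest user message, scanning the history from the end
    last_user = next(
        (m["content"] for m in reversed(messages) if m["role"] == "user"), ""
    )
    return {
        "sessionId": chat_request.get("user", str(uuid.uuid4())),  # session tracking
        "chatInput": last_user,   # latest user message
        "messages": messages,     # full history, if the workflow wants it
    }

def to_completion_response(model: str, webhook_reply: str) -> dict:
    """Wrap the workflow's text reply in an OpenAI-style chat completion object."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": webhook_reply},
            "finish_reason": "stop",
        }],
    }
```

For streaming, the same reply would instead be chunked into `chat.completion.chunk` server-sent events, but the shape of the mapping is the same.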
Why I built this: instead of building agents and automations from scratch in chat interfaces, I can keep using n8n’s workflow builder for all my logic (agents, tools, memory, whatever) and just point Open WebUI, LibreChat, or any OpenAI-API-compatible tool at it. My n8n workflow gets the messages, does its thing, and sends back responses.
Setup: pretty straightforward - map your n8n webhook URLs to model names in a JSON file, set a bearer token for auth, docker compose up. An example workflow is included.
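Hypothetically, the mapping file could look something like this (model names and webhook URLs are made up, and the exact schema is whatever the repo’s README specifies):

```json
{
  "models": {
    "my-n8n-agent": "https://n8n.example.com/webhook/abc123/chat",
    "support-bot": "https://n8n.example.com/webhook/def456/chat"
  }
}
```

Each key then shows up as a selectable “model” in the client, and requests to it are forwarded to the mapped webhook.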
If you run into issues, enable LOG_REQUESTS=true to see what’s happening. Not trying to replace anything, just found this useful for my homelab and figured others might want it too.
Can you elaborate on what “on top of LiteLLM” means? At one point I had the bridge proxied via LiteLLM into Open WebUI:
Open WebUI → LiteLLM → n8n-openai-bridge → n8n
Then I thought: why not just set an alternative endpoint for the OpenAI connection in Open WebUI and skip the LiteLLM middleman?
Is that what you have been referring to? If not, happy to talk further.
Update: I never used the advanced features of LiteLLM, just the standard distribution that comes with an Open WebUI starter kit I found online. It was the container image spec for Docker Compose bundled with Open WebUI and a LiteLLM config with Anthropic and OpenAI examples.
Got it. Tell me if I get this wrong, but a custom integration is not needed as long as the OpenAI API specification is sufficient.
Are there LiteLLM features that would require more than what OpenAI-compatible APIs offer and that have matching functionality on the n8n side? If so, I think it would not be a challenge to have the bridge support multiple “dialects”.
Hey, thank you for sharing. I tried the bridge and it works well, both in Open WebUI and LiteLLM!
I’m wondering if this could be adapted to work directly in n8n rather than as an external bridge.
Maybe through a custom Webhook node integration?
Someone had this exact same idea here: OpenAI-Compatible Webhook Node
Hello @ElhiK, thank you for testing it and for the feedback. Is there anything you are missing or that isn’t working as expected?
I think a feature like this could be part of n8n itself. Essentially it is a dialect for any webhook node, and with the publicly hosted and embedded chat interfaces there are already non-traditional webhook use cases in n8n’s core.
I am skeptical, though, whether the n8n folks want this, because:
there are two specs, the Chat Completions API and the Responses API; for proper support you eventually also want to support the Responses API (my bridge does not yet) → I think the n8n team already deals with this, as their Agent nodes also support the new API. It just means more features in n8n need to support multiple approaches, which is maintenance effort
there are even more expectations for things to work out of the box, which need to be managed so people are educated well enough not to open support issues. Today, OpenAI models behave like single-agent models towards the user; it is a 1:1 relationship. With n8n behind an OpenAI-compatible endpoint, you can put complex multi-agent setups that have fewer, more, or the same features compared to what OpenAI supports with their agents. So you really need to know that making your workflow accessible via an OpenAI wrapper does not magically give it all the OpenAI features, and it is up to the workflow builder to educate their end users (or themselves).
Having said that, we use it at roadsurfer with a pilot group of 100 people. We use LibreChat and multiple workflows that are multi-agentic. It works well.
Thanks @sveneisenschmidt for the detailed answer. Maybe this could just be a community node rather than an official n8n one.
Since you asked about feedback, the one issue I currently have is dealing with file attachments in a chat (this is not specific to your bridge). Open WebUI sends files in a base64-encoded URL field, and in many cases this triggers a token error. You have to convert it back to a binary file in your workflow before sending it to an agent.
I had the same issue before with n8n pipe functions. Do you think this could be handled by your bridge?
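For context, the conversion described above boils down to decoding a base64 data URL back into bytes. A minimal sketch outside n8n, just to illustrate the format Open WebUI sends for attachments (the helper name is made up):

```python
import base64
import re

def data_url_to_bytes(data_url: str) -> tuple[str, bytes]:
    """Split a base64 data URL (e.g. 'data:image/png;base64,iVBOR...')
    into its MIME type and the decoded binary content."""
    match = re.match(r"data:([^;]+);base64,(.+)", data_url, re.DOTALL)
    if not match:
        raise ValueError("not a base64 data URL")
    mime_type, b64_payload = match.groups()
    return mime_type, base64.b64decode(b64_payload)
```

Passing the raw base64 string into an LLM prompt is also what blows up the token count, so decoding it back to binary (or having the bridge do it) avoids both problems.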
Hi @ElhiK, I released a new version that includes a proper solution for file uploads. To keep backwards compatibility you need to set a new env var, FILE_UPLOAD_MODE=extract-multipart, but then n8n is able to pick up the binary data by default. No special handling inside the workflow is required.
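For anyone else picking this up, enabling it looks roughly like this in a compose file (the service name and image are placeholders; only the env var comes from the release notes above):

```yaml
services:
  n8n-openai-bridge:
    image: n8n-openai-bridge:latest   # placeholder image name
    environment:
      - FILE_UPLOAD_MODE=extract-multipart  # opt-in: forward uploads as multipart binary data
```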