Wouldn’t it be great to just use our most intuitive way to interact with our automations - our voice? And wouldn’t it be great to have a single Personal Assistant, which then talks to other already existing Agents?
That’s where Vagent comes in: a lightweight voice interface that you and your entire organization can use to interact with an AI Supervisor, a personal AI assistant with access to your custom workflows.
Check out the video below to see a demo and understand how it works in detail.
To download the app, read the docs, and get a multi-agent workflow template, visit: https://vagent.io
Thanks for the great work and for keeping it open-source. I’ve been using Vagent, and it’s working perfectly. I was thinking, what if we could take it a step further by integrating OpenAI’s real-time voice agent? This would allow us to attach tools that can be executed in real time via voice commands. For example, we could set up n8n workflows as webhooks and call them directly through the voice agent, bypassing the need for transcription. The agent could trigger workflows simply by voice command, enabling a more seamless interaction. What are your thoughts on this idea?
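To sketch what this could look like: the Realtime API lets you register function tools on the session, and each tool call could be forwarded to an n8n production webhook. Everything below is a minimal, hypothetical sketch — the webhook URL, tool name, and parameter schema are placeholders, not part of Vagent.

```python
import json
import urllib.request

# Hypothetical n8n production webhook URL -- replace with your own
# workflow's webhook path.
N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/send-report"

# Example tool definition in the (flat) Realtime function-tool format;
# a session.update event would carry this in its `tools` list.
SEND_REPORT_TOOL = {
    "type": "function",
    "name": "send_report",
    "description": "Trigger the n8n workflow that emails the weekly report.",
    "parameters": {
        "type": "object",
        "properties": {
            "recipient": {
                "type": "string",
                "description": "Email address to send the report to.",
            },
        },
        "required": ["recipient"],
    },
}

def call_n8n_webhook(arguments: dict) -> str:
    """POST the model-supplied tool arguments to the n8n webhook and
    return the workflow's response body as a string."""
    req = urllib.request.Request(
        N8N_WEBHOOK_URL,
        data=json.dumps(arguments).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

When the voice agent emits a tool call for `send_report`, the app would parse its JSON arguments, pass them to `call_n8n_webhook`, and feed the workflow's response back to the session so the agent can speak the result.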
That’s actually a cool idea to implement the new Realtime API. I just heard a couple of days ago that it was released. It should cut around 2 seconds off the response time.
Calling predefined commands sounds like a step backwards to me. It reminds me of DOS, where you had to memorize all the commands to get things done on a computer. Instead, I designed the app and the multi-agent setup in a way that allows for natural conversations without needing to know the processes running in the background, so even non-technical coworkers can use it easily.
I think switching to low-latency models from Groq could be a solution to speed up the overall response time.
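Since Groq exposes an OpenAI-compatible endpoint, switching the backend could be as small as changing the base URL and model name. The snippet below only builds the request (it does not send it); the model name is an example and may change, so check Groq's current model list.

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible API endpoint.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request against
    Groq's OpenAI-compatible API."""
    body = {
        # Assumed low-latency model; verify against Groq's model list.
        "model": "llama-3.1-8b-instant",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{GROQ_BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request("Hello")
```

Because the request shape is identical to OpenAI's, the rest of the pipeline (transcription in, response out) would not need to change.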
Fantastic work, thank you. I’m just starting to play with it. If you do make the switch to the Realtime API, it would be great if there was an option to use an Azure deployment of OpenAI Realtime API.
Wow, really nice work with lots of attention to detail!
On my wish list I would add the option to set an OpenAI base URL, so we can point to a proxy such as LiteLLM or a compatible provider such as OpenRouter.
LiteLLM can serve as a proxy for Azure OpenAI, including the realtime-preview model.
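The appeal of a configurable base URL is that the same request code works against any OpenAI-compatible backend. A minimal sketch, assuming a LiteLLM proxy on its default local port and placeholder model names:

```python
import json
import urllib.request

def build_completion_request(base_url: str, api_key: str,
                             model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request against any compatible
    base URL (OpenAI itself, a LiteLLM proxy, OpenRouter, ...)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Same code, two backends -- only the base URL and model change.
# API keys and model aliases here are placeholders.
openai_req = build_completion_request(
    "https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "Hello")
proxy_req = build_completion_request(
    "http://localhost:4000", "sk-litellm", "azure-gpt", "Hello")
```

With this pattern, routing Vagent through LiteLLM (and from there to Azure OpenAI or other providers) would be a configuration change rather than a code change.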