Hello everyone,
I want to build my own AI that can listen and talk (everything running locally). I found a good codebase that has many of the functions I want, but it is written for Ollama and other LLMs, not n8n. Since the code just talks to Ollama over an HTTP endpoint, I can change that URL so it sends its requests to n8n instead. However, n8n does not seem to respond correctly: no error codes are returned unless I use my SQL DB. My question is: how can I combine these two?
The code in question: https://github.com/t41372/Open-LLM-VTuber (talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking face, running locally across platforms).
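To make the problem concrete, here is a minimal sketch of what I understand happens when I point the app's base URL at n8n instead of Ollama. The webhook path, port, and model name below are placeholders from my experiments, not the project's real config, and I am assuming the app sends an OpenAI-style chat request the way it would to Ollama:

```python
import requests

# Placeholder webhook URL: normally this would be Ollama's endpoint
# (e.g. http://localhost:11434/...), swapped for my n8n webhook.
N8N_WEBHOOK = "http://localhost:5678/webhook/chat"  # example path, not my real one

# Roughly the kind of chat payload the VTuber code sends to its LLM backend.
payload = {
    "model": "llama3",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
}

resp = requests.post(N8N_WEBHOOK, json=payload, timeout=60)
print(resp.status_code, resp.text)

# My understanding: for the VTuber code to work, the n8n workflow would
# have to answer in the same shape the app expects from Ollama, e.g. a
# "Respond to Webhook" node returning something like:
# {"choices": [{"message": {"role": "assistant", "content": "..."}}]}
```

When I run something like this against my workflow, n8n accepts the request but the app gets nothing usable back, which makes me suspect the response body or format is the problem rather than the connection itself.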
I am running n8n in Docker, and I am on Windows 11.
I would appreciate your help.
Best regards