Using external code for voice control, but it was originally designed to use Ollama

Hello everyone,

I want to create my own AI that can listen and talk (everything running locally). I found a good project that has many of the functions I want, but it is written for Ollama and other LLMs, not n8n. If I keep the Ollama integration in this code and just change the URL, it does send the request to n8n, but n8n does not seem to respond correctly: no error codes are given unless I use my SQL DB. My question is: how can I combine these two?

The code in question: GitHub - t41372/Open-LLM-VTuber: Talk to any LLM with hands-free voice interaction, voice interruption, and Live2D talking face running locally across platforms

I am running n8n in Docker on Windows 11.
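Since Open-LLM-VTuber speaks the Ollama API but an n8n Webhook node expects (and returns) its own JSON shape, one way to bridge them is a small adapter that reshapes the request and response between the two. Below is a minimal sketch in Python. The webhook path (`/webhook/chat`) and the field names (`chatInput`, `output`) are assumptions and must be matched to your actual n8n workflow; only the default n8n port 5678 is standard.

```python
# Hypothetical adapter between an Ollama-style /api/chat payload and an
# n8n webhook. Field names and the webhook path are assumptions -- adjust
# them to match your own n8n workflow.
import json
import urllib.request

# Assumed webhook URL; 5678 is n8n's default port, the "/webhook/chat"
# path is whatever you configured in your Webhook node.
N8N_WEBHOOK_URL = "http://localhost:5678/webhook/chat"

def ollama_to_n8n(ollama_payload: dict) -> dict:
    """Flatten an Ollama /api/chat request into a simple webhook body."""
    messages = ollama_payload.get("messages", [])
    # Take the most recent user message as the main input.
    last_user = next((m["content"] for m in reversed(messages)
                      if m.get("role") == "user"), "")
    return {"chatInput": last_user, "history": messages}

def n8n_to_ollama(n8n_reply: dict, model: str) -> dict:
    """Wrap the n8n workflow output back into an Ollama-style chat reply."""
    text = n8n_reply.get("output") or n8n_reply.get("text", "")
    return {
        "model": model,
        "message": {"role": "assistant", "content": text},
        "done": True,  # non-streaming reply
    }

def ask_n8n(ollama_payload: dict) -> dict:
    """POST the adapted request to n8n and return an Ollama-shaped reply."""
    body = json.dumps(ollama_to_n8n(ollama_payload)).encode("utf-8")
    req = urllib.request.Request(
        N8N_WEBHOOK_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return n8n_to_ollama(json.load(resp),
                             ollama_payload.get("model", "n8n"))
```

Note that this handles only non-streaming responses; if the client insists on Ollama's streaming format, the adapter would also have to emit newline-delimited JSON chunks.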

I would appreciate your help.

Best regards

It looks like your topic is missing some important information. Could you provide the following if applicable:

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.