Could someone please do a video on n8n setup using Ollama, Docker, and the self-hosted-ai-starter-kit?
I spent most of the weekend reading through everything, trying to get the base URL accepted so that I could see the Ollama models I have installed on my Ubuntu computer, but I was not successful. The videos I watched are for Windows and Mac, and there are obviously different settings needed for Linux-based systems, so an instructional video would be extremely appreciated.
Hi @bartv, yes, I was using the self-hosting repo. I can clone everything, spin up Docker, and get the GUI started, but when I go to change the base URL so I can see the LLM models that I have under Ollama, I am continuously met with an error when I try to save the base URL.
I have changed various environment variables inside of the Ollama service file and inside of the YAML file for Docker; nothing works. So seeing somebody walk through the actual steps using Ubuntu, Ollama, the self-hosted starter kit, and Docker would be very helpful, because that person would run into the same issue that I'm running into and be able to show the solution.
The steps should be the same no matter which OS, which is why we took the Docker approach. If you are running Ollama locally on your host and everything else in Docker, you may need to set the host for Ollama so that Docker can connect to it.
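For anyone trying this on Ubuntu, here is a minimal sketch of that host-side setup (assuming Ollama was installed as a systemd service, which is what the official Linux installer sets up; verify the unit name on your machine):

# Make the host's Ollama listen on all interfaces instead of only 127.0.0.1
sudo systemctl edit ollama.service
# In the override file that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Sanity check: this should list your locally installed models
curl http://127.0.0.1:11434/api/tags

With that in place, containers can reach the host's Ollama through the Docker bridge.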
Found an initial working method. When following the NVIDIA instructions provided on the n8n site, after cloning the self-hosted-ai-starter-kit repo, go into that repo directory, then using nano/vim edit the docker-compose.yml file and change the ports for Ollama (see below) to prevent port conflicts with the Ollama instance already running on the host.
ports:
  - "11435:11434"
Save the file, then run: docker compose --profile gpu-nvidia up. Depending on your GPU, this will take a while to run (I have an RTX 3060). This method still pulls the included Llama 3 model, not the LLMs that I have stored on my Ubuntu machine; I will figure that out another time, but this is a start.
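For reaching the host's models instead of the bundled container, here is an untested sketch of what should work, using Docker's standard host-gateway alias so a container can call back to the host (the service name "n8n" is an assumption; adjust it to whatever the kit's docker-compose.yml actually uses):

# docker-compose.yml — let the n8n container resolve the host machine
services:
  n8n:
    extra_hosts:
      - "host.docker.internal:host-gateway"

Then, in the n8n Ollama credentials, set the base URL to http://host.docker.internal:11434. This assumes the host's Ollama is still listening on 11434 and on all interfaces, per the OLLAMA_HOST note earlier in this thread.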