I just installed the software and I'm exploring its features; it looks great so far! However, I ran into a little snag. It came with the Llama 3.1 model by default, and I decided to upgrade to the 70B model through the Ollama WebUI admin panel. The panel indicates that the download was successful, but the model isn't appearing in my n8n workflow; I can only see the default one. Also, I didn't install Ollama for Windows, since I assumed it would run inside Docker. Any tips or advice would be greatly appreciated!
Thanks for posting here and welcome to the community!
Glad you figured it out!
We might actually add a port mapping in the Docker Compose file to expose the Ollama port on the host, so that it's more obvious which Ollama instance is being used.
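For anyone who lands here later, a minimal sketch of what that mapping could look like (the service name `ollama` and the volume name are assumptions; adjust them to match your actual compose file):

```yaml
# docker-compose.yml (sketch; service and volume names are assumptions)
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"   # publish Ollama's default API port on the host
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts

volumes:
  ollama:
```

With the port published, running `curl http://localhost:11434/api/tags` on the host lists the models the containerized Ollama actually has, which makes it easy to check whether n8n and the WebUI are talking to the same instance.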