In the Ollama Model node, I set the credentials successfully. Ollama is up and running, but the llama2 model is not accessible. I tried orca-mini as well, but that didn't work either.
Secondly, I can’t get a list of the models either.
Thank you!
What is the error message (if any)?
Error in sub-node 'Ollama Model': fetch failed
Issues:
There was a problem loading the parameter options from server: "The service refused the connection - perhaps it is offline"
ERROR: fetch failed
Time: 14.11.2023 17:18:49
How are you running n8n? I have seen similar issues when n8n is being run in Docker, as the 127.0.0.1 address is local to the container, not the host machine. In cases like this, setting the correct IP address tends to get it working.
You will need to configure Ollama to listen on an address that can be reached from inside a container. Failing that, you might be able to get away with something like 172.17.0.1 (the default Docker bridge gateway), but that depends on your configuration; see the quick check sketched below.
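For example, a quick reachability check, assuming your n8n container is named n8n and Ollama is reachable on the default Docker bridge gateway (the container name and the IP are assumptions, adjust them to your setup):

# from the host: Ollama's /api/tags endpoint lists the locally available models
curl http://172.17.0.1:11434/api/tags
# from inside the n8n container: the same request must also succeed,
# otherwise the Ollama Model node cannot reach the service
docker exec -it n8n wget -qO- http://172.17.0.1:11434/api/tags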
ERROR: Couldn’t connect with these settings Request failed with status code 403
Can anybody help me?
Using n8n:ai-beta on WSL2 Docker Desktop, and Ollama
I can access Ollama:
kmkarakaya@DESKTOP-AMT61DR:~$ curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
{"model":"llama2","created_at":"2023-11-15T20:04:53.31877153Z","response":" The sky appears blue because of a phenomenon called Rayleigh scattering, which occurs when sunlight enters Earth's atmosphere. Note: This answer is a simplified explanation and does not take into account all the complexities of the atmosphere and light scattering.\n\nWhen sunlight enters the atmosphere, it encounters tiny molecules…}
If you are still using localhost or 127.0.0.1 it won't work, as that address is local to the machine it is used on: on your desktop it points at the desktop itself, but inside a container it points at the container.
You will need to set the OLLAMA_HOST environment variable for Ollama to the address you want it to listen on. I would start with OLLAMA_HOST=0.0.0.0, which makes it bind to all addresses; it should then be reachable via the IP of your desktop, or of the Ollama container if that is also running in Docker.
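A minimal sketch of both ways to do that (the image name and port flags are the usual defaults but are assumptions here, adjust them to your install):

# Ollama running directly on the host (e.g. inside WSL2): bind to all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve
# Ollama running in Docker: pass the variable and publish the port
docker run -d --name ollama -p 11434:11434 -e OLLAMA_HOST=0.0.0.0 ollama/ollama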
When both services run in Docker containers, you need to make sure they are connected to the same network so that they can communicate with each other. When both run locally this typically happens automatically via the default network called “bridge”. You can also put both services on the same network explicitly by passing the corresponding option when each container is created, or by creating your own custom network (see the sketch below).
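A minimal sketch of such a setup (the network name n8n_network and container name ollama match what is used below; the image tags are assumptions, adjust them to your versions):

# create a user-defined network
docker network create n8n_network
# start Ollama on that network
docker run -d --name ollama --network n8n_network -p 11434:11434 ollama/ollama
# start n8n on the same network
docker run -d --name n8n --network n8n_network -p 5678:5678 n8nio/n8n:ai-beta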
Once they are both on the same network, you can use the container name as the host. In the above example the container is named “ollama” and it listens on port 11434, so to set up the credentials you would use the Base URL http://ollama:11434.
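To confirm the container name resolves from inside n8n, something like this should return the list of installed models (assuming the n8n container is named n8n and its image ships BusyBox wget):

# list the models Ollama exposes, resolved via the container name
docker exec -it n8n wget -qO- http://ollama:11434/api/tags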
Alternatively, if no name was given to the container, you can look up the container's IP address (using the network name from the example above, “n8n_network”):
# check all the networks
docker network ls
# check the containers connected to your network
docker network inspect n8n_network
>>>
"Containers": {
"0ad64eeaa1845d254a0e26c672912b1d36fe1db91b280ca352d371e5e62cf38d": {
"Name": "ollama",
"EndpointID": "7c7932518a0376077435be0e0708999e56ae8fc7ad3f3af990b82da27f7e9390",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
Note that the IP address of the “ollama” container is 172.18.0.3, hence a Base URL of http://172.18.0.3:11434 should also work.
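If the containers were created earlier without the shared network, they can also be attached to it afterwards rather than recreated; a small sketch with assumed container names ollama and n8n:

# attach already-running containers to the shared network
docker network connect n8n_network ollama
docker network connect n8n_network n8n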