Ok, so we now know that accessing Ollama from the n8n container is possible: we were able to reach it through port 11434 and see it listening on port 11434 in the netstat output.
Interesting…
Any chance you are running an old version of n8n?
I would expect it to fail, as 127.0.0.1 is local to the container, not the host, and since the service is not running in the container, nothing will be listening there.
Try using something like 172.17.0.1, or the IP of the host / container running the service.
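For reference, one way to find that address from inside a container is to read the default gateway from the routing table, which on the default Docker bridge network usually points at the host (a sketch, not something from this thread):

```shell
# Inside the n8n container: the default gateway usually points at the
# host's Docker bridge IP (172.17.0.1 on the default bridge network).
HOST_IP=$(ip route 2>/dev/null | awk '/default/ {print $3}')
echo "Host IP as seen from this container: $HOST_IP"
# Then try Ollama on that address instead of 127.0.0.1, e.g.:
#   curl http://$HOST_IP:11434/
```

This only applies when the container is on a bridge network; with `--network host` the container shares the host's stack and 127.0.0.1 already reaches host services.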
but the n8n container is bound to the host network, exposing it to the host network stack.
This is confirmed with `docker inspect` and also by running `curl http://127.0.0.1:11434` from the container itself.
You haven’t configured any proxies in this setup?
`env | egrep -i 'http_proxy|https_proxy|all_proxy|no_proxy'`
Any firewalls running?
As far as I know, there was no firewall or proxy set up to begin with.
So running `env | egrep -i 'http_proxy|https_proxy|all_proxy|no_proxy'` inside the n8n container would produce empty output?
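To make the expected behaviour concrete, here is a quick local demonstration of what that pipeline prints when a proxy variable is present versus absent (`proxy.local` is a made-up host, not from this setup):

```shell
# With a proxy variable set, the pipeline prints it; with a clean
# environment, it prints nothing ("proxy.local" is a made-up host).
env -i http_proxy=http://proxy.local:3128 sh -c \
  "env | egrep -i 'http_proxy|https_proxy|all_proxy|no_proxy'"
# prints: http_proxy=http://proxy.local:3128
```

So an empty result from that command inside the container would rule proxies out as the cause.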
By the way, what happens if you ignore the error message in the credentials and just try to use it anyway? Will it fail when an AI agent attempts to reach the model?
Also, what is your version of n8n? `n8n --version`
Let’s give it a go anyway
The netstat output also wasn’t showing n8n running, so maybe we should confirm the command was actually run in the n8n container.
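One way to remove that ambiguity is to run the check explicitly through `docker exec` (a sketch; `n8n` is an assumed container name, and each line is guarded so it is a no-op where the tool is unavailable):

```shell
# Re-run the listening check explicitly inside the n8n container,
# so there is no doubt about where netstat actually executed:
command -v docker >/dev/null && docker exec n8n sh -c "netstat -tln | grep 11434" || true

# The same check on the host for comparison; 11434 should appear here
# only if Ollama is really listening on the host:
command -v netstat >/dev/null && netstat -tln | grep 11434 || true
```

If the port shows up on the host but not in the container (or vice versa), that immediately tells you which network namespace the earlier netstat was run in.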
Ah, good spot, I missed that. I will go back to bed
I should probably do that too, given it’s almost 2am
Almost 6am here
Ouch, literally nothing could make me get out of bed at this time, haha
Sorry for the delay here.
There is no output, neither in the host terminal nor inside bash in the n8n container.
The n8n version is 1.103.2.
Please get some rest. Thank you for all your kind help
The solution in this case was running Ollama in Docker.
Hello again!
That would most definitely help, as would not running containers bound to the host network.
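For anyone landing here later, a minimal sketch of that setup puts both containers on a shared bridge network so n8n can reach Ollama by service name instead of loopback or host IPs (service, network, and port mappings below are illustrative, not taken from this thread):

```yaml
# docker-compose.yml (illustrative): n8n and Ollama on one bridge
# network, so n8n reaches Ollama at http://ollama:11434 by service name.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    networks:
      - ai
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    networks:
      - ai
networks:
  ai:
    driver: bridge
```

With this layout neither container needs `--network host`, and the Ollama credential in n8n would use the base URL `http://ollama:11434`.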