Hello everyone. I did everything needed to run n8n with Ollama locally (installing Ollama, downloading some models, the n8n AI starter kit, installing Docker, …). I accessed n8n locally at http://localhost:5678/ as guided (Docker settings attached), but when I try to connect the agent to the Ollama model (base URL http://localhost:11434, also as guided), it doesn't connect. I've uploaded the errors from both the n8n and Docker sides.
When I try to connect from n8n, it errors:
```
The service refused the connection - perhaps it is offline
n8n  | connect ECONNREFUSED ::1:11434
```
Can you guys help me with this? I'd really appreciate it. I've been thinking of just breaking this PC for two days lol.
Each container's ports are published to your host, but inside the n8n container, localhost points at the container itself, not at Ollama.
A solution using Docker Compose would be relatively simple.
All containers then run on one shared network, and you can reach other containers by their service name (e.g., ollama:11434) instead of localhost:11434.
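For example, a minimal compose file could look like the sketch below (a sketch only, not the starter kit's own file; the service and volume names are my own choices):

```yaml
# docker-compose.yml — minimal sketch, not the official starter kit file
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"              # optional: lets you reach Ollama from the host too
    volumes:
      - ollama_data:/root/.ollama  # keeps pulled models across container restarts

  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"                # n8n UI at http://localhost:5678
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  ollama_data:
  n8n_data:
```

With that, the Ollama base URL in n8n becomes http://ollama:11434, since Compose puts both services on a shared default network.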
Without Docker Compose, you first have to create a network:

```
docker network create example-network
```

Then start the containers on that network:

```
docker run -d --name ollama --network example-network -p 11434:11434 ollama/ollama:latest
docker run -d --name n8n --network example-network -p 5678:5678 docker.n8n.io/n8nio/n8n
```
Then you can reach Ollama from your n8n container at http://ollama:11434.
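If you want to sanity-check the network before going back to n8n, one option (just a sketch; curlimages/curl is simply a throwaway container with curl as its entrypoint) is to hit Ollama's /api/tags endpoint, which lists the models the server knows about:

```
# One-off container on the same network; /api/tags lists available models
docker run --rm --network example-network curlimages/curl http://ollama:11434/api/tags
```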
THANK YOU SO MUCH, SIR, IT'S CONNECTED NOW, REALLY APPRECIATED! The only thing is that I downloaded three models for Ollama, but I can't find or select them in the n8n model setup. There's only llama3.2. Can you please help me with that too? Thanks!
Also, since I'm a super amateur, I just ran those three commands at the end of your answer in the Docker terminal, and it worked and connected just fine. Maybe I didn't follow your instructions completely somewhere, since I'm not very skilled.
Hello and THANK YOU AGAIN, SIR.
Thanks to your help, I realized I hadn't downloaded the models inside the container, so I pulled one of them with this command:

```
docker exec ollama ollama pull gemma3:4b
```
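In case it helps anyone else: I think the models that actually live inside the container (rather than on the host) can be listed with:

```
# List the models the Ollama server inside the container knows about
docker exec ollama ollama list
```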
When I got back to n8n, a new model had fortunately been added, but I don't know why it doesn't respond to my input.
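In case a direct test outside n8n would help narrow it down, would something like this tell us anything (assuming the container is still named ollama, as above)?

```
# One-shot prompt straight to the model, bypassing n8n entirely
docker exec ollama ollama run gemma3:4b "Say hello"
```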
I'm really sorry to bother you so much. I'd really appreciate it if you could help me with this as well. Thanks again.