Unable to Connect n8n to Local Ollama Instance (ECONNREFUSED Error)

Describe the problem/error/question

I am trying to integrate n8n with Ollama, which is installed locally on my PC. However, I am unable to establish a connection. I have tried configuring the Base URL with the following values:

  • http://localhost:11434/
  • http://127.0.0.1:11434/

Despite multiple attempts, the connection fails, displaying an ECONNREFUSED error.

I have searched the documentation, YouTube tutorials, and online resources, but none of them mention encountering this issue. Most guides simply use one of the above URLs, and Ollama connects successfully.

I would appreciate any guidance on resolving this issue.

What is the error message (if any)?

Couldn’t connect with these settings ECONNREFUSED


Please share your workflow/screenshots/recording

(Screenshots attached)

Screenshot 1
Screenshot 2

Share the output returned by the last node

N/A (The issue occurs during the connection setup, preventing workflow execution.)

[details="instance information"]

Debug info

core

  • n8nVersion: 1.82.3
  • Platform: Windows 10 (Local Installation)
  • Node.js Version: 20.18.3
  • Database: SQLite
  • Execution Mode: Regular
  • Concurrency: 5
  • License: Community

storage

  • Success logs: All
  • Error logs: All
  • Progress tracking: Disabled
  • Manual executions: Enabled
  • Binary Mode: Filesystem

pruning

  • Enabled: Yes
  • Max Age: 168 hours
  • Max Count: 2500 executions

client

  • User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
  • Is Touch Device: No

Generated at: 2025-03-18T12:07:41.379Z
[/details]


@Sahil_Kayastha can you give a little more detail about your Ollama installation? Is the GUI at http://localhost:3000 accessible? Is there a local firewall blocking the port?
Do you know this doc: ollama common-issues

I have the same issue. @Sahil_Kayastha did you manage to find a solution?

In my case I’m running Ollama locally (not in a container) and my n8n workflow is hosted in the n8n cloud. Both http://localhost:11434/ and http://127.0.0.1:11434/ return a 200 status with the "Ollama is running" text. According to the documentation this should be enough, but I am also getting ECONNREFUSED.

I still get the same error. I have tried [localhost]/api/tags in an n8n HTTP Request node, which results in the same error, whereas a curl or a call from the browser is OK. This means the issue is "simply" that an n8n Cloud instance cannot reach localhost on my machine.

Setting up n8n locally resolves the issue, although it means importing or recreating your workflow locally - I’m a novice on this platform, so maybe there’s an easy way to transfer workflows between the cloud and local (see the sketch after the setup commands below).

To set up n8n locally:

npm install -g n8n
n8n start
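
On transferring workflows: you can usually download a workflow as JSON from the cloud editor’s menu, and a local instance can import it via the CLI. A minimal sketch, assuming the n8n CLI on your version still ships these subcommands (the file name is arbitrary):

# export all workflows from a local instance into one JSON file
n8n export:workflow --all --output=workflows.json

# import workflows from that file into another local instance
n8n import:workflow --input=workflows.json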

Allowing connections from n8n Cloud to your local Ollama could also work, but it raises security issues you would have to manage, so I’m NOT recommending it.

If you want Ollama to listen on all network interfaces on your machine, then you’ll need to set the following environment variable:

export OLLAMA_HOST="0.0.0.0"

Then just start ollama with: ollama serve.
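
The export syntax above is for Linux/macOS. Since the original post is on Windows 10, the equivalent there would be the following, assuming a PowerShell session (my assumption, not stated in the thread):

# PowerShell: set the variable for the current session, then start the server
$env:OLLAMA_HOST = "0.0.0.0"
ollama serve

# or persist it for future sessions (takes effect in newly opened terminals)
setx OLLAMA_HOST "0.0.0.0"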

There is a nice FAQ on how to run Ollama as a server instance: ollama/docs/faq.md at main · ollama/ollama · GitHub

Also, if your Ollama server is behind a firewall (NAT etc.), you’ll need to port-forward the port from your firewall/router to your machine.
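
Once that’s in place, a quick way to confirm Ollama is reachable from outside the machine (192.168.1.50 is a placeholder for your host’s actual LAN IP):

# run from another machine on the network; a JSON list of models means it works
curl http://192.168.1.50:11434/api/tags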


Turn off your IPv6, set the URL to http://127.0.0.1:11434 (or your real IP), then restart n8n. With IPv6 enabled, localhost can resolve to ::1 first, where Ollama may not be listening.
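
A quick way to test whether IPv6 resolution really is the culprit (curl’s -4/-6 flags force the address family):

# IPv4 path: should answer with "Ollama is running"
curl -4 http://localhost:11434/

# IPv6 path (::1): connection refused if Ollama only listens on IPv4
curl -6 http://localhost:11434/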

Connect with this:
http://host.docker.internal:11434
I had the same issue when running the Ollama models locally (not in a container) while n8n was running in Docker.
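
One caveat worth adding: host.docker.internal resolves out of the box on Docker Desktop (Windows/macOS), but on plain Linux you have to map it yourself when starting the n8n container, for example:

docker run -it --rm --name n8n -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n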


@Sahil_Kayastha were you able to find a solution?

You get this error because n8n, inside its own container, cannot see the model in Ollama.

Solution 1:

Create a network:

docker network create n8n-network

Run Ollama on this network:

docker run -d --name ollama --network n8n-network -p 11434:11434 ollama/ollama

Run n8n on the same network:

docker run -it --rm --name n8n --network n8n-network -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

In the n8n Ollama credentials, use:

http://ollama:11434
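
Before wiring up the credentials, you can confirm the two containers actually see each other by querying Ollama from inside the n8n container (the n8n image is Alpine-based, so busybox wget should be available):

# a JSON model list here means the n8n container resolves and reaches "ollama"
docker exec n8n wget -qO- http://ollama:11434/api/tags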

Solution 2:

You can also use a Docker Compose file (docker-compose.yml) to do the same thing:

version: '3.8'
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  n8n:
    image: docker.n8n.io/n8nio/n8n
    container_name: n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - ollama
    environment:
      - N8N_HOST=0.0.0.0

volumes:
  ollama_data:
  n8n_data:
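
Then, assuming the file is saved as docker-compose.yml in the current directory, bring both services up with:

docker compose up -d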

Use http://ollama:11434

And that’s it!