Issues running n8n with Ollama locally

Hello everyone. I did everything needed to run n8n with Ollama locally (installing Ollama, downloading some models, the n8n AI starter kit, installing Docker, …). I accessed n8n locally at http://localhost:5678/ as guided, and here's my Docker setup. But when I try to connect the agent to the Ollama model (with the base URL http://localhost:11434, also as guided), it doesn't connect. I've uploaded the errors from both the n8n and Docker sides.

When I try to connect from n8n, I get this error:
The service refused the connection - perhaps it is offline. n8n | connect ECONNREFUSED ::1:11434

Can you guys help me with this? I'd really appreciate it. I've been thinking of just breaking this PC for two days lol.





services:
  database:
    image: postgis/postgis:13-master
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: "n8n"
    healthcheck:
      test: [ 'CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d n8n' ]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: n8n
    restart: always
    build:
      dockerfile: n8n/Dockerfile
    ports:
      - 5678:5678
    volumes:
      - ./n8n/data:/home/node/.n8n
      - ./n8n/credentials:/home/node/credentials
      - ./n8n/workflows:/home/node/workflows
    environment:
      DB_POSTGRESDB_DATABASE: "n8n"
      DB_POSTGRESDB_HOST: "database"
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_POSTGRESDB_PORT: "5432"
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_TYPE: "postgresdb"
      DOMAIN_NAME: "n8n.example.com"
      EXECUTIONS_DATA_MAX_AGE: 168
      EXECUTIONS_DATA_PRUNE: true
      EXECUTIONS_DATA_PRUNE_MAX_COUNT: 50000
      EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS: false
      EXECUTIONS_DATA_SAVE_ON_ERROR: "none"
      EXECUTIONS_DATA_SAVE_ON_PROGRESS: false
      EXECUTIONS_DATA_SAVE_ON_SUCCESS: "none"
      GENERIC_TIMEZONE: "Europe/Vienna"
      N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: true
      N8N_LOG_LEVEL: "error"
      NODE_FUNCTION_ALLOW_BUILTIN: "*"
      NODE_FUNCTION_ALLOW_EXTERNAL: "*"
      TZ: "Europe/Vienna"
      WEBHOOK_URL: "https://n8n.example.com"
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
    links:
      - database
      - redis
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy

  redis:
    image: redis:latest
    restart: always
    ports:
      - "6379:6379"
    environment:
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      REDIS_PORT: 6379
      REDIS_DATABASES: 16
    volumes:
      - ./redis/data:/data
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 10

  ollama:
    image: ollama/ollama:latest
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    environment:
      - OLLAMA_MODELS=/root/.ollama/models

How do you start the container?

The ports are only published to your host; the containers can't reach each other through localhost.

A solution using Docker Compose would be relatively simple.

All containers would then run in a network.

You can access other containers using the service name (e.g., ollama:11434) instead of localhost:11434.
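The Compose approach can be sketched as a minimal file (a sketch only; image tags are the official ones, and your existing file already defines an ollama service, so the main change is the base URL in the n8n credentials):

```yaml
# docker-compose.yml — minimal sketch, assuming the official images
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    # In the n8n Ollama credentials, set the base URL to
    # http://ollama:11434 — the service name, not localhost.

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
```

Compose puts every service in the file on a shared default network, and Docker's embedded DNS resolves each service name to its container's IP.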

Without Docker Compose, you have to create a network first:
docker network create example-network
Then you have to start the containers on that network:
docker run -d --name ollama --network example-network -p 11434:11434 ollama/ollama:latest
docker run -d --name n8n --network example-network -p 5678:5678 docker.n8n.io/n8nio/n8n
Then you can access ollama from your n8n container with ollama:11434.
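To confirm the two containers can actually reach each other, you could try something like this (a sketch; it assumes the n8n container is named n8n and that its image ships wget, as the Alpine-based official image does):

```shell
# Call the Ollama API from inside the n8n container using the service name
docker exec n8n wget -qO- http://ollama:11434/api/tags
```

If this prints a JSON list of installed models, the network path n8n uses is working; an ECONNREFUSED here means the containers are not on the same network.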


THANK YOU SO MUCH, SIR, IT'S NOW CONNECTED. REALLY APPRECIATED. The only thing is that I actually downloaded 3 models for Ollama, but I can't find or select them in the n8n model setup. There's only Ollama 3.2. Can you please help me with that too? Thanks.
Also, since I'm a complete beginner, I just ran the three lines at the end of your answer in the Docker terminal, and it connected just fine. Maybe I didn't follow your instructions completely somewhere because I'm not very skilled.





Thanks again.

Did you download the 3 models locally on Windows instead of in the container?

Please try running docker exec -it ollama ollama list. The first “ollama” is the name of your container.

The screenshot briefly explains how to find the container name (mine is n8n because I don’t have ollama running).
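In case the screenshot doesn't come through, finding the container name from the command line might look like this (output columns assumed):

```shell
# Show the names of all running containers
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"

# Then use the name from the first column, e.g.:
docker exec -it <container-name> ollama list
```

`<container-name>` is a placeholder; substitute whatever name `docker ps` reports for your Ollama container.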


Hello, and THANK YOU AGAIN, SIR.
Thanks to your help, I realized I hadn't downloaded the models into the container, so I downloaded one of them with this command in Docker:
docker exec ollama ollama pull gemma3:4b
When I got back to n8n, a new model was fortunately added, but I don't know why it doesn't respond to my input.
I'm really sorry to bother you so much. I'd be very grateful if you could help me with this as well. Thanks again.



Edit: I added 'deepseek-r1:8b' to the models in Docker as well, and this time it works just fine in my local n8n. I don't know what gemma3's problem is.
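For anyone hitting the same thing: one way to narrow it down (a sketch, assuming the container is named ollama) is to run the model directly inside the container, which separates an Ollama or model problem from an n8n configuration problem:

```shell
# Prompt the model directly, bypassing n8n entirely
docker exec -it ollama ollama run gemma3:4b "Say hello in one sentence."
```

If this hangs or errors too, the issue is the model or the container's resources (gemma3:4b needs several GB of free memory), not the n8n side.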


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.