Chat returns an error when asked a question from the public chat on the same local network

Describe the problem/error/question

I have a basic chat workflow that works correctly when I activate “Make Chat Publicly Available”: if I open the chat from localhost, it works great.

But if I open the chat from another computer on the same local network, replacing localhost with the server’s IP (and keeping the security token), the webchat interface appears, but when I ask a question it responds with “Error: Failed to receive response”.
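For anyone trying to reproduce the reachability side of this, a basic check from the second machine would be something along these lines (just plain curl against the n8n port; nothing is assumed beyond the IP and port from my setup):

# From the other computer on the LAN: is anything answering on the n8n port over plain HTTP?
curl -v http://10.232.30.71:5678/

# My WEBHOOK_URL (see the compose file below) uses https, so also try TLS on the
# same port (-k skips certificate verification)
curl -vk https://10.232.30.71:5678/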

To repeat: on localhost everything works correctly. I have tried adding some settings to the Docker Compose YAML, as suggested in other posts, such as

- N8N_HOST=10.232.30.71

But these did not fix the issue. I even put them in two spots, since I wasn’t sure of the difference between the x-n8n section and the n8n section in the YAML.
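Side note in case anyone else is confused by the same thing: my understanding is that the x-n8n block is just a YAML anchor (&service-n8n) that the n8n and n8n-import services pull in via <<: *service-n8n, and keys defined directly on a service take precedence over the ones coming from the anchor. A quick way to see what each service actually ends up with, after the anchors and ${...} variables are resolved, is to render the merged configuration:

# Print the fully resolved compose configuration, with anchors merged and env vars substituted
docker compose config

# or, with the older v1 CLI:
docker-compose config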

I am at my wits’ end; some help would be greatly appreciated. :frowning:

My docker-compose.yml:

volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_HOST=10.232.30.71
    - N8N_LISTEN_ADDRESS=0.0.0.0
    - WEBHOOK_URL=https://10.232.30.71:5678
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_SECURE_COOKIE=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=${OLLAMA_HOST:-ollama:11434}
  env_file:
    - path: .env
      required: true

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=ollama:11434
  command:
    - "-c"
    - "sleep 3; ollama pull llama3.2"

services:
  postgres:
    image: postgres:16-alpine
    hostname: postgres
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    hostname: n8n-import
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/demo-data/credentials && n8n import:workflow --separate --input=/demo-data/workflows"
    volumes:
      - ./n8n/demo-data:/demo-data
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    hostname: n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=10.232.30.71
      - N8N_LISTEN_ADDRESS=0.0.0.0
      - WEBHOOK_URL=https://10.232.30.71:5678
      - N8N_DIAGNOSTICS_ENABLED=false
      - N8N_PERSONALIZATION_ENABLED=false
      - N8N_SECURE_COOKIE=false
      - N8N_ENCRYPTION_KEY
      - N8N_USER_MANAGEMENT_JWT_SECRET
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/demo-data:/demo-data
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    hostname: qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-gpu-amd:
    profiles: ["gpu-amd"]
    <<: *service-ollama
    image: ollama/ollama:rocm
    devices:
      - "/dev/kfd"
      - "/dev/dri"

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

  ollama-pull-llama-gpu-amd:
    profiles: [gpu-amd]
    <<: *init-ollama
    image: ollama/ollama:rocm
    depends_on:
      - ollama-gpu-amd

What is the error message (if any)?

Error: Failed to receive response

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

- n8n version: self-hosted AI starter kit
- Database (default: SQLite): Qdrant
- n8n EXECUTIONS_PROCESS setting (default: own, main): main
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Win11

Why don’t you make it publicly available using ngrok instead of exposing it to the local network?
That would solve other issues too.

I appreciate the helpful video, but I don’t want it to be available to anyone outside of the local network. From my understanding, ngrok would allow me to access it from another city, whereas I only want to be able to reach it from inside the local network.

FIXED - I changed the compose YAML, restarted Docker Desktop, rebooted, and tried all kinds of things, but nothing I changed in the YAML seemed to affect n8n.

It turns out I had to go to the command line and manually run the command

docker-compose up

This started all of the containers in Docker Desktop (except for Ollama, oddly), and now it works.
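Looking back at the compose file, I suspect the reason Ollama didn’t come up is that all of the ollama-* services sit behind Compose profiles (cpu, gpu-nvidia, gpu-amd), and services with a profiles: entry are only started when that profile is explicitly enabled. If that’s right, something like this should start it as well (using the CPU profile as an example):

# Services that declare profiles are skipped by a plain "docker-compose up",
# so enable the profile that matches the hardware
docker compose --profile cpu up -d

# or, with the older v1 CLI used above:
docker-compose --profile cpu up -d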
