Describe the problem/error/question
I have a basic chat that works correctly when I activate “Make Chat Publicly Available”: if I make the call from localhost, it works great.
But if I make the call from another computer on the same local network, swapping localhost for the host's IP (along with the security token), the webchat interface appears, but when I ask a question it responds with Error: Failed to receive response
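For reference, these are the kinds of checks I can run from the second computer. The webhook ID below is a placeholder for the one shown in the Chat Trigger node, and the payload shape is my assumption based on the @n8n/chat widget, so treat it as a sketch:

    # From the second computer: can n8n even be reached on port 5678?
    curl -v http://10.232.30.71:5678/healthz

    # And the chat webhook itself (YOUR-WEBHOOK-ID is a placeholder;
    # payload format assumed from the @n8n/chat widget):
    curl -v -X POST http://10.232.30.71:5678/webhook/YOUR-WEBHOOK-ID/chat \
      -H "Content-Type: application/json" \
      -d '{"action":"sendMessage","sessionId":"test","chatInput":"hello"}'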
Again, I stress that on localhost everything works correctly. Following some other posts, I have tried adding a few things to the Docker Compose YML, such as:
- N8N_HOST=10.232.30.71
- N8N_LISTEN_ADDRESS=0.0.0.0
- WEBHOOK_URL=https://10.232.30.71:5678
But these do not fix the issue. I even put them in two spots, since I wasn't sure of the difference between the x-n8n section and the n8n section in the YML (see the sketch just after this paragraph).
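From what I've since read (so take this with a grain of salt), x-n8n: &service-n8n is just a YAML anchor: top-level x- keys are ignored by Compose, and <<: *service-n8n pastes the anchor's keys into a service. A key repeated in the service itself, like environment:, replaces the anchor's list wholesale rather than merging item by item. A minimal sketch of the pattern, with illustrative names:

    x-base: &base            # "x-" keys are ignored by Compose; &base is a YAML anchor
      image: n8nio/n8n:latest
      environment:
        - DB_TYPE=postgresdb

    services:
      n8n:
        <<: *base            # pastes the keys from x-base into this service
        environment:         # NOTE: this list REPLACES the anchor's list entirely,
          - N8N_HOST=0.0.0.0 # it does not merge item-by-item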
I am at my wits' end; some help would be greatly appreciated.
My docker YML:
volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_HOST=10.232.30.71
    - N8N_LISTEN_ADDRESS=0.0.0.0
    - WEBHOOK_URL=https://10.232.30.71:5678
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_SECURE_COOKIE=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=${OLLAMA_HOST:-ollama:11434}
  env_file:
    - path: .env
      required: true

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=ollama:11434
  command:
    - "-c"
    - "sleep 3; ollama pull llama3.2"

services:
  postgres:
    image: postgres:16-alpine
    hostname: postgres
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    hostname: n8n-import
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/demo-data/credentials && n8n import:workflow --separate --input=/demo-data/workflows"
    volumes:
      - ./n8n/demo-data:/demo-data
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    hostname: n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=10.232.30.71
      - N8N_LISTEN_ADDRESS=0.0.0.0
      - WEBHOOK_URL=https://10.232.30.71:5678
      - N8N_DIAGNOSTICS_ENABLED=false
      - N8N_PERSONALIZATION_ENABLED=false
      - N8N_SECURE_COOKIE=false
      - N8N_ENCRYPTION_KEY
      - N8N_USER_MANAGEMENT_JWT_SECRET
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/demo-data:/demo-data
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    hostname: qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-gpu-amd:
    profiles: ["gpu-amd"]
    <<: *service-ollama
    image: ollama/ollama:rocm
    devices:
      - "/dev/kfd"
      - "/dev/dri"

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

  ollama-pull-llama-gpu-amd:
    profiles: [gpu-amd]
    <<: *init-ollama
    image: ollama/ollama:rocm
    depends_on:
      - ollama-gpu-amd
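Since I wasn't sure whether the variables in the x-n8n section actually reach the n8n service, here are the standard commands (as far as I know) to inspect the resolved config and the container's environment:

    # Show the fully resolved compose file, with x-n8n merged into each service:
    docker compose config

    # Inspect the environment inside the running n8n container:
    docker exec n8n sh -c "printenv | grep -E 'N8N_|WEBHOOK'"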
What is the error message (if any)?
Error: Failed to receive response
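That message only appears in the chat widget itself; to capture more detail, I can tail the container logs while reproducing it from the second computer:

    # Tail n8n's logs while sending a message from the second computer:
    docker logs -f n8n

The browser's network tab on that machine should also show exactly which URL the widget POSTs to (including whether it is http or https).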
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: self hosted ai starter kit
- Database (default: SQLite): Qdrant
- n8n EXECUTIONS_PROCESS setting (default: own, main): main
- Running n8n via (Docker, npm, n8n cloud, desktop app): DOCKER
- Operating system: Win11