Error “Could not connect to your MCP server” when integrating external tool via SSE in AI Agent

I am hosting my n8n in a docker container.
It also failed with the same message when I tested it.
I changed the MCP Client SSE Endpoint URL as follows: I replaced localhost with host.docker.internal in the parameter.

For example:

http://localhost:5678/mcp/mytools/sse  →  http://host.docker.internal:5678/mcp/mytools/sse

“localhost” did not work because the container could not resolve that name.

This is an interesting approach, thanks. :+1:

Anyway, try using the container name instead; it will work fine in your case and is the more correct approach.

The container name resolves directly to the container’s current (dynamic) IP via Docker’s internal DNS.
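For instance, a minimal Docker Compose sketch of this setup (the service names n8n and nginx are assumptions, not taken from the thread) where the proxy reaches n8n by its service name rather than by localhost or an IP:

```yaml
# Sketch only: with both services on the same default Compose network,
# nginx can proxy_pass to http://n8n:5678 -- Docker's embedded DNS
# resolves the service name to the container's current IP.
services:
  n8n:
    image: n8nio/n8n:latest
    expose:
      - "5678"
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - n8n
```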

Specialized Guide: Fixing “Could not connect to your MCP server” in self‑hosted n8n

Goal: Document the root cause and the definitive fix so you can replicate (or adapt) it whenever you deploy n8n elsewhere.


1 Environment snapshot

| Component        | Detail                                                          |
|------------------|-----------------------------------------------------------------|
| n8n              | v1.88.0 in Docker, internal port 5678                           |
| Reverse proxy    | Nginx (ports 80/443) with Let’s Encrypt certs                   |
| Hosting          | Virtual machine / droplet (e.g., DigitalOcean)                  |
| Feature affected | MCP Server Trigger & MCP Client living in the same n8n instance |

No private IPs, keys or credentials appear in this guide.


2 Symptom

Running the MCP Client node throws:

Error: Could not connect to your MCP server

3 Root causes

| # | Cause       | Explanation                                                                                                                     |
|---|-------------|---------------------------------------------------------------------------------------------------------------------------------|
| 1 | Wrong URL   | https://your-domain.com:5678/… was used. Port 5678 only speaks HTTP ⇒ TLS handshake fails.                                      |
| 2 | Nginx & SSE | Nginx had gzip on and proxy_buffering on. Server‑Sent Events need a raw, continuous stream; compression or buffering breaks it. |
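To see why buffering breaks the protocol, here is a minimal sketch (plain Python, not n8n code) of how an SSE stream is framed: each event is a block of `field: value` lines terminated by a blank line, so a proxy that buffers or compresses the body withholds those boundaries and the client times out waiting for a complete event.

```python
# Minimal SSE frame parser: events are "field: value" lines separated
# by a blank line. If a proxy buffers the stream, the blank-line
# boundaries arrive late (or all at once), so the client never sees a
# complete event in time.
def parse_sse(raw: str):
    events = []
    for block in raw.split("\n\n"):
        fields = {}
        for line in block.splitlines():
            if ":" in line:
                field, _, value = line.partition(":")
                fields.setdefault(field.strip(), []).append(value.strip())
        if fields:
            events.append({k: "\n".join(v) for k, v in fields.items()})
    return events

stream = 'event: message\ndata: {"tool": "ping"}\n\nevent: message\ndata: done\n\n'
print(parse_sse(stream))
```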

4 Step‑by‑step fix

### 4.1 Correct the URL in the MCP Client node

https://your-domain.com/mcp/<trigger-id>/sse      ← ✅ correct
https://your-domain.com:5678/mcp/…                ← ❌ wrong
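As a sanity check, the URL rule above can be encoded in a tiny hypothetical helper (the function name and messages are my own, not part of n8n):

```python
from urllib.parse import urlparse

# Hypothetical helper: flags the two URL mistakes described above --
# an explicit :5678 behind HTTPS, and a missing /sse suffix.
def check_mcp_url(url: str) -> list[str]:
    problems = []
    parsed = urlparse(url)
    if parsed.scheme == "https" and parsed.port == 5678:
        problems.append("port 5678 speaks plain HTTP; drop the port and go through the proxy")
    if not parsed.path.endswith("/sse"):
        problems.append("SSE endpoint should end in /sse")
    return problems

print(check_mcp_url("https://your-domain.com:5678/mcp/abc/sse"))
print(check_mcp_url("https://your-domain.com/mcp/abc/sse"))  # -> []
```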

### 4.2 Adjust Nginx for the SSE path

  1. Edit /etc/nginx/sites-enabled/n8n (or your own server block) and paste the following inside the server {} block, alongside the main location /:
# --- SSE exception for MCP ---
location /mcp/ {
    proxy_pass         http://127.0.0.1:5678;
    proxy_http_version 1.1;
    proxy_set_header   Connection '';
    proxy_set_header   Host $host;

    proxy_buffering    off;
    proxy_cache        off;
    gzip               off;

    proxy_read_timeout 3600;
    proxy_send_timeout 3600;
}
# --- end SSE exception ---
  2. Validate & reload:
sudo nginx -t && sudo systemctl reload nginx

### 4.3 Quick verification

| Where                | Command                                                          | Expected outcome                    |
|----------------------|------------------------------------------------------------------|-------------------------------------|
| Inside the container | `wget -qO- http://localhost:5678/mcp/<trigger-id>/sse \| head`   | Streaming `event:` / `data:` lines  |
| Through Nginx (host) | `curl -vk https://your-domain.com/mcp/<trigger-id>/sse \| head`  | HTTP/2 200 and stream               |

5 Full Nginx server‑block template

server {
    listen 80;
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    # 1) General n8n traffic
    location / {
        proxy_pass         http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection "upgrade";
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }

    # 2) SSE exception for MCP
    location /mcp/ {
        proxy_pass         http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header   Connection '';
        proxy_set_header   Host $host;
        proxy_buffering    off;
        proxy_cache        off;
        gzip               off;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
}

6 Replicating the fix on a new server

  1. Deploy n8n in Docker:
docker run -d --name n8n -p 5678:5678 \
  -e N8N_HOST=your-domain.com \
  -e N8N_PROTOCOL=https \
  -e WEBHOOK_URL=https://your-domain.com/ \
  n8nio/n8n:latest
  2. Install Nginx + Certbot (Ubuntu):
sudo apt update && sudo apt install nginx certbot python3-certbot-nginx
sudo certbot --nginx -d your-domain.com
  3. Create /etc/nginx/sites-enabled/n8n.conf: copy the template above and change server_name and the cert paths.
  4. Validate and reload Nginx:
sudo nginx -t && sudo systemctl reload nginx
  5. In n8n, create the MCP Server Trigger and reference it in the MCP Client using the domain URL (no :5678).

7 Quick troubleshooting checklist

Work through each question:

  • Does the MCP Client URL use HTTPS without :5678?
  • Does location /mcp/ have gzip off and proxy_buffering off?
  • Does nginx -t report “syntax is ok”?
  • Does curl -vk https://your-domain.com/mcp/.../sse return 200?
  • Are the certificates valid (not expired)?
  • Are the N8N_HOST, N8N_PROTOCOL and WEBHOOK_URL env vars set correctly?
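The last item can be scripted. Here is a hypothetical helper (variable names taken from the docker run example above, everything else is my own sketch) that could run wherever the container’s environment is visible:

```python
import os

# Hypothetical checklist helper: verifies the three env vars are set
# and that WEBHOOK_URL's scheme matches N8N_PROTOCOL.
REQUIRED = ("N8N_HOST", "N8N_PROTOCOL", "WEBHOOK_URL")

def check_env(env=os.environ):
    issues = [f"{name} is not set" for name in REQUIRED if not env.get(name)]
    proto, url = env.get("N8N_PROTOCOL"), env.get("WEBHOOK_URL", "")
    if proto and url and not url.startswith(f"{proto}://"):
        issues.append("WEBHOOK_URL does not match N8N_PROTOCOL")
    return issues

print(check_env({"N8N_HOST": "your-domain.com",
                 "N8N_PROTOCOL": "https",
                 "WEBHOOK_URL": "https://your-domain.com/"}))  # -> []
```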

8 Helpful resources

  • Community thread: Error “Could not connect to your MCP server” – n8n forum
  • n8n docs → MCP Integration
  • Nginx docs → Serving Server‑Sent Events

Done. Use this guide to reproduce the fix anywhere and diagnose similar SSE/MCP issues in minutes.


@PhGeek Hey! Bro, can you please help me? I tried everything and it doesn’t work. In my case it is hosted by a friend. I have tried everything and I am still having the problem.

I ran into the same error. I use Nginx, and this is the solution I came up with:
I disabled GZIP compression only for MCP and left compression enabled for the rest of the connection. This is the configuration I used:

 # MCP-specific configuration
    location /mcp {
        proxy_pass http://localhost:3000/mcp; # user-defined port
        proxy_http_version 1.1;
        
        # Disable compression specifically
        proxy_set_header Accept-Encoding "";
        
        # WebSocket/SSE configuration
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        
        # Extended timeouts
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
        
        # Disable buffering
        proxy_buffering off;
        proxy_cache off;
        
        # Additional headers
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Main configuration (this block is part of the normal nginx configuration)
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        
        # Enable compression for other routes (add this)
        gzip on;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    }

Thanks to the discussion here I managed to find the solution, after understanding how it works by watching the video that @vorsters posted in this thread. Thanks, everyone!


@Joss603 please provide more details about your setup.

By default, you should not use HTTPS and port 5678 at the same time unless your server is configured to do so.

If you are using n8n inside Docker, simply change localhost:5678 to 127.0.0.1:5678 in the MCP Client tool and use the Production URL (not the Test URL). This will fix the issue.

Thank you @Pablo_Kinniburgh and @Cecilio_Matos: your answers pointed the way in the right direction for my problem… but not all the way.
Sharing here another approach, in case anyone is hosting their n8n instance on elest.io.
elest.io also seems to run NGINX and n8n in different containers.
I tried all the IPs / URLs given in the examples (127.0.0.1; host.docker.internal …), but that wouldn’t work, until I finally stumbled across 172.17.0.1 as Docker’s “default bridge gateway”.
I am far from being an expert, but this IP seems to allow communication between services running in different containers.
Long story short: using the following NGINX config in the elest.io security settings was the first step to success:

# 1) General n8n traffic
    location / {
    proxy_pass http://172.17.0.1:5678;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding off;
    proxy_read_timeout 3600;
    proxy_send_timeout 3600;
    }

    # 2) SSE exception for MCP
    location /mcp/ {
    proxy_pass http://172.17.0.1:5678;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_set_header Host $host;
    proxy_buffering off;
    proxy_cache off;
    gzip off;
    proxy_read_timeout 3600;
    proxy_send_timeout 3600;
    }

The next step was NOT to simply copy the URL of the MCP Server node into the Client node, but to also change the domain name to the Docker default bridge gateway IP. So:

The URL in the Server Node settings:

https://DOMAIN.vm.elestio.app/mcp-test/GOOGLE_CAL_MCP_SERVER/sse

changes to the following URL in the SSE setting of the Client node:

https://172.17.0.1/mcp/GOOGLE_CAL_MCP_SERVER/sse

With these settings, the n8n MCP Server and Client node finally worked in the elest.io hosting environment.

If someone is using nginx proxy manager as their reverse proxy, this is what worked for me to get the mcp finally connecting to cursor in the Advanced tab of the proxy host config:

location /mcp/ {
    proxy_pass         $forward_scheme://$server:$port;
    proxy_http_version 1.1;
    proxy_set_header   Connection '';
    proxy_set_header   Host $host;

    proxy_buffering    off;
    proxy_cache        off;
    gzip               off;

    proxy_read_timeout 3600;
    proxy_send_timeout 3600;
}

Hello,
After hours of testing I think I got the solution here.

I’m using Coolify, and my approach was splitting the n8n services into separate containers, as you can see here:

  1. Editor
  2. Worker
  3. Postgres
  4. plus Redis

So I made a Docker Compose file with this stack and BOOM, everything is working now.

Consider this: the wildcard domain was added later; I was on sslip.io before and did not have HTTPS.

I achieved this by launching n8n with Docker Compose and this configuration.

version: '3.8'
services:
  editor:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - SERVICE_FQDN_N8N_5678
      - 'N8N_EDITOR_BASE_URL=${SERVICE_FQDN_N8N}'
      - 'WEBHOOK_URL=${SERVICE_FQDN_N8N}'
      - 'N8N_HOST=${SERVICE_URL_N8N}'
      - 'GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-Europe/Berlin}'
      - 'TZ=${TZ:-Europe/Berlin}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${POSTGRES_DB:-n8n}'
      - DB_POSTGRESDB_HOST=postgresql
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=$SERVICE_USER_POSTGRES
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=$SERVICE_PASSWORD_POSTGRES
      - N8N_SECURE_COOKIE=false
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_METRICS=true
      - N8N_RUNNERS_ENABLED=true
      - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
    volumes:
      - 'n8n-data:/home/node/.n8n'
    depends_on:
      postgresql:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test:
        - CMD-SHELL
        - 'wget -qO- http://127.0.0.1:5678/'
      interval: 5s
      timeout: 20s
      retries: 10
    restart: unless-stopped
  worker:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - 'N8N_EDITOR_BASE_URL=${SERVICE_FQDN_N8N}'
      - 'WEBHOOK_URL=${SERVICE_FQDN_N8N}'
      - 'N8N_HOST=${SERVICE_URL_N8N}'
      - 'GENERIC_TIMEZONE=${GENERIC_TIMEZONE:-Europe/Berlin}'
      - 'TZ=${TZ:-Europe/Berlin}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${POSTGRES_DB:-n8n}'
      - DB_POSTGRESDB_HOST=postgresql
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=$SERVICE_USER_POSTGRES
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=$SERVICE_PASSWORD_POSTGRES
      - N8N_SECURE_COOKIE=false
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_METRICS=true
      - N8N_RUNNERS_ENABLED=true
    volumes:
      - 'n8n-data:/home/node/.n8n'
    depends_on:
      postgresql:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: worker
    restart: unless-stopped
  postgresql:
    image: 'postgres:16-alpine'
    volumes:
      - 'postgresql-data:/var/lib/postgresql/data'
    environment:
      - POSTGRES_USER=$SERVICE_USER_POSTGRES
      - POSTGRES_PASSWORD=$SERVICE_PASSWORD_POSTGRES
      - 'POSTGRES_DB=${POSTGRES_DB:-n8n}'
    healthcheck:
      test:
        - CMD-SHELL
        - 'pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}'
      interval: 5s
      timeout: 20s
      retries: 10
  redis:
    image: 'redis:7-alpine'
    restart: unless-stopped
    volumes:
      - 'redis-data:/data'
    healthcheck:
      test:
        - CMD-SHELL
        - 'redis-cli ping'
      interval: 5s
      timeout: 20s
      retries: 10
volumes:
  n8n-data: null
  postgresql-data: null
  redis-data: null

I’m not a big expert on this stuff, but trust me, it works!

Hope this can help you guys :slight_smile:

Feel free to report any weird or wrong stuff here; I need to learn more!

Francisco


But if I have SSL and a domain from Hostinger that starts with https, how do I fix the above issue with MCP?

Please, could someone help me? I’m trying the n8n trial and I can’t resolve the “Error in sub-node ‘MCP Client’”. I tried to fix the MCP Server Trigger URL but nothing worked.
This is my workflow: n8n.io - Workflow Automation

To everyone here….

THANK YOU!

My local n8n running in Docker works after I replaced the production URLs for each of the 6 MCPs in my workflow as follows:

From

https://[ngrok-url]/mcp/[guid]

To

http://localhost:5678/mcp/[guid]

and sometimes 
http://localhost:5678/mcp/[guid]/sse

This was driving me nuts. Thanks everyone!

A

Hello everyone,
I used this format and it worked fine for me: http://abc.abc.abc/mcp/{path}

My domain is served over HTTPS, but I used http here to make it work.
In the MCP Client node I used HTTP Streaming.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.