Solving “Can’t Connect to the MCP Server” Error in n8n on CapRover
Context
I encountered the “can’t connect to the MCP server” error in n8n while running it on a CapRover deployment. Specifically, I had a workflow using an AI agent that called an MCP server (configured via the MCP Server Trigger in another workflow) through the MCP Client Tool. The error occurred consistently, preventing the workflows from communicating properly.
After some investigation, I suspected the issue was related to NGINX’s gzip compression interfering with Server-Sent Events (SSE), which the MCP server relies on. CapRover, which uses NGINX as its reverse proxy, seemed to be the culprit, but finding the right place to apply the fix was tricky.
Problem
The error message was:
`McpClientTool: Failed to connect to MCP Server`
The AI agent would enter a loop, unable to establish a connection with the MCP server. The issue appeared to stem from NGINX’s default configuration in CapRover, which likely applied gzip compression or lacked proper SSE support for the MCP routes. This caused the SSE connection to fail, resulting in the error.
Environment
- n8n Version: Latest (as of June 2025)
- Deployment: CapRover on a cloud server
- Operating System: Ubuntu (via CapRover’s Docker setup)
- MCP Setup: MCP Server Trigger in one workflow, MCP Client Tool in another
Solution
The fix involved customizing the NGINX configuration for the n8n app in CapRover to disable gzip compression and ensure proper SSE support for MCP routes. Here’s how I resolved it:
Steps
- Access the CapRover Dashboard: Log into your CapRover dashboard (e.g., `https://captain.yourdomain.com`).
- Edit the n8n App’s NGINX Config:
  - Navigate to the “Apps” section and select your n8n app.
  - Find the “Edit Default Nginx config” option in the app’s settings.
- Add MCP-Specific Configuration:
  - CapRover provides a default NGINX template with placeholders like `<%-s.publicDomain%>`. You need to add a `location /mcp/` block to handle MCP routes.
  - Insert the following block after the `location /` block but before the Let’s Encrypt and CapRover health-check locations (`/.well-known/acme-challenge/` and `/.well-known/captain-identifier`).
  - Here’s the modified portion of the NGINX config (only the relevant `server` block is shown, for brevity):
```nginx
server {
    <%
    if (!s.forceSsl) {
    %>
        listen 80;
    <%
    }
    if (s.hasSsl) {
    %>
        listen 443 ssl http2;
        ssl_certificate <%-s.crtPath%>;
        ssl_certificate_key <%-s.keyPath%>;
    <%
    }
    %>

    client_max_body_size 500m;
    server_name <%-s.publicDomain%>;

    resolver 127.0.0.11 valid=10s;
    set $upstream http://<%-s.localDomain%>:<%-s.containerHttpPort%>;

    location / {
        <% if (s.redirectToPath) { %>
            return 302 <%-s.redirectToPath%>$request_uri;
        <% } else { %>
            <% if (s.httpBasicAuthPath) { %>
                auth_basic "Restricted Access";
                auth_basic_user_file <%-s.httpBasicAuthPath%>;
            <% } %>
            proxy_pass $upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            <% if (s.websocketSupport) { %>
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_http_version 1.1;
            <% } %>
        <% } %>
    }

    # Custom configuration for MCP routes
    location /mcp/ {
        gzip off;
        proxy_pass http://<%-s.localDomain%>:<%-s.containerHttpPort%>;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # Used by Let's Encrypt
    location /.well-known/acme-challenge/ {
        root <%-s.staticWebRoot%>;
    }

    # Used by CapRover for the health check
    location /.well-known/captain-identifier {
        root <%-s.staticWebRoot%>;
    }

    error_page 502 /captain_502_custom_error_page.html;
    location = /captain_502_custom_error_page.html {
        root <%-s.customErrorPagesDirectory%>;
        internal;
    }
}
```
- Save and Update:
  - Paste the modified configuration into the “Edit Default Nginx config” field.
  - Click “Save Configuration & Update” to apply the changes and restart the n8n app.
- Test the Workflow:
  - Re-run the workflow with the MCP Client Tool and MCP Server Trigger. The error should no longer appear, and the connection should work.
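Before re-testing in n8n, you can also confirm from the command line that the MCP route is no longer being compressed. A minimal sketch in shell — the URL is a placeholder for your own domain, and `check_gzip` is just a helper name introduced here:

```shell
# check_gzip reads raw HTTP response headers on stdin and reports
# whether NGINX is still applying gzip compression to the route.
check_gzip() {
  if grep -qi '^content-encoding:.*gzip'; then
    echo "gzip still enabled"
  else
    echo "no gzip"
  fi
}

# Usage against a live deployment (placeholder URL):
#   curl -s -N -o /dev/null -D - -H "Accept: text/event-stream" \
#     "https://n8n.yourdomain.com/mcp/" | check_gzip
```

Here `-N` disables curl’s own output buffering and `-D -` dumps the response headers to stdout; after the fix, the check should report “no gzip”.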
Why This Works
- Disabling Gzip: The `gzip off;` directive prevents NGINX from compressing responses on the MCP routes. This is critical because gzip buffers output before compressing it, which breaks the incremental delivery that SSE depends on.
- SSE Support: `proxy_http_version 1.1;`, `proxy_set_header Connection "";`, `proxy_buffering off;`, and `proxy_cache off;` ensure NGINX streams responses through immediately and maintains the persistent connection that SSE requires.
- Timeout Settings: `proxy_read_timeout 3600s;` and `proxy_send_timeout 3600s;` allow long-lived SSE connections to stay open for up to an hour, instead of being cut off by NGINX’s default 60-second proxy timeouts.
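For context on what the proxy must not disturb: SSE is a plain-text protocol in which each event’s payload arrives on `data:` lines terminated by a blank line, so any intermediary that buffers or compresses the stream holds those lines back. A small sketch of what a client reads off the connection (`sse_payloads` is a name introduced here; the URL is a placeholder):

```shell
# sse_payloads reads a text/event-stream on stdin and prints just the
# event payloads. Each SSE event is one or more "data:" lines followed
# by a blank line; a buffering proxy delays exactly these lines, which
# is what the gzip/proxy_buffering directives above prevent.
sse_payloads() {
  sed -n 's/^data: *//p'
}

# Usage against a live stream (placeholder URL):
#   curl -sN -H "Accept: text/event-stream" \
#     "https://n8n.yourdomain.com/mcp/" | sse_payloads
```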
Additional Notes
- Verify MCP Routes: If your MCP server uses different paths (e.g., `/sse` or `/webhook/*`), adjust the `location /mcp/` path in the config accordingly.
- EXECUTIONS_MODE: Ensure n8n’s `EXECUTIONS_MODE` is set to `regular` (not `queue`), so the same instance that holds the SSE connection also runs the execution. You can check this in your n8n configuration (e.g., the `.env` file or the container’s environment variables).
- Troubleshooting: If the issue persists, check the n8n logs for detailed errors, or verify the compiled NGINX config in CapRover’s Docker environment (`/captain/generated/nginx`).
- Resources: I found inspiration in the n8n community posts (this thread) and CapRover’s NGINX customization docs (here).
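To confirm the custom block actually made it into the compiled config, you can grep the generated files on the CapRover host. A sketch, assuming the `/captain/generated/nginx` path mentioned above (`find_mcp_block` is a helper name introduced here; pass a different directory if your installation differs):

```shell
# find_mcp_block searches a directory of compiled NGINX configs for the
# custom /mcp/ location added in the steps above. The default directory
# is where CapRover writes its generated configs.
find_mcp_block() {
  grep -rn -A 7 'location /mcp/' "${1:-/captain/generated/nginx}"
}

# Usage on the CapRover host:
#   find_mcp_block
```

If the grep returns nothing, the “Save Configuration & Update” step did not take effect and the app needs to be redeployed.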
Conclusion
This solution resolved the “can’t connect to the MCP server” error for me, and I hope it helps others facing similar issues with n8n on CapRover. If you have questions or run into problems, feel free to share details, and I’ll do my best to assist!
Happy automating!