Need Help: MCP Server Trigger Not Working (curl: (18) error)

Hi everyone,

I’m having trouble getting the MCP Server Trigger node to work properly — even with a minimal setup.


Steps to Reproduce:

  1. Add an empty MCP Server Trigger node to a new workflow
  2. Copy the Test URL
  3. Click “Test Step” so the node starts listening
  4. Run the following command: curl <Test URL>

Expected Behavior:

I expected the connection to remain open or stream messages (since it’s SSE-based).


Actual Result:

After a long pause, I received this output:

event: endpoint
data: /mcp-test/4590a747-4dbc-48cd-b651-60a7d9a0a83c/messages?sessionId=66e7c2f4-c99d-41d6-bb7a-0da3f8c5833f

curl: (18) transfer closed with outstanding read data remaining

What I’ve Tried:

  • Re-tested multiple times from scratch
  • Tried bypassing Nginx/SSL using curl http://127.0.0.1:5678/mcp-test/...

Still received the same curl: (18) error.

Question:

Has anyone successfully used the MCP Server Trigger with SSE?
Am I missing a configuration step to keep the connection open or respond properly?

Any help would be greatly appreciated :pray:

Thanks!


My Setup

  • n8n version: 1.89.2
  • Database: PostgreSQL
  • Running via: npm
  • OS: Debian

Hello Tungsten,
I hope you are well!

Did you create the MCP Server workflow first, and then build a second workflow with the MCP Client?
And did you paste the MCP Server's test URL into the MCP Client node in that workflow?

I ask because, regarding your question, we have the following scenario:

The error curl: (18) transfer closed with outstanding read data remaining is a classic symptom that curl, by default, does not correctly handle the continuous streaming nature of the Server-Sent Events (SSE) that the MCP Trigger uses.

Receiving the event: endpoint message is a good sign: it shows that the initial connection is established and n8n sends its first SSE control message. The problem is that, shortly after, curl treats the transfer as finished (or the server closes it), even though SSE is designed to keep the connection open.

curl needs to be told not to wait for a traditional end-of-file and to process the data as it arrives. The -N (or --no-buffer) flag exists for exactly this.

In your terminal, re-run the curl command with the -N flag added:

curl -N <Test URL>

Replace <Test URL> with the test URL copied from the MCP Trigger node.

After running this command (and with n8n in "Listening…" mode from the "Test Step"), the connection should stay open in your terminal, waiting for any events n8n sends; curl will no longer exit right after the endpoint event. To stop it, press Ctrl+C.

There may also be clues in the n8n logs, so check the logs of your n8n Docker container or the n8n service right after connecting with curl -N. Look for errors or messages related to the MCP Trigger or the connection being terminated.

Hope this helps.
Best regards

Hi @interss,

Thanks for your quick response.

I tested the MCP Server node by dropping it into a workflow and using curl, expecting that the “Test Step” feature would provide some indication (like other trigger nodes). However, there was no reaction.

Based on your suggestion, I ran a few more tests:


:white_check_mark: Test A: Using curl

Command:

curl -N <sse_url>

Result:

event: endpoint
data: /mcp-test/f5b28055-b062-463a-a02c-ee1b72d25a00/messages?sessionId=eb39a8a9-4a63-40ed-9dcb-7d3bab0f2bf9

curl: (18) transfer closed with outstanding read data remaining
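For reference, the two lines curl printed before the error are a well-formed SSE message ("event:" and "data:" fields, terminated by a blank line). A minimal sketch of parsing such a message in Python (a hypothetical helper, not part of n8n), using the payload from Test A:

```python
def parse_sse_message(raw: str) -> dict:
    """Parse one Server-Sent Events message (lines of "field: value",
    terminated by a blank line) into a dict of its fields."""
    fields = {"event": "message", "data": []}
    for line in raw.splitlines():
        if not line or line.startswith(":"):
            # Blank lines end a message; ":"-prefixed lines are comments.
            continue
        field, _, value = line.partition(":")
        value = value.lstrip(" ")
        if field == "data":
            fields["data"].append(value)
        elif field in ("event", "id", "retry"):
            fields[field] = value
    fields["data"] = "\n".join(fields["data"])
    return fields


msg = parse_sse_message(
    "event: endpoint\n"
    "data: /mcp-test/f5b28055-b062-463a-a02c-ee1b72d25a00/messages"
    "?sessionId=eb39a8a9-4a63-40ed-9dcb-7d3bab0f2bf9\n"
)
```

So the server did answer correctly; the question is why the stream closed afterwards.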

:white_check_mark: Test B: Using UI tool (hoppscotch.io)

Steps:

  • Go to Realtime → SSE
  • Enter the SSE URL
  • Start connection

Result:

Error: This browser doesn't seem to support Server-Sent Events

:white_check_mark: Test C: n8n Internal Workflow Test

  1. MCP Server Workflow:
  • Contains a simple getDate tool node
  • Activated and running
  2. MCP Client Workflow:
  • Uses an AI Agent node
  • MCP Client node configured with the Test URL
  • Triggered via Chat

Result:

[ERROR: Error in sub-node MCP Client]
Could not connect to your MCP server



:mag: n8n Logs (with debug enabled)

20:40:48.079   debug   Deleting transport for ab4da40d-55b6-4158-84aa-5fb202215625 { "file": "McpServer.js" }
20:40:48.080   debug   Closing MCP Server { "file": "McpServer.js", "function": "server.onclose" }

I’m unsure if the MCP Server is functioning as expected, or if I’ve misconfigured something on my end. Would appreciate any clarification!

Thanks again for your support :pray:

Update with workflow’s code

Workflow with MCP Server

Workflow with MCP Client

You have configured an MCP Server trigger with the path f5b28055-b062-463a-a02c-ee1b72d25a00.

The endpoint defined in the MCP Client (https://workflow.imtung.com/mcp/f5b28055-b062-463a-a02c-ee1b72d25a00/sse) may be incorrect (structure or connection).

Fix these points, restart each node, then run the workflow. Remember to save and activate the workflow.

Update: Problem Solved :white_check_mark:

I’ve found the root cause and got it working!

Initially, I tested the MCP Client using a direct local address:

http://127.0.0.1:5678/mcp/7038225b-592b-428e-947c-d8889f54ee41/sse

The connection worked perfectly — which led me to realize the issue was with the Nginx reverse proxy setup.

After tweaking the Nginx config and adding the following SSE-specific settings, the full HTTPS URL also worked:

# SSE-specific fixes
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;

# Prevent buffering & compression
gzip off;
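For context, these directives belong inside the location block that proxies to n8n. A sketch of a full block, assuming the common setup where n8n listens on 127.0.0.1:5678 (the location path and the upgrade/header lines are illustrative and may differ from your setup):

```nginx
location / {
    proxy_pass http://127.0.0.1:5678;

    # HTTP/1.1 with upgrade headers for WebSocket/SSE connections
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    # SSE-specific fixes
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;

    # Prevent buffering & compression
    gzip off;
}
```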

Turns out it was a simple proxy buffering issue — silly me for not checking sooner.

Thanks again for your support and patience!

