MCP token efficiency

Hey,

I'm struggling with the relatively new MCP feature (calling workflows with LLMs and custom webhook requests). It works perfectly fine with simple workflows, where the LLM can easily read an Agent's output, or maybe even three or four of them.

I have a more complex workflow, though, which I want to use with the MCP feature in Claude (the installable version). When I trigger the workflow with a custom webhook from a message to Claude, the first few nodes run fine. But my workflow contains a lot of HTTP Request and scraping nodes plus one or two API calls, and although the workflow's final output is just a small JSON array, Claude tries to read and summarize or understand every single HTTP request and node output. After one or two HTTP requests it gets stuck endlessly summarizing the chat, because the nodes' output makes Claude's input too large.

I only want to call the workflow and get the output of the last node; I don't need Claude to see every single node's output via MCP.

Is there an upcoming (or existing) solution for this problem? A perfect solution for me would be an option that lets Claude call and execute the workflow, but only see the last node's (or selected nodes') outputs and know when the workflow has finished executing. Is that possible?

Regards

Welcome to the community :tada:

I haven't heard of any built-in n8n setting for controlling which node outputs are sent to an LLM over MCP. The key is to modify your workflow so that its final output for Claude contains only the essential data.
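Until a built-in option exists, one general pattern is to strip the execution result down to the final node's data before it ever reaches the LLM, for example in a small wrapper between the workflow and Claude. Here is a minimal sketch of that idea; the payload shape (`nodes`, `output`) is hypothetical and not n8n's actual execution format:

```python
# Hypothetical execution result: every node's output is present,
# but only the last node's data should be forwarded to the LLM.
execution = {
    "nodes": [
        {"name": "HTTP Request", "output": {"html": "<html>...huge scraped markup...</html>"}},
        {"name": "Scrape Details", "output": {"raw": "thousands of tokens of intermediate text"}},
        {"name": "Format Result", "output": {"items": [{"title": "Example", "price": 19.99}]}},
    ]
}

def final_output_only(execution: dict) -> dict:
    """Return only the last node's output, dropping all intermediate node data."""
    return execution["nodes"][-1]["output"]

# Only this small JSON object would be handed to Claude.
print(final_output_only(execution))
```

The same filtering idea applies regardless of where the wrapper lives (a proxy, a Code node at the end of the workflow, or a custom MCP server): the intermediate scraping payloads never enter the model's context, only the compact final result does.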

I know, but the final output IS perfect for Claude; as I already said, it's only a small JSON array. That doesn't help me, though, because EVERY node's output gets picked up by the LLM. If there really is no workaround for this, it's a critical feature to add, because otherwise n8n isn't practically usable with MCP.

Yes, the instance-level MCP should only expose the final output, not the raw output and metadata from every workflow step. Sending everything is wildly inefficient from a token-usage perspective, potentially confusing for the LLM, and has worrying security implications. It should be classified as a bug.

Bump. This is a really important bug; otherwise the MCP feature is close to useless…

Hello, admins, please watch this topic.