AI Agent - Bypass Model Iteration After Tool Response

The idea is:

I would like an option where, once the AI Agent node receives the response from a tool, it forwards that response directly to the output instead of sending it back to the model for reanalysis.

My use case:

My AI Agent node sends a message to the MCP Client tool. The response from the MCP server is a very large text, often exceeding 27 pages. Since the MCP server already returns exactly the information I need, I would like the AI Agent node to forward this response directly to the output, without passing it back to the model for further analysis or processing.

I think it would be beneficial to add this because:

The AI Agent node would simply receive the tool's response (e.g., from MCP) and forward it directly to the output, without reprocessing it through the model. This avoids unnecessary latency, reduces token consumption, and prevents token-limit errors when handling large responses.
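To make the requested behavior concrete, here is a minimal sketch of an agent loop with a per-tool "return directly" flag. All names here (`Tool`, `run_agent`, `return_direct`, the fake model) are hypothetical illustrations of the concept, not the actual n8n AI Agent implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Tool:
    name: str
    func: Callable[[str], str]
    return_direct: bool = False  # if True, skip the follow-up model call

def run_agent(user_input: str,
              tools: Dict[str, Tool],
              model: Callable[[str], Tuple[str, str]]) -> str:
    # The model picks a tool and its input (stubbed as a single step here).
    tool_name, tool_input = model(user_input)
    tool = tools[tool_name]
    result = tool.func(tool_input)
    if tool.return_direct:
        # Forward the tool output as the final answer: no second model pass,
        # so no extra latency or token usage on a large response.
        return result
    # Default behavior: send the tool result back to the model for reanalysis.
    return f"model summary of: {result[:40]}"

# Usage: a fake MCP-style tool returning a large payload.
mcp_tool = Tool("mcp_search", lambda q: "PAGE CONTENT " * 1000, return_direct=True)
fake_model = lambda prompt: ("mcp_search", prompt)
answer = run_agent("find the report", {"mcp_search": mcp_tool}, fake_model)
```

With `return_direct=True`, the 27-page response would reach the output untouched; with the flag off, it would take the existing path back through the model.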

Any resources to support this?

Are you willing to work on this?