Separate internal agent reasoning/tool calls from streamed response output
Subcategory: node (AI Agent)
The idea is:
Add event type separation to AI Agent streaming output. When streaming is enabled, the output should distinguish between:
- `response` - actual AI text meant for users
- `tool_call` - internal tool-invocation logs
- `reasoning` - agent thinking/planning steps
This could be exposed via a `metadata.eventType` field on each streamed JSON object, or as a node option such as "Stream only final response."
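A minimal sketch of what the streamed objects could look like under this proposal. The type names and the `metadata.eventType` field are illustrative only; n8n does not currently emit anything like this.

```typescript
// Hypothetical shape for the proposed streamed events (illustrative only;
// not an existing n8n API).
type StreamEventType = "response" | "tool_call" | "reasoning";

interface StreamEvent {
  content: string;
  metadata: { eventType: StreamEventType };
}

// Example events a client might receive over a single streamed turn:
const events: StreamEvent[] = [
  { content: "Planning: search the vector store first.", metadata: { eventType: "reasoning" } },
  { content: 'Calling fallback_search with input: {"input":"user query here"}', metadata: { eventType: "tool_call" } },
  { content: "Here is what I found: ...", metadata: { eventType: "response" } },
];
```

A client could then render only `response` events in the chat window and use the other two to drive loading states or debug logs.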
My use case:
I’m building a SaaS with an AI chatbot that uses multiple tools (vector database search, internet search, API calls, etc.). When tools are called, internal LangChain logs stream directly to users:
```
Calling fallback_search with input: {"input":"user query here"}
```
This raw text appears in the chat mixed with actual responses, completely breaking the user experience. I'm forced to build fragile regex filters in edge functions to strip these logs, and they break whenever the log format changes slightly.
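To make the contrast concrete, here is a sketch of the two approaches side by side. The event shape is hypothetical (nothing like it exists in n8n today), and the regex is just representative of the kind of brittle pattern-matching the workaround requires.

```typescript
// Hypothetical chunk shape, assuming streamed events carried an eventType.
interface Chunk {
  content: string;
  metadata: { eventType: "response" | "tool_call" | "reasoning" };
}

// Today's fragile approach: regex-strip internal logs out of mixed text.
// This breaks the moment LangChain changes its log wording.
function stripWithRegex(raw: string): string {
  return raw.replace(/^Calling \w+ with input: \{.*\}$/gm, "");
}

// With typed events: keep only user-facing text, no string matching at all.
function userFacingText(chunks: Chunk[]): string {
  return chunks
    .filter((c) => c.metadata.eventType === "response")
    .map((c) => c.content)
    .join("");
}
```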
I think it would be beneficial to add this because:
- Production-ready streaming - Any chat application needs clean output without internal debug logs
- Event-driven handling - Developers can show tool-specific loading states, log calls separately, or hide them entirely
- API parity - OpenAI, Anthropic, and Gemini all separate tool calls from text in their streaming APIs
- Eliminates workarounds - No more regex filtering in middleware that breaks unpredictably
- Better debugging - Developers can still access tool calls and reasoning when needed, without polluting user-facing output
Any resources to support this?
- OpenAI Streaming API: tool calls are sent as `delta.tool_calls`, separate from `delta.content`
- Anthropic API: uses `content_block` events with distinct `type: "text"` vs `type: "tool_use"`
- Google Gemini: separates `functionCall` objects from text parts
- LangChain itself has callbacks that distinguish these events - n8n just needs to expose them
- Vercel AI SDK: provides separate streams for text and tool calls
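As an illustration of the OpenAI-style split above, here is a sketch that routes simulated delta objects. The `Delta` interface is a simplified stand-in mirroring the documented `delta.content` / `delta.tool_calls` separation (the real streamed shape also carries indices, IDs, and incrementally streamed arguments); the point is that text and tool calls never have to be disentangled with regexes.

```typescript
// Simplified stand-in for OpenAI-style streaming deltas: user-facing text
// and tool calls arrive on separate fields, so routing is structural.
interface Delta {
  content?: string | null;
  tool_calls?: { function: { name: string; arguments: string } }[];
}

function route(deltas: Delta[]): { text: string; toolCalls: string[] } {
  const text: string[] = [];
  const toolCalls: string[] = [];
  for (const d of deltas) {
    if (d.content) text.push(d.content); // user-facing tokens go to the chat UI
    for (const tc of d.tool_calls ?? []) {
      toolCalls.push(tc.function.name); // internal: log, or show a tool-specific spinner
    }
  }
  return { text: text.join(""), toolCalls };
}
```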
Are you willing to work on this?
Yes, I’m actively building with n8n AI Agents in production and happy to test any implementation. This is a blocker for anyone shipping real chat applications with tool-using agents. I can provide detailed feedback.