Live output of the tools AI agents are calling (with a separate output or sub-workflow call)

The idea is:

Create an agentic tool output, or an additional setting that allows tool calls within AI agents to emit a separate workflow output or a live notification (a webhook, or even a call to a separate workflow). In the latest guidelines published by Linear, AI agents should be transparent about the internal tools they call, and one drawback of using n8n in agentic workflows is the lack of live visibility into which tools are being called. This transparency is present in ChatGPT and in the primary UIs of other leading LLM providers.
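To make the idea concrete, here is a minimal sketch of what such a live notification might look like if streamed as Server-Sent Events (SSE). The event names and payload fields (`tool-call-started`, `toolName`, etc.) are my own assumptions for illustration, not part of n8n's actual API:

```typescript
// Hypothetical event shape for a live tool-call notification.
// Field names are assumptions, not n8n's real interface.
type ToolCallEvent = {
  type: "tool-call-started" | "tool-call-finished";
  toolName: string;
  timestamp: string;
};

// Format an event as an SSE frame, so a custom UI or a Slack
// bridge subscribed to the stream can render it live.
function toSseFrame(event: ToolCallEvent): string {
  return `event: ${event.type}\ndata: ${JSON.stringify(event)}\n\n`;
}

const frame = toSseFrame({
  type: "tool-call-started",
  toolName: "search_web",
  timestamp: new Date().toISOString(),
});
console.log(frame);
```

A consuming UI would listen with a standard `EventSource` and show "calling search_web…" to the user while the agent works. The same event could just as easily be POSTed to a webhook or handed to a separate workflow.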

My use case:

I run a marketing agency, and we work with a suite of AI agents. We usually use Slack, but whenever we use our n8n AI agents there's a distinct lack of transparency about which tools are being called, which leaves a hole for general-purpose use cases and customer-facing agentic tools.

I think it would be beneficial to add this because:

I want to be able to diagnose issues without going directly into n8n, and to see how workflows are operating, especially when I'm using custom UIs or tools like Slack.

Any resources to support this?

Linear AI Agent guidelines on transparency → Agent Interaction Guidelines (AIG) – Linear Developers

Are you willing to work on this?

More than happy to help with feedback and user-experience guidance. I've got a bit of a coding background, but not that much lol.

Exactly the issue I am having. Thanks for bringing this up.

I have created a PR → feat: Stream AI agent tool calls and node execution via SSE by sarahsimionescu · Pull Request #20499 · n8n-io/n8n · GitHub

Would love to have official support from the n8n team on this!


Is there a community node in mind for this before it gets approved and implemented?

I consider this essential for building a smoothly functioning agent that can collaborate effectively in a chat format, without leaving the user with no response or reaction for a long time.

Structuring the workflow with sub-agents this way would create a "thinking" mechanism that can be shared with the user.

I'm eagerly awaiting reactions.