Global Access to Chat Model Outputs (Token Usage, Metadata, etc.)

Current Behavior

When a Chat Model node is used as the Brain inside an AI Agent node:

  • The chat model’s response and metadata (e.g., token usage, intermediate steps) are not accessible from other nodes in the workflow.

  • Referencing them via expressions (e.g., {{ $json.tokenUsage }}) results in the error:

    “No path back to the referenced node”

  • This makes it very difficult to extract or reuse important information such as tokenUsage outside of the Agent context.

Problem

  • Chat Model outputs (response, metadata, token usage, intermediate steps) are locked inside the Agent node.

  • Token usage only reflects the first run, not the cumulative usage across all runs inside the Agent.

  • Workarounds like schema reformatting or extra intermediate nodes don’t fully solve this.

Proposed Solution

  1. Make Chat Model outputs globally accessible across the workflow, even if the model is embedded inside an AI Agent node.

    • Example: allow expressions like {{ $('Chat Model').json.tokenUsage }}.

  2. Expose intermediate steps in the output parser automatically, so they can be inspected and reused (not just hidden in execution logs).

  3. Ensure token usage reflects cumulative totals (across all model runs within an Agent), not just the first run.

Use Case

  • Capture and log total token usage for monitoring and cost control.

  • Feed intermediate steps + token usage into analytics pipelines.

  • Build dashboards for per-user or per-workflow AI costs.

Benefits

  • Increases transparency and monitoring of AI workflows.

  • Reduces reliance on fragile workarounds.

  • Makes Chat Model behavior consistent with other node outputs that are accessible globally.

Any resources to support this?

Are you willing to work on this?

This is mandatory if one wants to use n8n for real business and not just toy with it! Kudos to you for saying it before anyone else, and let's hope they implement this feature soon enough!
It is frustrating, but I think that Log Streaming (offered with the Enterprise plan) is currently the only reliable way of monitoring token usage.