Current Behavior
When a Chat Model node is used as the Brain inside an AI Agent node:
- The chat model's response and metadata (e.g., token usage, intermediate steps) are not accessible from other nodes in the workflow.
- Referencing them via expressions (e.g., `{{ $json.tokenUsage }}`) results in the error: "No path back to the referenced node".
- This makes it very difficult to extract or reuse important information such as `tokenUsage` outside of the Agent context.
Problem
- Chat Model outputs (response, metadata, token usage, intermediate steps) are locked inside the Agent node.
- Token usage only reflects the first run, not the cumulative usage across all model runs within the Agent.
- Workarounds like schema reformatting or extra intermediate nodes don't fully solve this.
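To make the cumulative-usage gap concrete, this is roughly the aggregation users have to do by hand today (plain JavaScript, e.g. in a Code node). The shape of the `runs` array and its `tokenUsage` fields is an assumption for illustration, not n8n's documented output:

```javascript
// Sum token usage across all model runs within one Agent execution.
// The run/tokenUsage structure below is hypothetical.
function totalTokenUsage(runs) {
  return runs.reduce(
    (acc, run) => ({
      promptTokens: acc.promptTokens + (run.tokenUsage?.promptTokens ?? 0),
      completionTokens: acc.completionTokens + (run.tokenUsage?.completionTokens ?? 0),
      totalTokens: acc.totalTokens + (run.tokenUsage?.totalTokens ?? 0),
    }),
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 }
  );
}

// Example: two model runs inside a single Agent execution
const runs = [
  { tokenUsage: { promptTokens: 120, completionTokens: 40, totalTokens: 160 } },
  { tokenUsage: { promptTokens: 300, completionTokens: 80, totalTokens: 380 } },
];

const total = totalTokenUsage(runs);
// total = { promptTokens: 420, completionTokens: 120, totalTokens: 540 }
```

If the Agent surfaced this cumulative object directly, none of this manual summation would be needed.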
Proposed Solution
- Make Chat Model outputs globally accessible across the workflow, even if the model is embedded inside an AI Agent node.
  - Example: allow expressions like `{{ $('Chat Model').json.tokenUsage }}`.
- Expose intermediate steps in the output parser automatically, so they can be inspected and reused (not just hidden in execution logs).
- Ensure token usage reflects cumulative totals (across all model runs within an Agent), not just the first run.
Use Case
- Capture and log total token usage for monitoring and cost control.
- Feed intermediate steps and token usage into analytics pipelines.
- Build dashboards for per-user or per-workflow AI costs.
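As a sketch of the cost-dashboard use case: with cumulative usage accessible, a per-workflow dollar estimate becomes a one-liner downstream. The per-1K-token rates below are placeholders, not real provider pricing:

```javascript
// Hypothetical per-1K-token rates; substitute the actual provider pricing.
const PRICE_PER_1K = { prompt: 0.0025, completion: 0.01 };

// Turn a cumulative usage object into an estimated USD cost.
function estimateCostUSD(usage) {
  return (
    (usage.promptTokens / 1000) * PRICE_PER_1K.prompt +
    (usage.completionTokens / 1000) * PRICE_PER_1K.completion
  );
}

// Example: cumulative usage from one workflow execution
const cost = estimateCostUSD({ promptTokens: 420, completionTokens: 120 });
console.log(cost);
```

Grouping such estimates by user or workflow ID is all a cost dashboard needs once the usage numbers are reachable.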
Benefits
- Increases transparency and monitoring of AI workflows.
- Reduces reliance on fragile workarounds.
- Makes Chat Model output handling consistent with other nodes, whose outputs are already globally accessible.