AI Agent Node - extract model usage

Hi!

Our current setup uses an AI Agent node connected to an OpenAI model.

[Screenshot: workflow "test Cout" in n8n, showing an AI Agent node connected to an OpenAI Chat Model]

By default, the AI Agent node only returns the raw text from the model response.

What we need is to retrieve both:

  • the raw text output, and

  • the model usage information, such as completionTokens, promptTokens, and totalTokens (i.e. per-model usage metrics across our project).

Is there a way to extract or access the usage data for each model when using the AI Agent node?

Thanks!

The AI Agent node does not expose token usage in its standard output. Workarounds:

  1. Use an HTTP Request node to call the OpenAI API directly – the response includes a usage object.
  2. Check `$('AI Agent').item.json.$metadata`, which may contain usage data in some n8n versions.
  3. Use LiteLLM as a proxy in front of your LLM for centralized token tracking.
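For option 1, a minimal sketch of how you might pull the text and the `usage` object out of a Chat Completions response, e.g. in a Code node placed after the HTTP Request node. The field names (`prompt_tokens`, `completion_tokens`, `total_tokens`) follow the OpenAI API; the `extractUsage` helper and the sample response are illustrative, not part of n8n itself.

```javascript
// Hypothetical helper: given a parsed OpenAI Chat Completions response,
// return the raw text plus the token-usage metrics in one object.
function extractUsage(response) {
  const usage = response.usage ?? {};
  return {
    text: response.choices?.[0]?.message?.content ?? '',
    promptTokens: usage.prompt_tokens ?? 0,
    completionTokens: usage.completion_tokens ?? 0,
    totalTokens: usage.total_tokens ?? 0,
  };
}

// Trimmed example of the shape returned by POST /v1/chat/completions:
const sample = {
  choices: [{ message: { role: 'assistant', content: 'Hello!' } }],
  usage: { prompt_tokens: 12, completion_tokens: 3, total_tokens: 15 },
};

console.log(extractUsage(sample));
```

In an n8n Code node you would replace `sample` with the HTTP Request node's output (e.g. `$input.item.json`) and return the result as the item's JSON, so downstream nodes see both the text and the per-call token counts.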