Hey!
Our current setup uses an AI Agent node connected to an OpenAI model.
By default, the AI Agent node only returns the raw text from the model response.
What we need is to retrieve both:
- the raw text output, and
- the model usage information, such as completionTokens, promptTokens, and totalTokens (i.e. per-model usage metrics across our project).
Is there a way to extract or access the usage data for each model when using the AI Agent node?
Thanks!
Hi @EricK_Daniel_RANDRIA
Welcome to the n8n community!
Usage metrics are only available in the model’s direct response. Since the AI Agent normalizes the output and does not expose this information, the only supported way to access usage metrics is to use the OpenAI Chat Model node directly.
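For reference, when the model node is queried directly, the usage metrics typically sit alongside the response in a tokenUsage object. The shape below (tokenUsage with completionTokens/promptTokens/totalTokens) is an assumption based on typical LangChain-style model output in n8n, not a guaranteed schema — treat this as a sketch you would adapt in a Code node:

```javascript
// Minimal sketch: reading usage metrics from a model node's raw output.
// The field names below are assumptions, not a guaranteed n8n schema.
const sampleModelOutput = {
  response: {
    generations: [[{ text: "Hello!" }]],
  },
  tokenUsage: {
    completionTokens: 12,
    promptTokens: 34,
    totalTokens: 46,
  },
};

// Pull the three counters out, defaulting to 0 if a field is missing.
function readUsage(output) {
  const usage = output.tokenUsage ?? {};
  return {
    completion: usage.completionTokens ?? 0,
    prompt: usage.promptTokens ?? 0,
    total: usage.totalTokens ?? 0,
  };
}

console.log(readUsage(sampleModelOutput));
// { completion: 12, prompt: 34, total: 46 }
```

The point is simply that the counters are only present on the model node's own output item, which is why the Agent node (which normalizes everything down to text) drops them.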
Hello @EricK_Daniel_RANDRIA, welcome!
At the moment, there is no native/direct method to get the AI model token usage.
Since this topic has been discussed a lot here, there is a workaround using the n8n API to retrieve these values from the execution data.
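The idea behind that workaround can be sketched like this: fetch the execution via the n8n public API (GET /api/v1/executions/{id}?includeData=true) and walk the returned runData, summing every tokenUsage object found. The sample structure below is illustrative — the exact nesting of tokenUsage inside the execution data can vary by node and n8n version, so treat this as a starting point rather than a finished implementation:

```javascript
// Sketch: recursively collect tokenUsage totals from an execution-data object.
// sampleExecution loosely mimics what GET /api/v1/executions/{id}?includeData=true
// returns; the exact nesting is an assumption and may differ in your version.
const sampleExecution = {
  data: {
    resultData: {
      runData: {
        "OpenAI Chat Model": [
          { data: { tokenUsage: { promptTokens: 100, completionTokens: 20, totalTokens: 120 } } },
        ],
        "OpenAI Chat Model 2": [
          { data: { tokenUsage: { promptTokens: 50, completionTokens: 10, totalTokens: 60 } } },
        ],
      },
    },
  },
};

// Walk any nested object/array and sum every tokenUsage block encountered.
function sumTokenUsage(node, totals = { promptTokens: 0, completionTokens: 0, totalTokens: 0 }) {
  if (Array.isArray(node)) {
    node.forEach((item) => sumTokenUsage(item, totals));
  } else if (node && typeof node === "object") {
    for (const [key, value] of Object.entries(node)) {
      if (key === "tokenUsage" && value && typeof value === "object") {
        totals.promptTokens += value.promptTokens ?? 0;
        totals.completionTokens += value.completionTokens ?? 0;
        totals.totalTokens += value.totalTokens ?? 0;
      } else {
        sumTokenUsage(value, totals);
      }
    }
  }
  return totals;
}

console.log(sumTokenUsage(sampleExecution));
// { promptTokens: 150, completionTokens: 30, totalTokens: 180 }
```

Walking the whole tree (instead of hard-coding a path) keeps the extractor working even when the model nodes sit under different names or depths per workflow.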
Here is the workflow:
Also worth checking out:
I actually solved the problem by adapting @solomon ‘s solution for my case.
What I did was simply add two nodes right after the Agent node (Edit Fields > HTTP Request, because I was having some trouble with the Execute Sub-workflow node), as you can see below:
[image]
The Edit Fields node just captures the execution_id of the active workflow using {{ $execution.id }}.
The HTTP Request node does a POST to {your_n8n_url}/webhook/log-tokens with the following JSON body (in MY case, I also send …
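To make the shape of that call concrete, here is a minimal sketch of building the JSON body sent to the log-tokens webhook. The field names (executionId, workflowName, loggedAt) are illustrative placeholders, not the exact schema used above:

```javascript
// Sketch: building the JSON body for the log-tokens webhook POST.
// Field names are illustrative placeholders, not the author's exact schema.
function buildLogTokensBody(executionId, workflowName) {
  return JSON.stringify({
    executionId,  // comes from {{ $execution.id }} set by the Edit Fields node
    workflowName, // optional extra context for the logging workflow
    loggedAt: new Date().toISOString(),
  });
}

console.log(buildLogTokensBody("12345", "my-agent-workflow"));
```

The receiving webhook workflow can then use the executionId to call the n8n API and pull the usage metrics out of the execution data.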
Hi @EricK_Daniel_RANDRIA. There is currently no official way to do that, but a somewhat hacky workaround you can try is something like this:
This might work, so cheers!