The following is the JSON output of the chat module of a standard AI Agent node.
I'd like the output of my agent to include model_name and tokenUsage so that the user gets immediate feedback about the cost of each message.
How do I access this data? I wasn’t able to find a suitable object in subsequent nodes or even in the system message of the agent node.
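For context, this is roughly the shape I'm hoping to read out and the summary I want to show the user. The field names below (model_name, tokenUsage, promptTokens, completionTokens, totalTokens) are assumptions based on what LangChain-backed model nodes typically emit; the actual keys in your execution data may differ:

```javascript
// Assumed shape of the model sub-node's execution data (not confirmed
// against any specific n8n version -- inspect your own execution output).
const sampleExecutionData = {
  model_name: "gpt-4.1-mini-2025-04-14",
  tokenUsage: { promptTokens: 120, completionTokens: 45, totalTokens: 165 },
};

// Build a one-line cost/usage summary to append to the chat response.
function summarizeUsage(data) {
  const { model_name, tokenUsage } = data;
  return `${model_name}: ${tokenUsage.totalTokens} tokens ` +
         `(${tokenUsage.promptTokens} prompt + ${tokenUsage.completionTokens} completion)`;
}

console.log(summarizeUsage(sampleExecutionData));
// gpt-4.1-mini-2025-04-14: 165 tokens (120 prompt + 45 completion)
```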
Thanks @mohamed3nan .
As I was looking for the actual model used (i.e. something like gpt-4.1-mini-2025-04-14), I took the execution-data route, but that ended really badly.
Also, even worse, the variables that are available inside n8n were not available to me when sent to a hosted chat, which is very strange (I expected them to become regular text the moment I used them).
There are a few steps you need to follow to make it work:
Get the execution ID of the workflow using {{$execution.id}}, and pass it to the n8n API.
In the workflow settings, make sure the Save Execution Progress option is set to Save (by default, it is set to “Do not save”). This is required to retrieve results from nodes while the workflow is still in progress.
The workflow must be Active (it won’t work in test mode).
Here is a working example of the workflow that retrieves the model/token details:
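For anyone who wants to see the API-call step in isolation: a minimal sketch of the request an HTTP Request (or Code) node would make, assuming the v1 public REST API path and the X-N8N-API-KEY header. The base URL and key are placeholders for your own instance; inside the workflow, the execution ID comes from {{$execution.id}}:

```javascript
// Sketch: build the request for GET /api/v1/executions/{id}.
// includeData=true asks the API to return the full per-node output data,
// which is where the model/token details live.
function buildExecutionRequest(baseUrl, executionId, apiKey) {
  return {
    url: `${baseUrl}/api/v1/executions/${executionId}?includeData=true`,
    method: "GET",
    headers: { "X-N8N-API-KEY": apiKey, Accept: "application/json" },
  };
}

// Example with placeholder values -- substitute your instance URL and key.
const req = buildExecutionRequest("https://n8n.example.com", "12345", "my-key");
console.log(req.url);
// https://n8n.example.com/api/v1/executions/12345?includeData=true
```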
@mohamed3nan thanks a lot for this detailed answer. I think I did everything you suggested but I will make sure I didn’t miss anything.
In any case, I also reached a state where I can see the data in n8n. The weird thing is that it is not sent to the chat.
Hey @mohamed3nan , thanks again for the kind help! I believe we’re either on to a bug or I missed something else.
I used your workflow and changed the following things:
I added an output field to the last node so it will display the response in the chat
I made the chat public (and thus activated the workflow)
I configured Save Execution Progress as you suggested
Here’s my revised workflow and the screenshot of the results. You can see that the public chat doesn’t display the values in the chat box, but the built-in one does:
My best guess is that the "Save execution progress" process takes some time to complete in the background. When I retrieve the data manually after the run, there's no problem…
However, attempting to get the data while the workflow is still running seems unreliable…
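If the lag theory is right, a workaround would be to poll the execution a few times before giving up. This is only a sketch of that idea: fetchExecution is a placeholder for whatever HTTP call you use, and the tokenUsage key is the assumed location of the data, as above:

```javascript
// Retry loop: re-fetch the execution until the saved progress data
// contains token usage, or give up after a fixed number of attempts.
async function pollForTokenUsage(fetchExecution, { retries = 5, delayMs = 500 } = {}) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const execution = await fetchExecution();
    const usage = execution && execution.tokenUsage;
    if (usage) return usage;                         // data has landed, stop polling
    await new Promise((r) => setTimeout(r, delayMs)); // wait before the next attempt
  }
  return null; // caller can fall back to sending the reply without usage info
}
```

The null fallback matters for the hosted chat: better to answer without the cost line than to block the response on data that may not be written yet.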
I'm with you on this, hope someone takes a look at it…
@mohamed3nan I marked your reply as the solution since this is how it should work. Miraculously it also started working in my hosted chat - so I guess this qualifies either as a bug or at least as a result that depends on other parameters like max_runtime or latency or something of that sort.
On to the next challenge - understanding the difference between the results reported in the execution data and the ones in the logs.