Access an agent's chat model output in subsequent nodes

The following is the JSON output of the chat model of a standard AI Agent node.
I’d like the output of my agent to include model_name and tokenUsage so that the user gets immediate feedback about the cost of each message.

How do I access this data? I wasn’t able to find a suitable object in subsequent nodes or even in the system message of the agent node.

[
  {
    "response": {
      "generations": [
        [
          {
            "text": "TRUNCATED OUTPUT",
            "generationInfo": {
              "prompt": 0,
              "completion": 0,
              "finish_reason": "tool_calls",
              "system_fingerprint": "fp_...41f",
              "model_name": "gpt-4.1-mini-2025-04-14"
            }
          }
        ]
      ]
    },
    "tokenUsage": {
      "completionTokens": 12,
      "promptTokens": 4071,
      "totalTokens": 4083
    }
  }
]

sample workflow:

core

  • n8nVersion: 1.90.1
  • platform: docker (self-hosted)
  • nodeJsVersion: 20.19.0
  • database: postgres
  • executionMode: regular
  • concurrency: -1

Generated at: 2025-05-16T11:28:08.132Z


Hi @Zohar

To get the model name, you can try this expression:

{{ $('OpenAI Chat Model').params.model }}

However, there’s no direct way to retrieve the tokenUsage.

As a workaround, you can connect an n8n node and fetch the execution data, which includes that information.
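Once the execution data is fetched, the lookup itself can be sketched in plain JavaScript (e.g. inside an n8n Code node). This is a minimal sketch against the payload shape shown in the original post; the sample object and the `extractUsage` helper are illustrative, not part of any n8n API:

```javascript
// Sample payload shaped like the chat model output quoted above.
const output = [
  {
    response: {
      generations: [
        [
          {
            text: "TRUNCATED OUTPUT",
            generationInfo: {
              finish_reason: "tool_calls",
              model_name: "gpt-4.1-mini-2025-04-14",
            },
          },
        ],
      ],
    },
    tokenUsage: {
      completionTokens: 12,
      promptTokens: 4071,
      totalTokens: 4083,
    },
  },
];

// Hypothetical helper: pull the model name and token usage
// from the first generation of the first output item.
function extractUsage(items) {
  const first = items[0];
  const info = first.response.generations[0][0].generationInfo;
  return {
    model: info.model_name,
    promptTokens: first.tokenUsage.promptTokens,
    completionTokens: first.tokenUsage.completionTokens,
    totalTokens: first.tokenUsage.totalTokens,
  };
}

console.log(extractUsage(output));
```

A Set node (or the Code node itself) can then append these fields to the message that goes back to the chat.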

You can check out this solution:


@Zohar

The only way I know of is to use a Code step instead of the AI Agent step, like this.


I’m really surprised. I’ll see if there’s already a feature request about this.


Thanks @mohamed3nan .
As I was looking for the actual model used (i.e. something like gpt-4.1-mini-2025-04-14), I took the execution data route, but that ended really badly :man_facepalming: :smiley:

See below:

Also, even worse, the variables that are available inside n8n were not available to me when sent to a hosted chat, which is very strange (I expected these to be resolved into regular text the moment I used them).

For now I decided not to get blocked by this. I hope to get back to debugging this sometime soon.


Hi @Zohar

There are a few steps you need to follow to make it work:

  • Get the execution ID of the workflow using {{$execution.id}}, and pass it to the n8n API.
  • In the workflow settings, make sure the Save Execution Progress option is set to Save (by default, it is set to “Do not save”). This is required to retrieve results from nodes while the workflow is still in progress.
  • The workflow must be Active (it won’t work in test mode).
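The first step above can be sketched as a small helper that assembles the n8n public API call. The `/api/v1/executions/{id}` endpoint, the `includeData` query parameter, and the `X-N8N-API-KEY` header are the documented public API; the base URL, key value, and the `buildExecutionRequest` helper name are placeholders:

```javascript
// Hypothetical helper: build the request that fetches one execution,
// including its per-node data, from the n8n public REST API.
function buildExecutionRequest(baseUrl, apiKey, executionId) {
  return {
    // includeData=true asks the API to return the node execution data
    url: `${baseUrl.replace(/\/+$/, "")}/api/v1/executions/${executionId}?includeData=true`,
    method: "GET",
    headers: { "X-N8N-API-KEY": apiKey },
  };
}

const req = buildExecutionRequest(
  "https://n8n.example.com/", // placeholder instance URL
  "my-api-key",               // placeholder API key
  "1234"                      // would come from {{$execution.id}}
);
console.log(req.url);
```

In the workflow itself this request would typically be made with an HTTP Request node (or the n8n node) using the execution ID passed from the trigger.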

Here is a working example of the workflow that retrieves the model/token details:

Feel free to edit it to suit your needs.


:point_right: If this solves your problem, kindly mark my reply as the solution :white_check_mark: :pray:t2:


nice this is gold :slight_smile:


@mohamed3nan thanks a lot for this detailed answer. I think I did everything you suggested but I will make sure I didn’t miss anything.
In any case, I also reached a state where I can see the data in n8n. The weird thing is that it is not sent to the chat.

To be more exact:

  • The data IS sent to the built-in chat
  • The data is empty when sent to the hosted chat

Hey @mohamed3nan , thanks again for the kind help! I believe we’re either on to a bug or I missed something else.

I used your workflow and changed the following things:

  • I added an output field to the last node so it will display the response in the chat
  • I made the chat public (and thus activated the workflow)
  • I configured Save Execution Progress as you suggested

Here’s my revised workflow and the screenshot of the results. You can see that the public chat doesn’t display the values in the chat box, but the built-in one does:

Hi @Zohar

Yes, there is indeed an issue, it also appears to be bugged in the built-in chat:

My best guess is that the “Save execution progress” process takes some time to complete in the background. When trying to manually retrieve the data, there’s no problem…

However, attempting to get the data while the process is still running seems unreliable…

I’m with you on this, hope someone takes a look at it…

@mohamed3nan I marked your reply as the solution since this is how it should work. Miraculously it also started working in my hosted chat, so I guess this qualifies either as a bug or at least as behavior that depends on other parameters like max runtime, latency, or something of that sort.

On to the next challenge - understanding the difference between the reported results in the execution data and the ones in the logs :expressionless:

Thanks!



Hi @Zohar

What do you mean by this:

Actually, it’s still bugged on my end, it works randomly :smiley:

ha ha :smiley: !
I created a bug report: Execution data is randomly available in output response · Issue #15545 · n8n-io/n8n · GitHub
Hopefully it’ll get caught.


Thanks for confirming it!

Feels good to be part of the bugs club :joy:

