Retrieving Token Usage from Summarization Chain

Describe the problem/error/question

I’m unable to retrieve the input and completion token usage from the LLM model used in the Summarization Chain node.

What is the error message (if any)?

When trying to reference the model’s output directly (which includes the token usage) I receive this error or empty references:

Please share your workflow

Share the output returned by the last node

Ideally I should be able to log how many tokens I’ve used :face_with_raised_eyebrow:

Information on your n8n setup

  • n8n version: 1.27.2
  • Database (default: SQLite): n8n Cloud
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: Windows (my own) - Cloud I guess

Hi @manueltnc :wave: I got some confirmation from @oleg here - unfortunately it’s not possible to reference the execution data of sub-nodes at this point. There isn’t a simple way to do this currently :see_no_evil:

It’s possible to use the n8n node to retrieve the full execution data, which also contains the sub-nodes’ data like tokenUsage, but that cannot be done in the same execution (as you need the executionId to retrieve an already finished execution).
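As a rough sketch of that workaround: assuming the execution data returned by the n8n node keeps roughly the runData shape shown later in this thread (the node name and exact nesting below are illustrative, not guaranteed), a small helper can walk the structure and sum any `tokenUsage` objects it finds:

```javascript
// Recursively walk a finished execution's runData and sum any
// tokenUsage objects found anywhere in the structure.
// The runData shape here is an assumption based on this thread.
function sumTokenUsage(value, totals = { promptTokens: 0, completionTokens: 0, totalTokens: 0 }) {
  if (Array.isArray(value)) {
    for (const item of value) sumTokenUsage(item, totals);
  } else if (value && typeof value === 'object') {
    if (value.tokenUsage) {
      totals.promptTokens += value.tokenUsage.promptTokens ?? 0;
      totals.completionTokens += value.tokenUsage.completionTokens ?? 0;
      totals.totalTokens += value.tokenUsage.totalTokens ?? 0;
    }
    for (const key of Object.keys(value)) sumTokenUsage(value[key], totals);
  }
  return totals;
}

// Example with a minimal runData-like structure (node name is made up):
const runData = {
  'OpenAI Chat Model': [
    {
      data: {
        ai_languageModel: [[{
          json: {
            response: {},
            tokenUsage: { promptTokens: 12, completionTokens: 30, totalTokens: 42 },
          },
        }]],
      },
    },
  ],
};

console.log(sumTokenUsage(runData));
// { promptTokens: 12, completionTokens: 30, totalTokens: 42 }
```

In a Code node you would feed it the execution JSON retrieved via the n8n node in a second workflow (or a later run), since the executionId only exists once the run has finished.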

I’m going to move this over to the feature request forum for you, and hopefully that workaround will help you in the meantime!

Quick edit - this wouldn’t be necessarily a “pretty” way to do it, but this should do what you need for past executions:


Hey @EmeraldHerald :wave:

I found myself trying to do exactly the same, but in my case using the Basic LLM Chain, and I’m not managing to do so. My runData only contains the following:

{
  "ai_languageModel": [
    [
      {
        "json": {
          "response": {
            "generations": [
              [
                {
                  "text": "...."
                }
              ]
            ]
          }
        }
      }
    ]
  ]
}

It doesn’t have any llmOutput to look for token usage in. Any ideas or thoughts?

Hi @Luciano_Penafiel :wave: Just to be sure we’re on the same page, you’re looking at the full execution data of a workflow and you’re getting that? If so, I wouldn’t know - but @oleg might :bowing_man:

Hi @EmeraldHerald thank you very much for your response! Hopefully @oleg can help me :crossed_fingers:

And yes, you are right. I’m looking at the full execution data of a workflow and I’m getting that.

Hi @Luciano_Penafiel, I’ve found a way to extract Token Usage within the execution making use of LangChain callbacks.

I’ve used LangChain documentation as reference: Tracking token usage | 🦜️🔗 Langchain

Take a look at this example:
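In rough outline, the callback idea looks like the sketch below. This is a standalone simulation, not the exact workflow shared here: the `handleLLMEnd` hook and the `llmOutput.tokenUsage` field come from LangChain’s callback interface, and the fake LLM result stands in for a real model call (inside n8n this handler would be attached via a LangChain Code node):

```javascript
// Accumulator for token usage across LLM calls.
const usage = { promptTokens: 0, completionTokens: 0, totalTokens: 0 };

// Minimal callback handler: LangChain invokes handleLLMEnd with the
// LLM result, whose llmOutput carries tokenUsage for providers that
// report it (e.g. OpenAI).
const tokenUsageHandler = {
  handleLLMEnd(output) {
    const t = output?.llmOutput?.tokenUsage ?? {};
    usage.promptTokens += t.promptTokens ?? 0;
    usage.completionTokens += t.completionTokens ?? 0;
    usage.totalTokens += t.totalTokens ?? 0;
  },
};

// Simulated LLM result, shaped like a LangChain LLMResult:
tokenUsageHandler.handleLLMEnd({
  generations: [[{ text: 'Hello!' }]],
  llmOutput: { tokenUsage: { promptTokens: 9, completionTokens: 3, totalTokens: 12 } },
});

console.log(usage);
// { promptTokens: 9, completionTokens: 3, totalTokens: 12 }
```

The point of using a callback is that it runs inside the same execution, so you don’t need the executionId workaround mentioned earlier in the thread.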


This is gold, @miguel-mconf! Thank you very much.


Please note that my solution is currently not working in version 1.40.0, as reported here: Token usage on Langchain nodes is no longer available in 1.40.0


Thanks for that, @miguel-mconf. I’m on version 1.38.1 and it is still working there. I’ll keep an eye on your post. Thanks again!


A new n8n version has been released which includes GitHub PR 9311.


I can confirm my solution is working again in version 1.42+.


Hey @miguel-mconf, I tried copying your nodes over, but I just got an error box. Is there a way to fix this? I’m using version 1.42.1.

Hi to all!
Can someone help me with the solution to make it work?

Self-hosted - 1.46.0

I believe this node is only available on self-hosted n8n at the moment. Maybe you’re using it on cloud?

Maybe you could share your workflow? Since the code is the same, I would believe something is wrong elsewhere.

Hi,

I tested it with the Gemini chat model, but the token count is always 0… I get no error.

Any ideas?

Is there any way to make this work with an AI Agent node?

The Code node for “Extract Token Usage” doesn’t connect with the Chat Model in an AI Agent node.

I’d like to track token usage in an AI Agent. It shows in the Chat Model sub-node, but it doesn’t display in the AI Agent main node, and I can’t find any way of extracting it from the sub-node.

Please post your full workflow code so we can check if there is something wrong elsewhere.

I believe this is a bug. Three of the AI Agents have a hardcoded list of input nodes which does not include the LangChain Code node. Those are “conversationalAgent”, “toolsAgent” and “openAiFunctionsAgent”. You can check the relevant code here:

I believe there is some reason for that, but “Langchain Code” should be on that filter list.

I actually found a simple workaround. You can select any other Agent type, such as SQL Agent, connect it to the LangChain Code node, and then change back to the AI Agent type you were using. The verification is not applied in that case, and the token usage count works just fine. Or you could simply copy the workflow below:

Note that if you remove the connection between the nodes, you can’t connect them back without changing the Agent type again.

@jan , could you check this out?

thanks
but I still get 0!