Sum Total OpenAI Token Usage Across Entire Workflow Using Summarize

Describe the problem/error/question

I’d like to sum the total number of prompt tokens, completion tokens, and total tokens used across my workflow. My workflow includes multiple loops as well as a sub-workflow execution that itself makes OpenAI calls.

What is the best way to collect and sum these? My initial thought was to use the Summarize node, but that appears to aggregate only the output of the node immediately before it, not all prior nodes.

I’m sure I’m missing something painfully obvious. Any help would be greatly appreciated.

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:


Ok, I’m sure there is a better way to do this, but I ended up creating an AWS Lambda function that the OpenAI nodes call with a payload for the tokens they used and then the lambda updates (sums) the values in a DynamoDB table. It’s nice as the results persist and are easy to query later outside of n8n for more advanced analytics.
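For anyone taking a similar route, here is a rough sketch of the core of that Lambda. The table name (`TokenUsage`), key (`workflowId`), and attribute names are all assumptions — adapt them to your schema. It builds the parameters for a DynamoDB `UpdateItem` call using the `ADD` action, which increments numeric attributes atomically (creating them at 0 if absent), so concurrent OpenAI nodes can report usage without clobbering each other:

```javascript
// Sketch only — "TokenUsage", "workflowId", and the attribute names are
// assumed; match them to your own DynamoDB schema.
// Builds UpdateItem parameters using DynamoDB's ADD action, which sums
// numeric attributes atomically on the server side.
function buildTokenUpdate(workflowId, usage) {
  return {
    TableName: 'TokenUsage',
    Key: { workflowId: { S: workflowId } },
    UpdateExpression:
      'ADD promptTokens :p, completionTokens :c, totalTokens :t',
    ExpressionAttributeValues: {
      ':p': { N: String(usage.promptTokens ?? 0) },
      ':c': { N: String(usage.completionTokens ?? 0) },
      ':t': { N: String(usage.totalTokens ?? 0) },
    },
  };
}

// In the Lambda handler, pass the result to the AWS SDK, e.g.:
// await dynamo.send(new UpdateItemCommand(buildTokenUpdate(id, event)));
```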

That said, I’d still like to understand the best n8n native way to do this. All suggestions welcome!

Hi @workflowsy! I really like the lambda function approach to store the values in the DynamoDB table. Very cool.

Here’s an iteration on your first workflow attempt. It’s not pretty, but it does handle everything in-app. The key takeaway is that for each new OpenAI node, you’d also need to add an extra Edit Fields (Set) node, a Merge node, and a Code node that sums the token outputs without changing the key.
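The summarizing Code node can boil down to a single reduce over all incoming items. This is just a sketch, and the field names (`promptTokens`, etc.) are assumptions — they need to match whatever keys your Edit Fields (Set) nodes write:

```javascript
// Sketch of the summarizing Code node's logic. The field names below are
// assumptions; use whatever keys your Edit Fields (Set) nodes produce.
function sumTokenUsage(items) {
  return items.reduce(
    (acc, item) => {
      acc.promptTokens += item.json.promptTokens ?? 0;
      acc.completionTokens += item.json.completionTokens ?? 0;
      acc.totalTokens += item.json.totalTokens ?? 0;
      return acc;
    },
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 }
  );
}

// Inside the n8n Code node, you would return a single summary item:
// return [{ json: sumTokenUsage($input.all()) }];
```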

@Ludwig - thank you so much for putting this together. What I struggle with in something like this is that it feels like there has to be a better way. I like n8n so much more than other platforms like Zapier and Make, but both offer much easier, native ways to accomplish the same thing. It leaves me scratching my head sometimes that n8n is so powerful yet makes things like this feel so difficult.

I really feel like an in-n8n data store (not just writing out to a text file) with some sort of append/update method would address so much of this headache. Failing that, even just the ability to set a variable early in the workflow and update or append to it later would help.
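For what it’s worth, n8n does expose workflow static data via `$getWorkflowStaticData()`, which can act as a crude running counter — though note that static data persists only for production executions, not manual test runs. A sketch of an accumulator a Code node could apply to that store (field names are assumptions):

```javascript
// Hypothetical accumulator: merge one node's token usage into a mutable
// store object. In n8n, the store could be the object returned by
// $getWorkflowStaticData('global'); the field names are assumptions.
function accumulateUsage(store, usage) {
  for (const key of ['promptTokens', 'completionTokens', 'totalTokens']) {
    store[key] = (store[key] ?? 0) + (usage[key] ?? 0);
  }
  return store;
}

// In a Code node after each OpenAI call:
// accumulateUsage($getWorkflowStaticData('global'), $json);
```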

Either way, this isn’t going to drive me away from n8n, and I seriously appreciate you taking the time to work through this — but man, I wish there were an easier way.


I think it’s definitely fair to say that there is probably an even better way. I could imagine using a shared tool across the AI nodes, a set of variables that can be incremented, etc. All potentially better approaches, but I wanted to show one option here.

There are some other approaches that community members have used with other AI nodes. You could potentially get inspiration from those?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.