AI Usage Analysis (could be a node, could be the UI)

In my specific case I'm using ChatGPT, so I'll be using that terminology, but it would be helpful to have this work for AI API calls to any model.

Suggested upsell:

Base version: get these stats in the UI (but don't make it too generic, otherwise there's zero value in it; IMO it should still provide at least workflow-level analytics).
Enterprise: the ability to get this data per execution and log it to your own database, with deeper granularity - if there are multiple API calls inside one workflow, break it down to the next layer.

It would help if there was a node for:

It's unclear whether this should be a node, but it feels like having to update all the other nodes to include this information would be more difficult on the dev side.

Alternatively, this could be something in the native n8n UI, which would be a cool feature (broken down by agent, or even agent -> node).

Tracking token usage on the last AI API call. For example, a ChatGPT agent runs and we get:

```json
"usage": {
  "prompt_tokens": 125,
  "completion_tokens": 42,
  "total_tokens": 167
}
```

My use case:

I will be making a lot of different AI API calls with what we are building. I need to keep track of costs, and eventually I will want to optimize them.
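
To make "keep track of costs" concrete, the arithmetic is just tokens multiplied by a per-token rate. A minimal sketch, assuming placeholder prices (look up the current rate for whichever model you actually call):

```typescript
// Placeholder prices -- NOT real OpenAI pricing; substitute the current
// rates for the model you are using.
const PROMPT_USD_PER_1K = 0.0005;
const COMPLETION_USD_PER_1K = 0.0015;

// Estimate the cost of a single call from its usage block.
function estimateCostUsd(promptTokens: number, completionTokens: number): number {
  return (
    (promptTokens / 1000) * PROMPT_USD_PER_1K +
    (completionTokens / 1000) * COMPLETION_USD_PER_1K
  );
}

// With the example usage above: estimateCostUsd(125, 42) ≈ 0.0001255 USD
```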

Any resources to support this?

https://api.openai.com/v1/chat/completions

Are you willing to work on this?

It's not urgent, but it will be once we start having higher usage. We are in development now; if there isn't something like this by the time we have a live application, I would probably have to do a workaround (an HTTP request to OpenAI, as sketched below).
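
For completeness, the workaround I mean is something like the sketch below: call the Chat Completions endpoint directly and read `usage` off the response. The model name and the `OPENAI_API_KEY` environment variable are assumptions.

```typescript
// Sketch of the HTTP-request workaround (Node 18+, global fetch).
async function chatWithUsage(prompt: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // assumed env var
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // assumed model; use whatever you call today
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return {
    reply: data.choices?.[0]?.message?.content,
    usage: data.usage, // { prompt_tokens, completion_tokens, total_tokens }
  };
}
```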

There’s already a standard way to obtain that, at least for the “OpenAI” node. It’s called “Simplify Output” and it’s toggled ON by default.
Switching it OFF will make the node return the usage statistics along with the completion result.
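
If it helps, here is a rough sketch of a Code node placed right after the OpenAI node to sum the usage across items. Where exactly the usage block sits in the raw output is an assumption; check the node's output panel and adjust the path. n8n Code nodes run JavaScript, so the snippet avoids TypeScript-only syntax and can be pasted as-is.

```typescript
// Rough sketch for an n8n Code node placed after the OpenAI node
// (with "Simplify Output" switched OFF). $input is provided by n8n.
// The exact location of the usage block in the raw output is an
// assumption -- adjust item.json.usage to match what you see.
const totals = { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 };

for (const item of $input.all()) {
  const usage = item.json.usage ?? {};
  totals.prompt_tokens += usage.prompt_tokens ?? 0;
  totals.completion_tokens += usage.completion_tokens ?? 0;
  totals.total_tokens += usage.total_tokens ?? 0;
}

return [{ json: totals }];
```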

Awesome, thanks! You can delete this thread if you need to, then. That's very useful for my analytics.