I would like to extract the completion and prompt token info when using AI Agents. It would be highly beneficial for keeping track of my costs.
I have tried the different workarounds presented here (Similar post 1 and Similar post 2), but they don’t work for AI Agents. There is no direct way to do it with n8n nodes.
Please let me know if anyone has managed to successfully retrieve this token info.
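For comparison, when the model is called directly (outside the agent) the usage numbers are easy to read, because the raw OpenAI chat completions response includes a `usage` object. A minimal sketch of that workaround, assuming an HTTP Request node that posts to the chat completions endpoint and feeds its response into a Code node (and a recent n8n where `$input` is available in Code nodes):

```js
// Code node sketch (hypothetical workaround): the upstream HTTP Request node
// calls POST https://api.openai.com/v1/chat/completions directly, so the raw
// response still contains the `usage` object that the AI Agent node hides.
const response = $input.first().json;

return [{
  json: {
    answer: response.choices[0].message.content,
    promptTokens: response.usage.prompt_tokens,
    completionTokens: response.usage.completion_tokens,
    totalTokens: response.usage.total_tokens,
  },
}];
```

But that only covers plain completions; with the AI Agent node the intermediate model calls are hidden, which is exactly the problem.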
Joining this request. I have no idea how to count app usage per user.
Even though the token count is shown in the Chat Model section, it’s unclear how to get it from there.
Same here. This isn’t part of the AI Agent node from what I’ve seen. However, I think it should be: it could be handled similarly to the “tools” leg, maybe as a “log” leg or similar. For instance, I would like to know the input/output tokens per request, or have some logs around each request (something like the sketch below).
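Purely hypothetical, but a per-request entry along these lines on a “log” leg would cover it (none of these field names exist in n8n today; this is just the shape I’d want):

```js
// Hypothetical per-request log entry for a "log" leg -- not an existing
// n8n structure, just the data I'd hope to see per model call:
{
  requestId: '...',
  model: 'gpt-4o',
  promptTokens: 512,
  completionTokens: 128,
  totalTokens: 640,
}
```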
Voting this up too. For many use cases, knowing the token usage would make a real difference. For some projects I’ve just dropped the default LLM model node, or even the agent, for that matter.
Yes, this is a very important feature for anyone looking to control the operational costs of a flow or to log what the client is consuming in tokens. AI usage is closely tied to performance and efficiency, and token data opens up many optimizations.
Have we seen the n8n team deliver features requested by the community in the past?
I mean, I’ve only been using n8n for a few months, and of course it has its limitations, but I don’t know how responsive the devs are. In your experience, does it happen? Often? And with what kind of delay, on average?
I want to second this request as well, but maybe take a simpler approach:
You can use an LLM proxy such as LiteLLM to do a lot of this: measuring usage, associating costs, etc.
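As a sketch of what that could look like: point the Base URL of your n8n OpenAI credential at the proxy (LiteLLM’s proxy listens on http://localhost:4000 by default), and the proxy records usage per request. You can then pull the data back into a workflow with a Code node. The `/spend/logs` endpoint and master-key auth follow LiteLLM’s proxy docs, but verify them against your version, and this assumes a recent n8n where `fetch` and `$env` are available in Code nodes:

```js
// Code node sketch (assumptions: LiteLLM proxy on localhost:4000 with its
// master key in a LITELLM_MASTER_KEY env var; /spend/logs is the proxy's
// spend-tracking endpoint -- check the LiteLLM docs for your version).
const res = await fetch('http://localhost:4000/spend/logs', {
  headers: { Authorization: `Bearer ${$env.LITELLM_MASTER_KEY}` },
});
const spendLogs = await res.json();

// One output item per logged request, including token counts and cost.
return spendLogs.map((entry) => ({ json: entry }));
```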
I think that with n8n’s shift in focus from an integration platform to an AI platform, having any sort of observability will make n8n increasingly valuable in the enterprise tier as well.
Consider integrating with Datadog LLM Observability.
It should be relatively straightforward to collect token usage from an AI agent and expose it as metrics; that would at least be the simplest form.
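A sketch of that simplest form: once the token counts are in the workflow (however you obtained them), a Code node can push them to Datadog’s metrics API. The v2 series endpoint and `DD-API-KEY` header are from Datadog’s API docs; the metric name and the upstream `tokenUsage` field are made up here, and this again assumes a recent n8n where `fetch` is available in Code nodes:

```js
// Code node sketch: submit a token count to Datadog's v2 series endpoint.
// Assumes token usage was extracted upstream into a `tokenUsage` field
// (hypothetical) and a DD_API_KEY env var is set; adjust the site URL
// for your Datadog region.
const tokenUsage = $input.first().json.tokenUsage;

await fetch('https://api.datadoghq.com/api/v2/series', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'DD-API-KEY': $env.DD_API_KEY,
  },
  body: JSON.stringify({
    series: [{
      metric: 'n8n.llm.total_tokens', // made-up metric name
      type: 1, // count
      points: [{ timestamp: Math.floor(Date.now() / 1000), value: tokenUsage.totalTokens }],
      tags: [`workflow:${$workflow.name}`], // lets you slice usage per workflow
    }],
  }),
});

return $input.all();
```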
It’s worth noting here that n8n can send all data to LangSmith, which can track costs, and we are using it that way.
However, the downside is that I have no idea how to distinguish the data by workflow, or by anything else other than the LLM model.
Basically, it tracks “everything per n8n instance”.