The idea is:
Add token usage output (input tokens, output tokens, total tokens) to the AI Agent node and Chat Models subnode, exposing these values as referenceable fields in subsequent nodes.
My use case:
I’m building an AI-powered SaaS with usage-based billing. I need to:
- Track exact token consumption per user request
- Store token usage in my database for billing calculations
- Calculate costs dynamically based on model-specific pricing
- Display real-time usage to users
Currently I have to estimate token counts or make separate external API calls just to retrieve data the model provider already returned.
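To illustrate the billing logic this would unlock, here's a minimal sketch of a cost calculation from exposed token counts, e.g. inside an n8n Code node. The pricing table and model names are purely illustrative placeholders, not real rates:

```javascript
// Hypothetical per-1K-token pricing table (illustrative numbers only, not real rates)
const PRICING = {
  'model-a': { input: 0.0025, output: 0.01 },
  'model-b': { input: 0.003, output: 0.015 },
};

// Compute a billable cost (USD) from the token counts a previous node exposed
function computeCost(model, inputTokens, outputTokens) {
  const rates = PRICING[model];
  if (!rates) throw new Error(`No pricing configured for model: ${model}`);
  return (inputTokens / 1000) * rates.input + (outputTokens / 1000) * rates.output;
}
```

With native token-usage fields, the `inputTokens` / `outputTokens` arguments could come straight from the AI Agent node's output instead of an estimate.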
I think it would be beneficial to add this because:
1. Billing accuracy: SaaS builders need precise token counts for usage-based pricing
2. Cost monitoring: track spending across different AI models
3. Workflow optimization: identify expensive prompts and optimize them
4. Native solution: avoids workarounds like separate API calls or middleware
Any resources to support this?
The OpenAI API returns a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`.
Anthropic returns a `usage` object with `input_tokens` and `output_tokens`.
Google Gemini returns `usageMetadata` with token counts.
All major LLM providers already return this data - n8n just needs to expose it.
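Since each provider names these fields differently, exposing them would presumably mean mapping them to one common shape. A rough sketch of what that normalization could look like (the unified `input`/`output`/`total` shape is my assumption, not an n8n API; the per-provider field names match each provider's documented response format):

```javascript
// Map provider-specific usage objects to a single, referenceable shape.
// The output shape { input, output, total } is a hypothetical convention.
function normalizeUsage(provider, usage) {
  switch (provider) {
    case 'openai': // { prompt_tokens, completion_tokens, total_tokens }
      return {
        input: usage.prompt_tokens,
        output: usage.completion_tokens,
        total: usage.total_tokens,
      };
    case 'anthropic': // { input_tokens, output_tokens } — no total field
      return {
        input: usage.input_tokens,
        output: usage.output_tokens,
        total: usage.input_tokens + usage.output_tokens,
      };
    case 'gemini': // usageMetadata: { promptTokenCount, candidatesTokenCount, totalTokenCount }
      return {
        input: usage.promptTokenCount,
        output: usage.candidatesTokenCount,
        total: usage.totalTokenCount,
      };
    default:
      throw new Error(`Unsupported provider: ${provider}`);
  }
}
```

Downstream nodes could then reference the same three fields regardless of which Chat Model subnode handled the request.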
Are you willing to work on this?
Yes, happy to test and provide feedback.