How to get token usage from OpenAI Assistant node in n8n (self-hosted)?

I’m using self-hosted n8n and the OpenAI node with Resource = Assistant (using an existing Assistant ID).
The workflow is working correctly and the assistant returns strict JSON output.

However, I need to track token usage (prompt / completion / total tokens) for each execution, mainly for monitoring and billing purposes.

Current situation

  • The OpenAI Assistant node output only returns the assistant response and thread_id

  • Token usage is not available in the node output

  • I have multiple OpenAI nodes in the same workflow

  • Replacing OpenAI nodes with HTTP Request nodes (Assistants API) works, but makes the workflow much more complex and harder to maintain
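For reference, each assistant call ends up needing an extra request against the Assistants API runs endpoint just to read usage. A rough sketch of that request, assuming Assistants API v2 (`buildRunRequest` is only an illustrative helper, not an n8n or OpenAI function):

```javascript
// Sketch: the extra usage-fetch request added per assistant run when
// the OpenAI node is replaced with HTTP Request nodes.
// `buildRunRequest` is an illustrative helper, not a real API.
function buildRunRequest(threadId, runId, apiKey) {
  return {
    method: 'GET',
    url: `https://api.openai.com/v1/threads/${threadId}/runs/${runId}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      // Assistants API v2 requires this beta header
      'OpenAI-Beta': 'assistants=v2',
    },
  };
}

const req = buildRunRequest('thread_abc', 'run_123', 'sk-...');
console.log(req.url);
// https://api.openai.com/v1/threads/thread_abc/runs/run_123
```

Multiplied across several OpenAI nodes, this is what makes the workflow harder to maintain.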

What I’m looking for

  • Is there a supported way to access token usage from the OpenAI node (especially Assistant resource)?

  • Is token usage stored internally in execution data and can it be accessed reliably?

  • Any recommended best practice from n8n for tracking OpenAI token usage without replacing the OpenAI node?

Environment

  • n8n: self-hosted

  • OpenAI node: Assistant resource

  • Output format: strict JSON

Any guidance or recommended approach from the n8n team would be really helpful.

Thanks!

Hi there! Currently, the OpenAI Assistant node in n8n does not expose token usage in its output, and token data isn't stored in execution data in a way that's directly accessible. You can try one of these approaches:

  1. Use HTTP Request nodes for token tracking: This is the only way to reliably get prompt tokens, completion tokens, and total tokens from the Assistants API.

  2. Hybrid approach: Keep using the OpenAI node for simplicity, but add optional HTTP Request nodes only for logging token usage if needed.

  3. Track usage externally via OpenAI’s dashboard or usage API for monitoring/billing, and correlate with n8n executions via workflow/run IDs.
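For options 1 and 2, once a run has been fetched via an HTTP Request node (`GET /v1/threads/{thread_id}/runs/{run_id}`), a completed run carries a `usage` object with the token counts. A minimal sketch of reading and aggregating those in a Code node, assuming the run JSON is passed in upstream (the helper names `extractUsage` and `sumUsage` are my own):

```javascript
// Sketch: reading token usage from Assistants API run objects.
// Completed runs carry a `usage` object with prompt/completion/total counts.

function extractUsage(run) {
  const u = run.usage ?? {};
  return {
    prompt_tokens: u.prompt_tokens ?? 0,
    completion_tokens: u.completion_tokens ?? 0,
    total_tokens: u.total_tokens ?? 0,
  };
}

// Sum usage across several runs, e.g. one per OpenAI node in the workflow.
function sumUsage(usages) {
  return usages.reduce(
    (acc, u) => ({
      prompt_tokens: acc.prompt_tokens + (u.prompt_tokens ?? 0),
      completion_tokens: acc.completion_tokens + (u.completion_tokens ?? 0),
      total_tokens: acc.total_tokens + (u.total_tokens ?? 0),
    }),
    { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
  );
}

// Example with run payloads shaped like the API's responses:
const runs = [
  { id: 'run_a', status: 'completed', usage: { prompt_tokens: 120, completion_tokens: 45, total_tokens: 165 } },
  { id: 'run_b', status: 'completed', usage: { prompt_tokens: 80, completion_tokens: 20, total_tokens: 100 } },
];
console.log(sumUsage(runs.map(extractUsage)));
// { prompt_tokens: 200, completion_tokens: 65, total_tokens: 265 }
```

Logging the summed result alongside the workflow and execution IDs gives you a per-execution record you can use for billing without replacing your main OpenAI nodes.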