Track LLM cost and traces from any n8n workflow with Torrix

I built a native Torrix node for n8n that logs every LLM call automatically. No code, no webhooks.

Drop the node into any workflow and you get:

  • Tokens and cost per step

  • Full prompt traces

  • End-to-end latency per run

  • Budget alerts before you hit your limit

Here is a support triage workflow as an example: https://github.com/torrix-ai/install/tree/main/demos/n8n

Self-hosted, one Docker command, free forever.
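For anyone curious what "one Docker command" looks like in practice, a minimal sketch is below. The image name, port, and volume path here are placeholders I'm using for illustration, not the actual install command; check the repo linked above for the real one.

```shell
# Hypothetical sketch: image name, port, and volume are placeholders,
# not the official Torrix install command.
docker run -d \
  --name torrix \
  -p 3000:3000 \          # expose the dashboard
  -v torrix-data:/data \  # persist traces across restarts
  torrix/torrix:latest
```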

Happy to help anyone set it up.

Nice job — this looks really useful!


Thank you 🙂