10x more tokens consumed when AI Agent "Source for Prompt" is "Define below"

  • n8n version: 1.101.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: n8n cloud

Kindly reference the attached screenshot: when the AI Agent's prompt source is “Define below”, a request consumes 5,245 tokens. But when the Source for Prompt is “Connected Chat Trigger Node”, the same request only consumes 491 tokens. That's over 10x more tokens consumed, which is not cost effective.

I have to use the “Connected Chat Trigger Node”, because I figured out that's the only way to let the bot understand attached content.

e.g. Source for Prompt as below:
"Your message: {{ $json.chatInput }}
Your attachment: {{ $('When chat message received').first().binary.data0.data }}"

Hoping for some suggestions. I have a couple of chat requests that cost over 100k tokens each, which makes it hard to proceed with my testing.

It depends on the memory you use for the conversation, the tools the agent calls, and the length of the response.
You can cap usage with the Maximum Number of Tokens option.

I think you got me wrong. My question is more about why it consumes 10x the tokens with the same question and all other settings unchanged, except the AI Agent's prompt source toggled from “Connected Chat Trigger Node” to “Define below”, where the input comes from the same node.

As I said before, token usage depends on how long the AI's response is, your memory, and the tools you use.


Again, thanks Cute Kitty. Even though I'm still not sure why, I rebuilt my nodes and now it seems to be working fine :smiley:

Thanks!

Cool, sorry if my answer didn't match your problem :)
Have a nice day!