How to ensure your input stays below 96,000 words?

Hi All

Today I got the error message below:

This model’s maximum context length is 128000 tokens. However, your messages resulted in 162104 tokens (161963 in the messages, 141 in the functions). Please reduce the length of the messages or functions.

When I go to check my billing page, I see this:

Does it mean that after I top up the money, I can start using it again?

May I know where I can check which modules consume the most expense, so that I know how to control my usage?

Hope someone can advise me.

Paul

‘Tokens’ refers to the amount of text the OpenAI chat model can accept. The error is basically telling you that you’re sending too much text; the OpenAI model you’re using only supports 128,000 tokens (roughly 96,000 words’ worth of text).

Whatever data you’re sending to the LLM is simply too large.
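If it helps to see this concretely, here is a minimal Python sketch using the tiktoken library to count tokens locally and keep a conversation under the limit. The budget numbers and the `trim_messages` helper are assumptions for illustration only, not something n8n or OpenAI does for you automatically:

```python
import tiktoken

MAX_CONTEXT = 128_000          # the model's context window, in tokens
RESERVED_FOR_REPLY = 4_000     # room left for the model's answer (assumption)

# cl100k_base is the encoding used by recent OpenAI chat models
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Approximate token count for a piece of text."""
    return len(enc.encode(text))

def trim_messages(messages: list[dict],
                  budget: int = MAX_CONTEXT - RESERVED_FOR_REPLY) -> list[dict]:
    """Drop the oldest messages until the conversation fits the budget.
    Keeping only the most recent messages is one common, simplified strategy."""
    kept: list[dict] = []
    total = 0
    for msg in reversed(messages):                 # walk from newest to oldest
        cost = count_tokens(msg["content"]) + 4    # rough per-message overhead
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Running `count_tokens` on whatever you are about to send (prompt, attached documents, chat history) will usually show you which part is blowing past 128,000 tokens, and trimming or summarising that part is the fix.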


Thank you very much for your reply. I appreciate you letting me know it is an OpenAI issue and not n8n.

May I know how to change the settings so that I use fewer tokens from OpenAI?

Right now I am not able to send text. For example, when I send my prompt as text through Telegram, I get the error message, and when I click the link it redirects me to the n8n page to top up €26.
However, I am able to use Telegram to send audio messages or images with no issue.

Paul

@CameronDWills

I tried changing the photo setting to 0 instead of 3, but I still face the issue.

**Can you share with me how to reduce the number of tokens to below 128,000?**

Paul

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.