Bad Request Error: Input tokens exceed configured limits in OpenAI node

Hi everyone,

I’m building an AI + ManyChat workflow in n8n and running into an issue. I keep getting this error:

‘Bad request, please check your parameters. Input tokens exceed the configured limits. Your messages resulted in 298,000 tokens, but the limit is 298,810 tokens. Please reduce the length of messages.’

I was just testing by sending a simple message like ‘My order number is…’ and still got this error. I’m not sure why the token count is so high or how to fix it.

Has anyone experienced this before? Any advice on how to resolve this would be greatly appreciated.

Thanks!

298k tokens on a simple test message means your memory node is probably reusing the same session ID for every user, so all conversations pile up into one giant context. Set the session ID to something unique per ManyChat subscriber, like {{ $json.subscriber_id }}, and drop the context window length to something like 10–15 messages. That should fix it.
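To make the failure mode concrete, here's a minimal Python sketch (an illustration only, not n8n code; the `build_context` helper and session IDs are hypothetical) of why a shared session key blows up the token count, and how a per-subscriber key plus a bounded context window keeps it small:

```python
from collections import defaultdict

def build_context(memory, session_id, window=None):
    """Return the messages the model would see for this session,
    optionally trimmed to the last `window` messages."""
    messages = memory[session_id]
    return messages[-window:] if window else messages

memory = defaultdict(list)

# Every user writing into the same session ID piles up one giant context:
for i in range(1000):
    memory["shared"].append(f"message {i}")
print(len(build_context(memory, "shared")))  # 1000 messages -> token blow-up

# Fix 1: key memory per subscriber. Fix 2: cap the context window.
memory["subscriber-42"].append("My order number is 123")
print(len(build_context(memory, "shared", window=10)))         # 10
print(len(build_context(memory, "subscriber-42", window=10)))  # 1
```

Even with the window cap, a shared session still mixes users' conversations together, so the per-subscriber session ID is the real fix; the window cap is what keeps token usage bounded.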

Hi @Samar_Abbas!
You may be on the wrong model for your use case — try GPT-4o.

For chat memory, use Supabase. n8n’s built-in chat memory is good, but not as good as a dedicated database.