How To Deal With GPT Rate Limits in an AI Agent

I NEED HELP!
For the life of me I cannot figure out how to deal with GPT's rate limits. I am exceeding the per-minute limit (it is 30k tokens per minute, and I am sending between 31k and 40k). This happens on the first agent; it never makes it to the second agent.

Even when I split up the first agent and have it use only two tools, I am always just over the limit. The context is large but well within GPT's window, so I am not sure how to slow this down enough for the API to accept my inputs.

Any help would be greatly appreciated.

  • n8n version: 1.61.0
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: Windows 11

Hi @Anthony_Lee,

What tools are you using, and what do they return? If each tool is returning a lot of data, that could be what is pushing you over the rate limit, and trimming the data each tool returns could help (see the sketch below for one way to check).
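
If you want to measure how much each tool's output actually contributes, here is a minimal sketch using tiktoken. The function name, the cl100k_base encoding, and the token budget are illustrative assumptions, not anything n8n-specific:

```python
import tiktoken  # pip install tiktoken

def truncate_to_budget(text: str, budget: int = 2000,
                       encoding_name: str = "cl100k_base") -> str:
    """Trim text to at most `budget` tokens so one oversized tool
    result cannot push the whole request past the per-minute limit."""
    enc = tiktoken.get_encoding(encoding_name)  # tokenizer used by recent GPT models
    tokens = enc.encode(text)
    print(f"tool output is {len(tokens)} tokens")
    if len(tokens) <= budget:
        return text
    return enc.decode(tokens[:budget])

# Hypothetical example: a long tool result trimmed before it reaches the agent
tool_output = "row of data, " * 5000
trimmed = truncate_to_budget(tool_output, budget=2000)
```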

The OpenAI Cookbook has a Python notebook that explains how to avoid rate limit errors, and an example Python script for staying under rate limits while batch-processing API requests.
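
In case it helps, the core idea of that notebook is to retry with exponential backoff when the API returns a 429. Here is a minimal sketch assuming the openai Python SDK (v1+); the model name, retry count, and delays are just placeholders:

```python
import random
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_backoff(messages, max_retries=6, base_delay=1.0):
    """Retry on rate-limit (429) errors, waiting 1s, 2s, 4s, ... plus jitter."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; use whichever model you call
                messages=messages,
            )
        except RateLimitError:
            # Exponential backoff with jitter so retries don't collide
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
    raise RuntimeError("still rate limited after retries")

response = chat_with_backoff([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)
```

Note that backoff only smooths out bursts: if every single request is already over the 30k TPM limit, shrinking the payload or raising your account's tier is the real fix.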

But if that doesn’t work, then you may need to reach out to OpenAI to increase the rate limits for your account. https://platform.openai.com/docs/guides/rate-limits/

Thank you for replying. For some reason I didn't get the email notification for this reply. Anyway, it turns out you just have to pay them, lol. I was still on usage tier 1 with this client, and we just needed to deposit $50 to move up a tier and raise the limits.

Cheers


@Anthony_Lee, thanks for the update, good to know!