AI Agent: Rate limit reached error

For a store, I need to assign each product to the correct category.

I have a table with 1,500 rows, each containing a category.

I ask a ChatGPT agent to find the best category in the table based on the product’s name and description.

When I do this manually, GPT-4o always finds the best category.

However, when I integrate it into an n8n workflow, I get an "OpenAI: Rate limit reached" error with the GPT-4o model. I suspect this is due to the size of the data in my table.
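To check that suspicion, a rough token count outside n8n can help. Below is a minimal sketch using tiktoken (a recent version that knows gpt-4o); the category labels and prompt template are placeholders for whatever the workflow actually sends:

```python
# Rough check: how many tokens does the category table add to each request?
# Minimal sketch -- "categories" and the prompt template are placeholders
# for what the n8n workflow actually sends.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")

# Hypothetical stand-in for the 1,500-row category table.
categories = [f"Category {i}: some example label" for i in range(1500)]

prompt = (
    "Pick the best category for this product.\n"
    "Product: <name and description here>\n"
    "Categories:\n" + "\n".join(categories)
)

tokens = len(enc.encode(prompt))
print(f"Prompt size: {tokens} tokens")
# If this number (times requests per minute) exceeds the account's
# tokens-per-minute limit for gpt-4o, even a single large request
# can trigger the rate limit error.
```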

It works with the GPT-4o-mini model, but the results provided by the AI are often less accurate.

Is there a better approach to categorize my products?

TY

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @presta_melt, OpenAI rate limits are explained at https://platform.openai.com/docs/guides/rate-limits. They are tier-based, and that page also discusses possible mitigations for these errors.
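For reference, the main mitigation that guide suggests is retrying with exponential backoff. Here is a minimal sketch with the openai Python SDK; the prompt is illustrative:

```python
# Retry with exponential backoff on rate limit errors, as suggested in the
# OpenAI rate limits guide. Minimal sketch; the prompt is illustrative.
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete_with_backoff(prompt: str, retries: int = 5) -> str:
    delay = 1.0
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # double the wait after each failure
    raise RuntimeError("unreachable")

print(complete_with_backoff("Pick the best category for ..."))
```

Note that backoff only helps when the limit is hit across several requests; a single request that by itself exceeds the tokens-per-minute limit will fail every time, which would match seeing the error on the very first request.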

TY, I’m aware of this.

But the workflow raises the error with only one request…

Check the workflow logs to see what is really happening behind the scenes and how token usage is spread across the AI interactions.
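If the n8n logs are hard to read, the usage field on each OpenAI response carries the same per-call breakdown. A minimal sketch with the openai Python SDK, outside n8n but showing where the numbers come from:

```python
# Inspect token usage per call -- the same breakdown the workflow logs
# report for each AI interaction. Minimal sketch using the openai SDK.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Pick the best category for ..."}],
)

usage = response.usage
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")
# An AI Agent node can make several such calls per run (tool calls,
# retries), so one workflow execution can consume far more tokens
# than a single request would suggest.
```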
