The service is receiving too many requests from you - Information Extractor Node with chatgpt

Describe the problem/error/question

What is the error message (if any)?

My workflow is failing due to an error encountered in the “Information Extractor” node. Here’s a summary of the situation:

  • Error Type: NodeOperationError

  • Error Message: “The service is receiving too many requests from you”

This error is typically caused by exceeding the rate limit imposed by a service you’re interacting with. In this case, it’s likely related to a node that makes requests to an external API or service, and it has received more requests than allowed within a certain time frame.

Please share your workflow

My workflow sends a scanned text to the Information Extractor, which uses a ChatGPT model - 5.4-nano.
I have a ChatGPT Plus plan.

Share the output returned by the last node

Information on your n8n setup

  • n8n version: latest
  • Database (default: SQLite): yes
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: macos

Hey @shinchan, the reason you're likely encountering a rate limit error from OpenAI's API is that your ChatGPT Plus subscription covers the ChatGPT web interface only and does not grant access to the OpenAI API, which operates on a separate pay-as-you-go system with its own rate limits.

Your ChatGPT Plus subscription ($20/month) is for using ChatGPT through their website or app. The n8n "Information Extractor" node uses the OpenAI API, which requires a separate API key and bills you based on usage (pay-as-you-go).

Does this help?


Yup, exactly. Just note that once you get the API key set up, you'll probably still hit rate limits if you're processing a lot of text; n8n can fire requests pretty quickly. You might want to batch them or add a delay between requests.
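If you end up making the calls yourself (say, from an n8n Code node or a script), the usual pattern for surviving 429s is retrying with exponential backoff. A minimal sketch, where `RateLimitError` and the `call` function are stand-ins for whatever error and request your actual client library uses:

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 ("too many requests") error from the API."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff when it signals a rate limit.

    `call` is any zero-argument function that raises RateLimitError
    when the service is throttling you.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller see the error
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Inside n8n itself, spacing out requests with a Wait node between batches achieves the same effect without code.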

Also, just a heads up: there's no OpenAI model called "5.4-nano", so you might want to double-check which model you actually have selected in the node; it's probably gpt-4o-mini. Once you get API billing sorted on platform.openai.com, you'll want to make sure the model name in n8n matches an actual model available on your account.

Good catch on the model name, that's an easy one to miss. gpt-4o-mini is almost certainly what's actually selected in the node; worth confirming that before spending more time on the rate limit side.

Hi @shinchan, if you've checked and found that @Miliaga's response applies, you might want to look at the Gemini API, as it comes with a free tier. You can set it up, get the API key, connect the credentials in n8n, and start working with it.

A nice bonus is that each model has its own rate limit, so if you're testing something and run out of RPM, TPM, or RPD, you can switch models. This, of course, comes with the risk of getting different output. Give it a shot if you're in the testing phase.
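The switch-models trick can even be automated: try a list of models in order and fall back to the next one when the current one is rate-limited. A hedged sketch, where `RateLimited` and the `ask` callable stand in for your real client's error and API call, and the model names are only examples (use whatever is enabled on your account):

```python
class RateLimited(Exception):
    """Stand-in for an RPM/TPM/RPD limit error from the provider."""

def ask_with_fallback(ask, prompt, models):
    """Try each model in order; move to the next on a rate-limit error.

    `ask(model, prompt)` is a placeholder for your real API call.
    Keep in mind that different models can return noticeably
    different output for the same prompt.
    """
    last_error = None
    for model in models:
        try:
            return model, ask(model, prompt)
        except RateLimited as err:
            last_error = err  # this model is throttled; try the next one
    raise last_error  # every model in the list was rate-limited

# Example model list (hypothetical ordering, cheapest/fastest first)
MODELS = ["gemini-2.0-flash", "gemini-1.5-flash", "gpt-4o-mini"]
```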