Describe the problem/error/question
When I run the OpenAI - Completion node over 1K rows, I get a “Too many requests” error.
I understand that OpenAI does have rate limits, but I had hoped the node would manage them by default, like LangChain or Make.com do.
What would you advise as the best way to handle those rate limits gracefully?
What is the error message (if any)?
Please share your workflow
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
Share the output returned by the last node
Information on your n8n setup
- n8n version: ai-beta
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
- Operating system:
At the moment we don’t have anything to handle rate limits nicely, so it would be a case of splitting your data into smaller chunks using the Loop Over Items node and adding a Wait node at the end of each loop for a second (or whatever the limit may be). Something like the below should work if you are on the Tier 1 limit.
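For reference, the loop-plus-wait pattern described above can be sketched outside n8n as well. This is a minimal illustration, not the node's actual implementation; the batch size and pause length are assumptions you would tune to your own OpenAI tier limits.

```python
import time

def call_in_batches(rows, call_api, batch_size=60, pause_seconds=60):
    """Process rows in fixed-size batches, pausing between batches to
    stay under a requests-per-minute rate limit.

    batch_size and pause_seconds are placeholder values -- check the
    limits for your actual OpenAI tier.
    """
    results = []
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        results.extend(call_api(row) for row in batch)
        # Only wait if there is another batch still to send.
        if i + batch_size < len(rows):
            time.sleep(pause_seconds)
    return results
```

In n8n terms, the outer `for` loop plays the role of the Loop Over Items node and the `time.sleep` plays the role of the Wait node.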
Thanks a lot @Jon for the quick help!
I see! Thank you for clarifying that the best option would be a loop.
I used the HTTP Request node’s built-in batching option, calling the OpenAI API directly instead of using the OpenAI node. It seems to give the same result as the loop while simplifying the workflow.
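Since the error in question is a rate-limit response, a common complement to batching is retrying with exponential backoff when a request still gets rejected. The sketch below is a generic illustration of that pattern, not anything built into the n8n nodes; `RateLimitError` here is a stand-in exception defined locally, not a specific library class.

```python
import time

class RateLimitError(Exception):
    """Stand-in for whatever error your client raises on HTTP 429."""

def with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on RateLimitError with a delay that
    doubles after each failed attempt (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))
```

Batching keeps you under the limit most of the time; backoff absorbs the occasional request that still hits it.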
That would also do the job nicely.
Thank you so much for your answers and support, very helpful.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.