I am new to n8n. I have to say it is wonderful software. I use the cloud version for now, but I have run into a problem:
One HTTP node is being blocked by the OpenAI API with a 429 response (Too Many Requests).
The thing is that I have multiple HTTP nodes pointing to the OpenAI API in the same workflow, and all seem to be working fine. But one of them is not! And the node is correctly configured; it was working fine until suddenly it stopped.
I checked on OpenAI's side, and I am not even reaching 1% of the limits for my tier.
I don't quite understand how this can happen inside one workflow, and always on the same node.
I only understand that IPs on cloud instances are shared, and that a 429 response can happen in a shared environment.
I have 2 questions:
Is there a way to fix this? I remember reading somewhere that I could create a proxy tunnel; I am not sure if that is the solution, and I would prefer to avoid it.
I would gladly pay for a dedicated IP for my cloud instance, to avoid all the problems that come with sharing the same IP. Is that possible, and how much would it cost?
A 429 error basically means you have been rate limited by OpenAI on your account rather than by IP. There are multiple limits it could be, such as requests per minute or tokens per minute, and they depend on the model: with GPT-4 on Tier 1 you can make 500 requests a minute but only use 10,000 tokens, and with the vision preview it is 80 requests per minute.
One thing you could try is setting the retry options on the node to see if that helps. If you set the HTTP Request node to output the full response, you may also see more information about which limit it thinks you are hitting.
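If it helps, outside of n8n you can also read the rate-limit headers that OpenAI sends back with every response (including a 429). A minimal Python sketch, assuming the `requests` library and your key in an `OPENAI_API_KEY` environment variable:

```python
import os
import requests

# Minimal sketch: make one chat completion call and print OpenAI's
# rate-limit headers to see which limit a 429 refers to.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4", "messages": [{"role": "user", "content": "ping"}]},
)

print("status:", resp.status_code)
for name in (
    "x-ratelimit-limit-requests",
    "x-ratelimit-remaining-requests",
    "x-ratelimit-limit-tokens",
    "x-ratelimit-remaining-tokens",
    "retry-after",            # may be present on a 429
):
    print(name, "=", resp.headers.get(name))
```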
I have had this problem since yesterday, the 25th. According to OpenAI's statistics, I made 76 requests to the API yesterday.
My limits on this account for gpt-4 are:
tokens per minute: 40,000
requests per minute: 5,000
I am nowhere near the limits, and I have a 5-second Wait node set up to avoid hitting the limits because of an infinite-loop error.
I can't see any clue of what is really wrong. And my other HTTP nodes are working perfectly fine with the OpenAI API, so I am not really blocked. IT IS ONLY THAT ONE NODE!!
How many items are you sending at once? I remember that when I started using n8n, this took me a while to understand: if you have 50 items going into a node, they are all sent at once, and that can trigger a 429. Adding a Loop Over Items node with a Wait node inside the loop may help.
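The logic is roughly the same as this plain-Python sketch, where `call_openai()` is just a stand-in for the HTTP Request node:

```python
import time

def call_openai(item):
    """Stand-in for one HTTP Request node call to the OpenAI API."""
    ...

items = range(50)   # hypothetical batch of 50 incoming items

# Instead of firing all 50 requests at once, send them one at a time
# and pause between calls - the same idea as Loop Over Items + Wait.
for item in items:
    call_openai(item)
    time.sleep(5)   # 5-second pause, like a Wait node inside the loop
```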
Yes, I understand, and thank you for the suggestion, but I am only sending one item. If I were sending more, I would see a small select box on top of the Output panel saying something like RUN 1, 2, 3…
That is not the case.
I would gladly share the workflow, but my first node contains my OpenAI key and some other keys, and if I remove it, nothing will work. Maybe I can send the workflow privately to you, @bartv?
Anyway, I have just tried to run the faulty node again and now the error is 401 Unauthorized: "Missing bearer or basic authentication in header" (the error seems obvious, BUT it is not: I should be authorized, it is just some kind of bug).
All the other HTTP Nodes connecting to other OpenAI API endpoints work fine.
I mean, I do consider myself a "not-so-bright person", but this is completely insane. The Authorization header is there, the Organization header is also there, and they are correct. And most of all, the rest of the HTTP nodes connecting to other endpoints work just fine.
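For reference, this is the shape of the two headers I am sending (the values below are placeholders, not my real keys):

```python
# Shape of the headers the OpenAI API expects; values are placeholders.
headers = {
    "Authorization": "Bearer sk-...",     # API key
    "OpenAI-Organization": "org-...",     # optional: which organization the usage counts against
    "Content-Type": "application/json",
}
```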
This must be something related to the OpenAI API itself. Just in case, I ran a local n8n Docker instance to test, and it gives me the same results, so I guess I will just wait for OpenAI's marvelous people to eventually realize that one endpoint has a problem.
OMG!!
I just solved it, and it is incredibly stupid!! This can't be happening!!
So the OpenAI endpoint is: https://api.openai.com/v1/threads/{{ OpenAIthreadID }}/runs
but I was using: https://api.openai.com/v1/threads/{{ OpenAIthreadID }}/runs/
That last "/" was making the request fail!!
I just can't believe it. I lost 20 hours reading the entire internet and testing everything, and it was a simple "/" at the end of the URL.
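In case anyone else hits the same thing, here is a minimal Python sketch of the difference (the thread ID, assistant ID and the OpenAI-Beta value are placeholders; adjust them to your own setup):

```python
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]
thread_id = "thread_abc123"        # placeholder thread ID

# Correct endpoint: NO trailing slash.
url_ok = f"https://api.openai.com/v1/threads/{thread_id}/runs"
# This is what I had - the trailing "/" is what caused the 429/401 errors above.
url_bad = f"https://api.openai.com/v1/threads/{thread_id}/runs/"

resp = requests.post(
    url_ok,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "OpenAI-Beta": "assistants=v2",    # Assistants API beta header; use the version your account expects
        "Content-Type": "application/json",
    },
    json={"assistant_id": "asst_abc123"},  # placeholder assistant ID
)
print(resp.status_code, resp.json())
```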