Please help with IP blocked problem (429 too many requests)

I am new to n8n. I have to say it is wonderful software. I use the cloud version for now, but I have run into a problem:
One HTTP node is being blocked by the OpenAI API with a 429 response (Too Many Requests).

The thing is that I have multiple HTTP nodes pointing to the OpenAI API in the same workflow, and all seem to be working fine. But one of them is not! The node is correctly configured and was working fine, until suddenly it stopped.

I checked on the OpenAI side, and I am not even reaching 1% of the limits for my tier.

I don't quite understand how that can happen inside one workflow, always in the same node.
I only understand that IPs on cloud instances are shared, and that a 429 response can happen in a shared environment.

I have 2 questions:

  1. Is there a way to fix this? I remember reading somewhere that I could create a proxy tunnel; I am not sure that is the solution, and I would prefer to avoid it.
  2. I would gladly pay for a separate IP for my cloud instance, to avoid all the problems related to sharing the same IP. Is that possible, and how much would it cost?

Hey @Cristian_DF,

Welcome to the community :cake:

A 429 error basically means you have been rate limited by OpenAI on your account rather than by IP. There are multiple limits it could be, like requests per minute or tokens per minute, and they depend on the model: if you were using GPT-4 on Tier 1, you can make 500 requests a minute but only use 10,000 tokens; if it was the vision preview, that is 80 requests per minute.

One thing you could try is setting the retry options on the node to see if that helps. If you set the HTTP Request node to output the full response, you may also see more information on which limit it thinks you are hitting.
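For anyone building this outside the node's built-in retry setting, the usual pattern for handling 429s is retry with exponential backoff. A minimal Python sketch (the `call_api` callable here is a hypothetical stand-in for whatever makes your HTTP request, not an n8n or OpenAI API):

```python
import time

def with_retry(call_api, max_retries=3, base_delay=1.0):
    """Retry a request on HTTP 429, doubling the wait before each retry."""
    for attempt in range(max_retries + 1):
        status, body = call_api()  # expected to return (status_code, body)
        if status != 429:
            return status, body
        if attempt < max_retries:
            # Exponential backoff: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```

The n8n retry option does essentially this for you; the sketch just shows why spacing out retries helps when you hit a per-minute limit.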

I have had this problem since yesterday, the 25th. According to OpenAI statistics, I made 76 requests to the API yesterday.
My limits on this account for gpt-4 are:
tokens per minute: 40,000
requests per minute: 5,000

I am nowhere near the limits, and I have a 5-second Wait node set up to avoid reaching the limits in case of an infinite loop error.

The Response HEADERS are:

date: Fri, 26 Jan 2024 08:36:40 GMT

set-cookie: __cf_bm=RE3dmNnR.MAvSihrp0VYycxylY60btndXy4w0SzdDGz8-1706258200-1-AdmPyibi2UequysXKpJvARrjW5jpTVtPf3BzMzjXnfLdDIUEAEKICYyctllrcs6A5xWzQ1QecEwbRIwl64xGADUFY=; path=/; expires=Fri, 26-Jan-24 09:06:40 GMT; HttpOnly; Secure; SameSite=None

set-cookie: _cfuvid=xqjtA_R2N5ByMj_ogSRTYKEWQLbrEyC7PFsSG01RFhM-1706258200867-0-604800000; path=/; HttpOnly; Secure; SameSite=None

alt-svc: h3=":443"; ma=86400

statusCode: 429

statusMessage: "Too Many Requests"

body: [empty object]

I can't see any clue about what is really wrong. And my other HTTP nodes are working perfectly fine with the OpenAI API, so I am not really blocked, IT IS ONLY THAT ONE NODE!!

How many items are you sending at once? I remember that when I started using n8n, this took me a while to understand: if you have 50 items going into a node, they are all sent at once, and that can trigger a 429. Adding a Loop Over Items node with a Wait node inside the loop may help.
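The Loop Over Items + Wait pattern amounts to processing one item per iteration with a pause in between. A rough Python equivalent of that flow (the `send_item` callable is a hypothetical placeholder for the HTTP request made inside the loop):

```python
import time

def process_items(items, send_item, pause_seconds=5.0):
    """Send items one at a time, pausing between requests to stay under per-minute limits."""
    results = []
    for item in items:
        results.append(send_item(item))
        time.sleep(pause_seconds)  # mirrors the Wait node between loop iterations
    return results
```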

Yes, I understand you, and thank you for the suggestion, but I am sending only one item. If multiple items were going in, there would be a small select box on top of the Output panel saying something like Run 1, 2, 3…
That is not the case here.

Hmmm, perhaps you can share your workflow here so we get a better idea of what you’re doing?

I would gladly share the workflow, but my first node has my OpenAI keys and some other keys, and if I remove it, nothing will work. Maybe I can send the workflow privately to you, @bartv?

Anyway, I have just tried to run the faulty node again, and now the error is 401 Unauthorized: "Missing bearer or basic authentication in header" (the error seems obvious, BUT it is not; I should be authorized, it is just some kind of bug).

All the other HTTP Nodes connecting to other OpenAI API endpoints work fine.

I mean, I do consider myself a 'not-so-bright' person, but this is completely insane. The Authorization header is there, the Organization header is also there, and they are correct. And most of all, the rest of the HTTP nodes connecting to other endpoints work just fine.

This must be something related to the OpenAI API itself. Just in case, I ran a local n8n Docker instance to test, and it is giving me the same results, so I guess I will just wait for the marvelous OpenAI people to eventually realize that one endpoint has a problem.

Just solved it, and it is incredibly stupid!! This can't be happening!!

So the OpenAI endpoint is: {{ OpenAIthreadID }}/runs

but I was using: {{ OpenAIthreadID }}/runs/

That last "/" was making the request fail!!
I just can't believe it. I lost 20 hours, reading the entire internet and testing everything, and it was a simple "/" at the end of the URL.
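For anyone hitting the same thing: many servers route `/runs` and `/runs/` as different paths, so a stray trailing slash can produce confusing errors (here, 429 and later 401) rather than an obvious "not found". A defensive normalization before building the request, as a plain illustrative Python sketch:

```python
def normalize_endpoint(url: str) -> str:
    """Strip trailing slashes so '.../runs/' and '.../runs' hit the same route."""
    return url.rstrip("/")
```

In n8n you could apply the same idea directly in the URL expression with something like `.replace(/\/+$/, '')` on the built string, but double-checking the endpoint against the API reference is the simplest fix.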


Hey @Cristian_DF,

It happens sometimes, glad you have found the issue.