OpenAI Response Headers

Describe the problem/error/question

When using the OpenAI node (specifically the Analyze Image operation), it's difficult to predict the API rate limit using the Wait node alone. The response from OpenAI provides the remaining tokens and the wait time until the limit resets, but I cannot see where these headers are exposed. If they were, I could simply add a Wait node that dynamically waits until the reset time. Is there a way, besides using the HTTP Request node, to get these headers?
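To make the idea concrete: OpenAI's rate-limit headers (e.g. `x-ratelimit-remaining-tokens`, `x-ratelimit-reset-tokens`) report the reset time as a compact duration string such as "6m0s" or "250ms". If those headers were exposed, computing the dynamic wait would be simple. A sketch (the function name is mine, not part of any API):

```javascript
// Hypothetical sketch: parse an OpenAI x-ratelimit-reset-* header value
// (a duration string such as "1s", "6m0s", or "250ms") into milliseconds,
// so a Wait node could pause for exactly that long.
function parseResetDuration(value) {
  const pattern = /(\d+(?:\.\d+)?)(ms|s|m|h)/g;
  const unitMs = { ms: 1, s: 1000, m: 60000, h: 3600000 };
  let total = 0;
  let match;
  while ((match = pattern.exec(value)) !== null) {
    total += parseFloat(match[1]) * unitMs[match[2]];
  }
  return total;
}

console.log(parseResetDuration("6m0s")); // 360000
console.log(parseResetDuration("250ms")); // 250
```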

What is the error message (if any)?

I get a rate-limiting error.

Information on your n8n setup

  • n8n version: 1.94.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS 15.4.1

Hi @nvee

You can get some of the information back from the n8n API. I'm not sure if it has what you are looking for, but here's a flow that I use to see execution info.

Thanks for the reply. When I try running this scenario I get the following error:

Problem in node 'n8n'
The resource you are requesting could not be found

Any ideas?


@nvee

Did you create an API key?

Yes, I didn’t realize that the base URL required the /api/v1 part as well. Now it’s working, but I don’t see any information pertaining to the OpenAI headers being returned. This is all I’m seeing:

{
  "id": "1234",
  "finished": false,
  "mode": "manual",
  "retryOf": null,
  "retrySuccessId": null,
  "status": "running",
  "createdAt": "2025-06-03T14:36:50.244Z",
  "startedAt": "2025-06-03T14:36:50.268Z",
  "stoppedAt": null,
  "deletedAt": null,
  "workflowId": "abc",
  "waitTill": null
}
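For reference, the call that produced a response like the JSON above can be sketched as follows (a sketch only: the `/api/v1/executions/{id}` endpoint, the `includeData` flag, and the `X-N8N-API-KEY` header follow the n8n public API docs, but double-check against your version; the function names are mine):

```javascript
// Sketch of fetching one execution from the n8n public REST API.
// Note the /api/v1 prefix on the path -- omitting it yields the
// "resource ... could not be found" error seen earlier in this thread.
function executionUrl(baseUrl, executionId) {
  // includeData=true asks for the full execution details
  return `${baseUrl}/api/v1/executions/${executionId}?includeData=true`;
}

async function getExecution(baseUrl, apiKey, executionId) {
  const res = await fetch(executionUrl(baseUrl, executionId), {
    headers: { "X-N8N-API-KEY": apiKey }, // key created under Settings > n8n API
  });
  if (!res.ok) throw new Error(`n8n API error: ${res.status}`);
  return res.json();
}
```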

@nvee

Try adding "Include Execution Details".

That did give me some more info, but it doesn’t seem to return the usage stats. I know these are in the headers sent back by OpenAI. Is the only alternative to use the HTTP Request node?


@nvee

Yeah, if it’s not in the API, I’m not sure it’s stored anywhere. The HTTP Request node might be the best option for now.
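A rough sketch of that workaround: call the OpenAI API through an HTTP Request node with "Include Response Headers and Status" enabled, then use a Code node to turn the rate-limit headers into a wait interval for a downstream Wait node. The function name and the `minTokensNeeded` threshold below are made up for illustration; the header names are OpenAI's documented rate-limit headers:

```javascript
// Hypothetical Code-node logic: given the response headers from an
// HTTP Request node, return how many seconds a downstream Wait node
// should pause before making the next OpenAI call.
function waitSecondsFromHeaders(headers, minTokensNeeded) {
  const remaining = parseInt(headers["x-ratelimit-remaining-tokens"], 10);
  if (remaining >= minTokensNeeded) return 0; // enough budget left, no wait
  // x-ratelimit-reset-tokens is a duration string such as "6m30s" or "59s"
  // (sub-second values like "250ms" are not handled in this sketch)
  const m = headers["x-ratelimit-reset-tokens"].match(/(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?/);
  return parseInt(m[1] || "0", 10) * 60 + parseFloat(m[2] || "0");
}

// Example: 50 tokens left and a 6m30s reset window -> wait 390 seconds
console.log(waitSecondsFromHeaders(
  { "x-ratelimit-remaining-tokens": "50", "x-ratelimit-reset-tokens": "6m30s" },
  1000
)); // 390
```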

Just wanted to bump this to see if anyone from the n8n team has a solution. Trying to guess my current rate-limit usage is not a great approach. I know I can use the HTTP Request node instead, but it seems like an easy fix to include these headers in the non-simplified response from the OpenAI nodes.