When using the OpenAI node (specifically the Analyze Image operation), it's difficult to predict the API rate limit using the Wait node alone. OpenAI's response headers report the remaining tokens and the time until the limit resets, but I cannot see where these headers are available. If they were exposed, I could simply add a Wait node that dynamically waited until the reset time. Is there a way, besides using the HTTP Request node, to get these headers?
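For context, here is a rough sketch (not an n8n API, just plain JavaScript you could drop into a Code node) of what I mean by "dynamically waited": OpenAI documents `x-ratelimit-reset-tokens` as a duration string such as `6m0s`, `1s`, or `250ms`, so the header would need to be parsed into milliseconds before feeding a Wait step.

```javascript
// Sketch only: convert an OpenAI x-ratelimit-reset-tokens duration
// string (e.g. "6m0s", "1s", "250ms") into milliseconds. The header
// format is per OpenAI's rate-limit docs; the function name is mine.
function parseResetDuration(value) {
  // "ms" is listed before "m" and "s" so it matches first.
  const units = { h: 3600000, m: 60000, s: 1000, ms: 1 };
  let totalMs = 0;
  for (const [, num, unit] of value.matchAll(/(\d+(?:\.\d+)?)(ms|h|m|s)/g)) {
    totalMs += parseFloat(num) * units[unit];
  }
  return Math.ceil(totalMs);
}
```

With that, a Wait node could be set to `parseResetDuration(resetHeader)` milliseconds whenever the remaining-token count gets low.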
What is the error message (if any)?
I get a rate-limiting error
Information on your n8n setup
n8n version: 1.94.1
Database (default: SQLite): SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main): main
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
You can get some of the information back from the n8n API. Not sure if it has what you are looking for, but here's a flow that I use to see execution info.
Yes, I didn't realize that the base URL required the /api/v1 part in it as well. Now it's working, but I don't see any information pertaining to the OpenAI headers being returned. This is all I'm seeing:
That did give me some more info, but it doesn't seem to return the usage stats. I know these are in the headers sent back from OpenAI. Is the only alternative to use the HTTP Request node?
Just wanted to bump this to see if anyone from the n8n team has a solution. Trying to guess my current rate-limit usage is not a great approach. I know I can use the HTTP Request node instead, but it seems like exposing these headers would be an easy addition to the non-simplified response from the OpenAI nodes.
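In case it helps anyone else stuck on this: the HTTP Request node workaround does work if you enable "Include Response Headers and Status" and follow it with a Code node. A sketch of that Code-node logic is below. The header names are OpenAI's documented rate-limit headers; the shape of the incoming headers object and the 1000-token threshold are my assumptions, not anything n8n prescribes.

```javascript
// Sketch: given the response headers from an HTTP Request node calling
// the OpenAI API (with full response enabled), decide how long a
// downstream Wait node should pause. Threshold and function name are
// arbitrary choices for illustration.
function rateLimitInfo(headers) {
  const remaining = parseInt(headers["x-ratelimit-remaining-tokens"], 10);
  // x-ratelimit-reset-tokens is a duration like "1m30s"; convert to ms.
  const units = { h: 3600000, m: 60000, s: 1000, ms: 1 };
  let resetMs = 0;
  for (const [, n, u] of headers["x-ratelimit-reset-tokens"]
      .matchAll(/(\d+(?:\.\d+)?)(ms|h|m|s)/g)) {
    resetMs += parseFloat(n) * units[u];
  }
  // Only wait when the token budget is nearly exhausted.
  return { remaining, waitMs: remaining < 1000 ? Math.ceil(resetMs) : 0 };
}
```

The returned `waitMs` can then drive a Wait node via an expression, so the pause only happens when the budget is actually low.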