At the moment we don’t have anything built in to handle rate limits nicely, so you would need to split your data into smaller chunks using the Loop Over Items node and add a Wait node at the end of each loop for a second (or whatever the limit requires). Something like the below should work if you are on the Tier 1 limit.
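For anyone wanting to see the idea outside of n8n, here is a minimal Python sketch of the same chunk-and-wait pattern. The `batch_size`, `delay_s`, and `handler` names are illustrative, not part of any n8n or OpenAI API:

```python
import time

def process_in_batches(items, batch_size, delay_s, handler):
    """Process items in fixed-size chunks, sleeping between chunks
    to stay under a per-interval rate limit (hypothetical helper)."""
    results = []
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        # handler stands in for the API call made on each item
        results.extend(handler(item) for item in batch)
        if i + batch_size < len(items):
            time.sleep(delay_s)  # e.g. 1 second between batches
    return results
```

In n8n terms, the `for` loop plays the role of the Loop Over Items node and the `time.sleep` call plays the role of the Wait node.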
I see! Thank you for clarifying that the best option would be a loop.
I used the HTTP Request node’s built-in batch-limit option, calling the OpenAI API directly instead of using the OpenAI node. It seems to achieve the same result as the loop while simplifying the workflow.