Please add support for the new OpenAI features [done]

Is there any update on this? I’m still using n8n cloud v1.17.1.
Wondering if I should take the risk of updating to the latest stable release in order to access the new OpenAI features.
My most pressing need is to send properly formatted requests to the gpt-4-vision-preview model, which was released more than 2 months ago.

Hey @tomtom,

Our OpenAI node overhaul is in progress, and the changes can be found below if you are interested in the code.

Hey @Jon. Thanks for your help! Definitely appreciate the swift reply :slight_smile:
I’m using n8n cloud, so I’m not interested in the code, but I’m happy to see some progress being made.

Is it fair to assume that sending requests to gpt-4-vision-preview might be available in the next couple of weeks, given that it’s being tackled as we speak and that you release stable versions several times a month?

It’s not a life-or-death situation, but knowing will help me prioritize my workload. I’d rather wait a bit longer and use native n8n features than scramble to make something work-ish right now, given the scope and importance of this project for me.

Hey @tomtom,

I think within the next couple of weeks would be fair; it is just waiting for review at the moment.

Thanks a lot @Jon.

I’m manually prototyping image-based prompts in the ChatGPT UI before switching to the ChatGPT API.
I notice that some prompts are resource-intensive on OpenAI’s end, and I need to wait more than 3 minutes to get the full (streamed) response. I won’t need the streaming capability when I use the gpt-4-vision-preview model through the API on n8n cloud, but I’m worried about the timeout limit. If I recall correctly, a workflow is automatically stopped if it runs for longer than 5 minutes.

The thing is, I need to run this type of workflow hundreds of times…
Would creating a workflow that runs this workflow X times be a solution (as long as each inner execution stays under 5 minutes)?
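
For what it’s worth, the outer loop I have in mind looks roughly like this. This is a hypothetical sketch, not an actual n8n configuration; the webhook URL and payload shape are made up to illustrate the idea of one inner execution per image:

```typescript
// Hypothetical sketch: the parent loops over all images and triggers one
// inner (sub-workflow) execution per image, so each run stays well under
// the per-execution time limit. The webhook URL below is a placeholder.
const SUB_WORKFLOW_WEBHOOK_URL = "https://example.com/webhook/describe-image";

async function runBatch(imageUrls: string[]): Promise<void> {
  for (const url of imageUrls) {
    // Sequential calls keep the pacing slow and isolate failures per image.
    const res = await fetch(SUB_WORKFLOW_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ imageUrl: url }),
    });
    if (!res.ok) {
      console.warn(`Inner run failed for ${url}: ${res.status}`);
    }
  }
}
```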

I ended up using the HTTP Request node. So far, so good.
FYI, gpt-4-vision-preview sometimes answers “Sorry, I cannot help you with that” (or similar) on the first pass for a small percentage of the images I asked it to describe (maybe 3%). It’s not related to the TPD, RPD, or TPM API limits, because I paced it to run very slowly. A second pass looks like it works just fine.
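
In case it helps anyone, here is roughly the request and retry logic I ended up with. This is a minimal sketch rather than the exact HTTP Request node configuration; the payload follows the OpenAI chat completions format for gpt-4-vision-preview, and the prompt text, max_tokens value, and refusal check are only illustrative:

```typescript
// Minimal sketch of a gpt-4-vision-preview call (chat completions format).
// The API key is assumed to be available as an environment variable.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function describeImage(imageUrl: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 500, // the default is very low, so set this explicitly
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this image in detail." },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}

// Crude second pass for the occasional refusal on the first attempt.
async function describeWithRetry(imageUrl: string): Promise<string> {
  const first = await describeImage(imageUrl);
  if (/sorry, i (cannot|can't) help/i.test(first)) {
    return describeImage(imageUrl);
  }
  return first;
}
```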

I think all of this has now been added as part of the 1.29.0 release, which contains the overhaul of the OpenAI node.

So many overhauls, our tutorials go out of date too fast :smiling_face_with_tear:

Just kidding, great to see the updates!

We released a new version of the OpenAI node in 1.29.0, so hopefully that covers most of the requests here.
