I updated my self-hosted n8n today to 0.226.2 via Docker and came back to using the ChatGPT node after a while.
Prior to the update, I had no issues with larger prompts in Chat Completion using GPT-4, but today I am only getting timeouts, or the node works on the completion for 10-15 minutes (before I stop it manually) when it should take no more than 5 minutes tops. GPT-3.5 turbo works fine.
I have noticed that the API is billing me several times what I should be billed for each such completion, as if the node were looping the prompt without producing output.
OpenAI claims it is fully operational and has had no issues reported today.
Have you experienced anything similar today?
Hi @Rafix - sorry to hear you’re having trouble with this!
Thanks for sharing details on your setup. Would you have an example workflow you could share here for testing purposes? That would really help us troubleshoot what might be the difference between ChatGPT4 and 3.5 turbo.
I don’t think the problem is with the workflow, as the same issue persists when I separate the OpenAI node and run it without it being connected to the other nodes.
Here is a screenshot of the node with the selected options and the prompt in it.
GPT-3.5 turbo works fine; GPT-4 pops up an error.
What I noticed, though, is that the API records the call and bills me as if the prompt had produced a completion; in fact, it charges me at least twice as much, as if it had to run the prompt more than once, I guess.
Is it really taking 5 minutes for the workflow to stop? It would be very handy if you could provide the actual workflow JSON or prompt text so we can give it a go. Do you also know what version of n8n you were using before the update?
Sure. Here’s the node that causes problems.
I can’t remember what version I was on before the update, but I remember being two or three versions behind. Now I am on 0.226.2, so it might have been a version around 0.223.
I have just given it a go on my home instance and it appears to work ok for me.
Is it working for you today, or are you still seeing the same issue? It did take a long time for the reply to come back, which was unexpected, but the same request from Postman also had a slow response.
Thank you very much @Jon for testing it.
Unfortunately, the same issue persists. I made a new workflow with this node only and created new credentials with a different API.
Now, I am thinking it might have something to do with the fact that I only used GPT-4 with the free credit OpenAI granted me for testing, and maybe that access has been revoked for some reason.
It doesn’t make sense, though.
I have been using it with a paid account, so it could be that trial accounts are still limited. Have you tried making the same request with something like Postman?
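For reference, the same call can also be reproduced with a short script instead of Postman. This is a minimal sketch, assuming the standard Chat Completions endpoint; the prompt, model name, and the 5-minute client timeout are placeholders to adjust to your setup:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> urllib.request.Request:
    """Build the same POST request the OpenAI node would send."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Say hello in five words.")

# Only send the request when a key is configured, and time it the way
# Postman would, so you can compare against the node's behaviour.
if os.environ.get("OPENAI_API_KEY"):
    import time
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=300) as resp:  # 5-minute timeout
        body = json.load(resp)
    print(f"finished in {time.monotonic() - start:.1f}s")
    print(body["choices"][0]["message"]["content"])
```

If the raw request also hangs or takes several minutes, the slowness is on the API side rather than in n8n.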
I haven’t tried Postman yet, but I will look into it.
I thought the error might be because timeout limits were lifted during the trial period and are now back on, but I’m not so sure anymore since you were able to generate the result on a paid account.
Just to be sure, you used an account that is no longer using the trial free credit, right?
You got it, my trial credit ran out so I am no longer playing around with their money
That must be the reason for the error. The timeout limit is set to 5 minutes now, but it wasn’t before (when I was playing with their money ;)), which is why the completions are not being generated.
Yours must have taken a bit under 5 minutes then, and that checks out, since your completion was 698 tokens and not roughly twice that, as mine usually are.
Unless the previous version of n8n had somehow been omitting the timeout limits?!
I don’t think we control the timeout limits, but one way to test would be to install an older version of n8n and see if it works.
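If you’re on Docker, rolling back is just a matter of pinning the image tag. A sketch, assuming the standard `n8nio/n8n` image; the tag and the data volume path are examples, so match them to your existing setup before running:

```shell
# Stop the current container first, then pull and run the older tag.
docker pull n8nio/n8n:0.225.2
docker run -it --rm \
  --name n8n-rollback-test \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n:0.225.2
```

Mounting the same data volume means your existing workflows and credentials are available in the older version for the comparison.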
Thanks for your help @Jon.
I have a similar problem. I made 2 different workflows using the GPT-4 API. One of them worked after a few minutes; the other never worked. I started the one that had worked once again, and it also ran for a long time without answering. I checked the billing, and it was continuously billing me more for the API without returning a result. I tested for about 10 more minutes with no answer. I used a different API key for each of the 2 workflows. My n8n version is 0.225.2, self-hosted on a Linux server.
Is there a chance that something has changed in your subscription, like, for example, your trial period has ended?
I immediately assumed that there was something wrong with the updated ChatGPT node after I updated to the most recent version, but it might have been the fact that my trial period ended then as well.
The outputs I generated with my workflow almost always took longer than the current timeout limit, but I never got a timeout error while I was on the trial period using the credits I had been given.
Now, I have to split the prompt into multiple nodes to avoid the timeout.
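In case anyone hits the same wall, here is a rough sketch of the kind of splitting I mean, in Python: paragraph-based chunking, so each chunk can go to its own OpenAI node and finish within the timeout. The 4000-character limit is just a guess I tune by hand, not anything official:

```python
def split_prompt(text: str, max_chars: int = 4000) -> list[str]:
    """Split a long prompt into chunks at paragraph boundaries so each
    piece can be sent to a separate OpenAI node and stay under the
    ~5-minute completion timeout. max_chars is an assumed budget."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # +2 accounts for the paragraph separator we re-insert below
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Splitting at paragraph boundaries (rather than mid-sentence) keeps each chunk coherent enough for the model to complete on its own.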
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.