OpenAI is returning limited character-length texts
I don’t understand why the texts from ChatGPT come back with only this limited number of characters, since in every tutorial on this integration I’ve seen, this problem just doesn’t happen.
This isn’t one of the chat models; you are using the older text-davinci-003 model. ChatGPT would be GPT-3.5 Turbo. Check the OpenAI documentation to use the right model.
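For reference, the two model families are called through different endpoints. Here is a minimal sketch using the pre-v1 openai Python library from the time of this thread; the prompt text and the OPENAI_API_KEY environment variable are placeholder assumptions:

```python
import os
import openai  # pre-v1 openai-python, contemporary with this thread

openai.api_key = os.environ["OPENAI_API_KEY"]

# Older completion-style models such as text-davinci-003 use the
# Completions endpoint and take a plain prompt string.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a short product description.",
)
print(completion.choices[0].text)

# ChatGPT-family models (gpt-3.5-turbo) use the Chat Completions
# endpoint and take a list of role/content messages instead.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short product description."}],
)
print(chat.choices[0].message["content"])
```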
Hey @Andre_Gustavo,
Under the options you can set the Maximum Number of Tokens. The API default is 16, so it might be worth bumping it up a bit, to say 1024 or 2048.
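Assuming the node’s Maximum Number of Tokens option maps straight onto the API’s max_tokens parameter, you can reproduce the same truncation outside n8n. A sketch with the pre-v1 openai Python library (the prompt text is a placeholder):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Without max_tokens the Completions API defaults to 16 tokens, so the
# reply is cut off early and finish_reason reports "length".
short = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what an API token is.",
)
print(short.choices[0].finish_reason)  # typically "length"

# Raising max_tokens gives the model room to finish its answer.
full = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what an API token is.",
    max_tokens=1024,
)
print(full.choices[0].finish_reason)  # typically "stop"
```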
This gave me an error when I used a GPT version other than text-davinci-003. It only worked for me once I changed the maximum number of tokens, but since that GPT version is outdated, I think the text generation will be out of date too.
@Jon,
As far as I can see, the tokens here only count towards the reply: when I set 2 tokens, I get two words as a reply, so it doesn’t count the prompt. But OpenAI’s pricing includes tokens for both input and output.
Do I understand correctly that the n8n OpenAI node doesn’t take the tokens required for the prompt into account?
Hey @artildo,
We use the API as it is; we don’t have any control over what is and isn’t billed for. I think the token option is only for the response, so you would still be billed for the data being sent. The only difference is that you are controlling how many tokens the API can reply with.
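This matches what the API itself reports: every response carries a usage block covering both sides of the bill. A quick sketch with the pre-v1 openai Python library (the prompt and key handling are placeholder assumptions):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a haiku about APIs.",
    max_tokens=64,  # caps the completion only, not the prompt
)

# usage shows both the prompt tokens you sent in and the completion
# tokens the model generated; you pay for both.
print(response["usage"])
# e.g. {"prompt_tokens": 7, "completion_tokens": 25, "total_tokens": 32}
```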
@Jon, thank you. I got it. So in n8n the token control is only for the output.
Hey @artildo,
It isn’t really an ‘in n8n’ thing; it is an OpenAI API definition:
max_tokens (integer, optional, defaults to 16)
The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
So it looks like OpenAI say the total token count of your prompt plus max_tokens can’t exceed what the model supports; other than that, yes, the tokens option is for the generation.
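In practice that means the largest completion you can ask for is the model’s context length minus whatever the prompt already uses. A sketch using OpenAI’s tiktoken tokenizer library (the 4097-token figure is text-davinci-003’s documented context length; the prompt is a placeholder):

```python
import tiktoken  # OpenAI's tokenizer library

CONTEXT_LENGTH = 4097  # text-davinci-003's context window

prompt = "Summarise the following article: ..."

# Count how many tokens the prompt itself consumes.
enc = tiktoken.encoding_for_model("text-davinci-003")
prompt_tokens = len(enc.encode(prompt))

# Prompt and completion share the same context window, so the largest
# max_tokens you can request is whatever the prompt leaves over.
max_completion = CONTEXT_LENGTH - prompt_tokens
print(f"prompt: {prompt_tokens} tokens, room left: {max_completion}")
```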