Would love to have this new operation and model available inside a node.
My use case:
This would make more sophisticated prompts and AI automation tools possible. Instead of having the AI answer only a single prompt, we could use the chat API endpoint to provide more context and therefore get smarter answers.
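To illustrate the difference: the chat endpoint takes an array of messages rather than a single prompt string, so earlier turns and a system prompt can carry extra context. A minimal sketch (the model name, system prompt, and helper function here are assumptions for illustration, not anything from the node itself):

```javascript
// Build a chat-completions request body that carries conversation
// history forward, instead of a single text-completion prompt.
function buildChatPayload(history, userMessage) {
  return {
    model: "gpt-3.5-turbo", // assumed model name for the example
    messages: [
      // A system message sets overall behavior for the whole chat.
      { role: "system", content: "You are a helpful workflow assistant." },
      // Prior turns are replayed so the model keeps the context.
      ...history,
      // The new question goes last.
      { role: "user", content: userMessage },
    ],
  };
}

const payload = buildChatPayload(
  [
    { role: "user", content: "What is n8n?" },
    { role: "assistant", content: "A workflow automation tool." },
  ],
  "Can it call the OpenAI chat endpoint?"
);
console.log(payload.messages.length); // 4
```

With a text-completion endpoint you would have to flatten all of that context into one prompt string yourself; the chat endpoint makes the structure explicit.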
Is that for Text Complete or Chat Complete? GPT should only be available for Chat Complete, but the node will now show the options that your API key has access to.
You can find more information on what models can be available in the OpenAI docs here: OpenAI API
I'm adding this because I only figured it out late at night.
It seems you need to leave the screen after selecting Chat or Text and re-enter it to see the changes reflected (so you can use ChatGPT via Chat this way).
I suspect that alternative services require different authentication and return results in different formats, so the right way would be to create a separate node for each provider. If you want to use different ones, it's always possible to drop several nodes on the canvas and add some routing via a Switch node to pick the needed service.
Letting people choose their own AI endpoint would allow everyone to use the LLM they prefer (now that OpenAI doesn't offer the premium, it's even more important).
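For providers that expose an OpenAI-compatible API, a configurable endpoint could in principle be just a base-URL swap: the path, headers, and body keep the OpenAI shape. A hedged sketch (the base URL, key, and helper here are hypothetical; real providers may deviate from this shape, which is exactly the compatibility problem discussed below):

```javascript
// Construct (but don't send) a request to an OpenAI-compatible
// endpoint where only the base URL differs from the official API.
function buildRequest(baseUrl, apiKey, payload) {
  return {
    // Strip a trailing slash so we don't end up with "//v1/...".
    url: `${baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // same bearer-token scheme assumed
    },
    body: JSON.stringify(payload),
  };
}

const req = buildRequest("https://example-llm.local/", "sk-test", {
  model: "any-compatible-model",
  messages: [{ role: "user", content: "Hello" }],
});
console.log(req.url); // https://example-llm.local/v1/chat/completions
```

This only works when the provider faithfully mirrors the OpenAI request and response format; any divergence in auth or result shape would need provider-specific handling.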
I've seen that there is a community one, but it was not working when I tried it. (n8n-nodes-cheapai - npm)
The problem is that some of the proxies don't implement the API in exactly the same way, which causes issues. It would be better to make a dedicated node for such a service, so that it works without needing any service-specific changes elsewhere.