You can already use them with a Cloudflare account via the HTTP node:
curl -X POST \
  "https://api.cloudflare.com/client/v4/accounts/{account-id}/ai/run/@cf/mistral/mistral-7b-instruct-v0.1" \
  -H "Authorization: Bearer {api-token}" \
  -H "Content-Type: application/json" \
  -d '{ "prompt": "What is grouped query attention", "stream": true }'
API Response: { "response": "Grouped query attention is a technique used in natural language processing (NLP) and machine learning to improve the performance of models…" }
I'm also using Groq, right now through an HTTP node, but I would love to have it included as a Model. It should work the same as Mistral Cloud, just with a different baseUrl. Or, if you can provide the instructions to create it, I'm in.
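For anyone else using the HTTP node in the meantime, Groq exposes an OpenAI-compatible chat completions endpoint; here's a rough sketch (the model name `mixtral-8x7b-32768` is an assumption based on Groq's current lineup, so verify both it and the endpoint against Groq's docs):

```shell
# Hedged sketch of a Groq request from the HTTP node, assuming the
# OpenAI-compatible endpoint and a Mixtral model ID — check Groq's docs.
curl -X POST "https://api.groq.com/openai/v1/chat/completions" \
  -H "Authorization: Bearer {groq-api-key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mixtral-8x7b-32768",
    "messages": [
      { "role": "user", "content": "What is grouped query attention?" }
    ]
  }'
```

Because the request/response shape matches the OpenAI chat API, pointing an OpenAI-style model node at a different baseUrl should be all a native Groq integration needs.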
Any progress with this?
Right now Groq is the fastest and cheapest API service for LLMs, and they already support Gemma; with the recent GemmaCoder launch, maybe they will support that too.
I think creating a Model node for Groq would be quite easy for those who have created the others… Hope it is released soon.
Thank you.
Amazing project, best people.
That's super awesome; but if I'm not mistaken, Perplexity adds the little touch of having access to the internet, which we don't get by using Mistral directly.