I’m trying to build a RAG pipeline based on Yandex Disk and Yandex GPT.
Yandex has released an update that lets you work with its models through the OpenAI nodes.
The chat model itself works fine.
But when connecting it to Supabase for embeddings, an error occurs:
“Embeddings OpenAI2: Error in sub-node ‘Embeddings OpenAI2’
400 Base64 encoding format is not supported”
Or “400: Array input must contain exactly one string” if there is more than one chunk…
Help, friends, please!
What is the error message (if any)?
Embeddings OpenAI2: Error in sub-node ‘Embeddings OpenAI2’
400 Base64 encoding format is not supported
Please share your workflow
Share the output returned by the last node
Embeddings OpenAI2: Error in sub-node ‘Embeddings OpenAI2’
400 Base64 encoding format is not supported
This error most likely means that n8n tries to send a chunk to Yandex GPT for embedding with encoding_format set to base64 (or, more likely, with no value set at all, which defaults to base64), while Yandex expects it to be float:
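To make the mismatch concrete, here is a minimal sketch of the kind of request body the node would need to send for Yandex to accept it. This is illustrative only: the model name is a placeholder, and the exact fields Yandex’s OpenAI-compatible endpoint accepts should be checked against its docs.

```python
import json

def build_embedding_request(chunk: str, model: str = "text-search-doc") -> str:
    """Build an OpenAI-style embeddings request body for one chunk.

    The model name here is an assumption for illustration.
    """
    body = {
        "model": model,
        "input": [chunk],           # exactly one string per request,
                                    # matching the second 400 error
        "encoding_format": "float", # explicit float avoids the base64 400
    }
    return json.dumps(body)
```

The key point is that both 400 errors map to fields of this body: `encoding_format` must be `"float"`, and `input` must contain a single string.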
I don’t believe this can be done with the Embeddings OpenAI node. I think if you really need to use Yandex GPT, your best bet could be using Yandex’s REST API and semi-manually inserting the embeddings into Supabase.
Thanks for the prompt response, jabbson! As you can see in my workflow, I use the standard Supabase Vector node — only “special” node types can be attached to it (and intermediate nodes can’t be inserted because of the dotted lines). That’s why I used a ready-made OpenAI node. How do I set up what you’re describing?
You would set up an HTTP Request node to hit the REST API, followed by the regular Supabase node to insert a row for each embedding. Alternatively, you could create the row using the Supabase REST API; in that case you would end up with two HTTP Request nodes.
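The two-call flow above can be sketched as the request bodies the HTTP Request nodes would send. Everything named here is an assumption for illustration: the endpoint URLs, the model name, and the Supabase table (`documents`) with `content`/`embedding` columns (the usual pgvector setup).

```python
# Call 1: embed one chunk via an OpenAI-compatible endpoint (URL assumed).
YANDEX_EMBED_URL = "https://llm.api.cloud.yandex.net/v1/embeddings"

# Call 2: insert the row via Supabase's PostgREST API (project/table assumed).
SUPABASE_INSERT_URL = "https://<project>.supabase.co/rest/v1/documents"

def embedding_request(chunk: str) -> dict:
    """Body for the first HTTP Request node: one chunk -> one float vector."""
    return {
        "model": "text-search-doc",   # placeholder model name
        "input": [chunk],             # one string per request
        "encoding_format": "float",   # Yandex rejects base64
    }

def supabase_insert(chunk: str, vector: list) -> dict:
    """Body for the second HTTP Request node: insert chunk + vector."""
    return {
        "content": chunk,       # assumed text column
        "embedding": vector,    # assumed pgvector column
    }
```

In n8n you would loop over the chunks, wiring the vector from the first node’s response into the second node’s body, with the usual `apikey`/`Authorization` headers on the Supabase call.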