Google has deprecated its text-embedding-004 model (Gemini deprecations | Gemini API | Google AI for Developers), so I’ve had to switch to gemini-embedding-001. With the new model the number of dimensions has changed from 768 to 3072. When I encode data to save into my Supabase database the embedding uses 3072 dimensions, but when I query the database n8n defaults to 768 dimensions, which produces an error. There isn’t a way to configure the RETRIEVAL_QUERY search-embedding option in n8n, so I am stuck: the save embedding size is different from the query embedding size. Because of this discrepancy, there doesn’t seem to be any way to use Google’s embedding model with the Supabase Vector Store node in n8n.
What is the error message (if any)?
When using the gemini-embedding-001 model for inserting into a 768-dimension vector store:

`Error inserting: expected 768 dimensions, not 3072 400 Bad Request`

When using the gemini-embedding-001 model for searching a 3072-dimension vector store:

`Error searching for documents: 22000 different vector dimensions 768 and 3072 null`
Hi @Ollie_Harridge, welcome back to the n8n community! This happens because the n8n Vector Store search currently generates query embeddings with a fixed size of 768 dimensions, while the gemini-embedding-001 model produces 3072-dimension vectors, so the insert side and the search side disagree. The only workarounds are to use a 768-dimension model or to build a custom flow with an HTTP Request node outside the Vector Store node.
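For the HTTP Request workaround, the query embedding can be requested directly from the Gemini `embedContent` endpoint, which accepts an `outputDimensionality` field so the query vector can be sized to match whatever is stored in Supabase. A minimal Python sketch of the request body (the 3072 default and the comment about where the vector appears in the response reflect my reading of the Gemini API docs, not tested n8n behaviour):

```python
import json

# Gemini embedContent endpoint for gemini-embedding-001.
EMBED_URL = ("https://generativelanguage.googleapis.com/v1beta/"
             "models/gemini-embedding-001:embedContent")

def build_embed_request(text: str, task_type: str = "RETRIEVAL_QUERY",
                        output_dimensionality: int = 3072) -> dict:
    """Build the JSON body for an embedContent call.

    outputDimensionality lets the caller match the size of the vectors
    already stored in Supabase (3072 here, or 768 for an older store).
    """
    return {
        "content": {"parts": [{"text": text}]},
        "taskType": task_type,
        "outputDimensionality": output_dimensionality,
    }

# In an n8n HTTP Request node you would POST this body to EMBED_URL with
# an `x-goog-api-key` header set to your API key, then read the vector
# from response["embedding"]["values"] and pass it to your Supabase query.
body = build_embed_request("find similar documents")
print(json.dumps(body))
```

Because both the insert and the search would then go through the same request body, the dimension mismatch disappears by construction.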
@Ollie_Harridge If you can switch away from Google, OpenAI’s text-embedding-3-small can output 768 dimensions, as long as you set the dimensions option in the model node.
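If it helps, the OpenAI embeddings API exposes this as a `dimensions` field on the request, so both inserts and queries can be pinned to 768. A hedged sketch of the request body (field names are from OpenAI’s embeddings API; the helper function itself is just illustrative):

```python
def build_openai_embed_request(text: str, dimensions: int = 768) -> dict:
    # POST this body to https://api.openai.com/v1/embeddings with an
    # "Authorization: Bearer <key>" header; the vector comes back in
    # response["data"][0]["embedding"].
    return {
        "model": "text-embedding-3-small",
        "input": text,
        "dimensions": dimensions,  # shrink from the model's default 1536
    }

print(build_openai_embed_request("hello")["dimensions"])
```

The n8n OpenAI embeddings node exposes the same option, which is what makes this model a drop-in fit for a 768-dimension Supabase table.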