Google deprecating text-embedding-004 but gemini-embedding-001 doesn't work

Describe the problem/error/question

Google has deprecated the text-embedding-004 model (see the Gemini deprecations page on Google AI for Developers), so I've had to switch to gemini-embedding-001. The number of dimensions has changed from 768 to 3072 with the new model. When I encode data to save into my Supabase database, the embedding uses 3072 dimensions, but when I query the database, n8n defaults to 768 dimensions, which produces an error. There isn't a way to configure the search embedding RETRIEVAL_QUERY option in n8n, so I am stuck: the save embedding size is different from the query embedding size. Because of this discrepancy, there doesn't seem to be any way to use Google's embedding model with the Supabase Vector Store node in n8n.

What is the error message (if any)?

When using the gemini-embedding-001 model for inserting into a 768-dimension vector store:

Error inserting: expected 768 dimensions, not 3072 400 Bad Request

When using the gemini-embedding-001 model for searching a 3072-dimension vector store:

Error searching for documents: 22000 different vector dimensions 768 and 3072 null

Information on your n8n setup

  • n8n version: 1.123.14
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud

@Ollie_Harridge Since the Supabase Vector Store node hardcodes the dimension check, bypass it for queries:

For Inserting (use Vector Store node as normal):

  • Keep using the Supabase Vector Store node with gemini-embedding-001

  • Ensure your Supabase table has 3072 dimensions

For Querying (use HTTP Request):

  1. Add an Embeddings Google Gemini node to generate your query embedding

    • Model: gemini-embedding-001

    • Input: your search query

  2. Add an HTTP Request node to query Supabase directly:

    • Method: POST

    • URL: https://[your-project].supabase.co/rest/v1/rpc/match_documents

    • Authentication: Generic Credential Type → Header Auth

      • Name: apikey

      • Value: Your Supabase anon key

    • Body:

    {
      "query_embedding": {{ $json.embedding }},
      "match_threshold": 0.7,
      "match_count": 10
    }
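If you want to sanity-check the body before sending it, the same JSON can be built in a Code node. A minimal sketch (the field names come from the `match_documents` function above; the three-value embedding is a stand-in for the real 3072-value vector from the Gemini node):

```javascript
// Build the JSON body for the Supabase match_documents RPC call.
// In the workflow, the real query_embedding comes from the
// Embeddings Google Gemini node and has 3072 values.
const queryEmbedding = [0.1, 0.2, 0.3]; // stand-in for the 3072-value vector

const body = {
  query_embedding: queryEmbedding,
  match_threshold: 0.7,
  match_count: 10,
};

// This string is what the HTTP Request node posts to /rest/v1/rpc/match_documents.
const payload = JSON.stringify(body);
console.log(payload);
```

PostgREST maps each top-level JSON key to the function parameter of the same name, which is why the body keys must match the `match_documents` signature exactly.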

You’ll need this Supabase function:

create or replace function match_documents (
  query_embedding vector(3072),
  match_threshold float,
  match_count int
)
returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language sql stable
as $$
  select
    id,
    content,
    metadata,
    1 - (embedding <=> query_embedding) as similarity
  from your_table_name
  where 1 - (embedding <=> query_embedding) > match_threshold
  order by embedding <=> query_embedding
  limit match_count;
$$; 
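For reference, pgvector's `<=>` operator is cosine distance, so `1 - (embedding <=> query_embedding)` in the function above is cosine similarity. A minimal JavaScript sketch of the same computation, which also reproduces the dimension-mismatch failure from the question when the vectors differ in length:

```javascript
// Cosine similarity, equivalent to 1 - (a <=> b) in pgvector.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) {
    // Mirrors the "different vector dimensions" error from Postgres.
    throw new Error(`different vector dimensions ${a.length} and ${b.length}`);
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors are maximally similar.
console.log(cosineSimilarity([3, 4], [3, 4])); // → 1
```

This is why a 768-dimension query vector can never be compared against a 3072-dimension stored vector: the operator has no defined result for mismatched lengths, so Postgres aborts the query instead.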

Hope this helps!


Hi @Ollie_Harridge, welcome back to the n8n community! This happens because the n8n Vector Store search currently generates query embeddings with a fixed size of 768 dimensions, while the gemini-embedding-001 model produces 3072-dimension vectors, causing a mismatch between insert and search. The only workaround is to use a 768-dimension model or a custom flow via an HTTP Request node outside the Vector Store node.


@Ollie_Harridge If you can switch away from Google, OpenAI’s text-embedding-3-small can produce 768-dimension embeddings, as long as you configure the dimensions in the model node.
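For context, text-embedding-3-small supports a `dimensions` parameter in the OpenAI embeddings API, which is what the n8n model node sets under the hood. A sketch of the request body (the input string is a placeholder):

```javascript
// Request body for OpenAI's /v1/embeddings endpoint.
// The "dimensions" field shrinks the output vector; text-embedding-3-small
// supports this, so 768 matches the existing Supabase column.
const embeddingRequest = {
  model: 'text-embedding-3-small',
  input: 'your search query', // placeholder
  dimensions: 768,
};
console.log(JSON.stringify(embeddingRequest));
```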
