I tried to create a local model that extracts the contents of a .PDF file so the model can answer questions about it. The HuggingFace Embeddings node seemed to work; the error stems from the Supabase Vector Store node.
Error Message:

```
Error searching for documents: PGRST202 Could not find the function public.match_documents(filter, match_count, query_embedding) in the schema cache. Searched for the function public.match_documents with parameters filter, match_count, query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.
```
Hi @LeRiVal, perhaps this response is not too late.

**This error occurs because the required function `match_documents` is not properly set up in Supabase. Follow these steps to resolve it:**
**Ensure the Supabase table and function are correctly set up.**
According to the n8n documentation, the table and function must be properly configured. Refer to the official Supabase LangChain guide and run the following SQL script in your Supabase SQL editor:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text,            -- corresponds to Document.pageContent
  metadata jsonb,          -- corresponds to Document.metadata
  embedding vector(1536)   -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```
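If the table and function already exist but you still get PGRST202, PostgREST's schema cache may simply be stale. A quick sanity check (the all-ones placeholder embedding below is illustrative only; substitute a real embedding from your model) would be to reload the cache and call the function directly:

```sql
-- Ask PostgREST (Supabase's API layer) to reload its schema cache
-- so it can see the newly created function
notify pgrst, 'reload schema';

-- Call match_documents directly with a placeholder 1536-dim embedding
-- (an all-ones vector avoids the undefined cosine distance of a zero vector)
select id, similarity
from match_documents(
  array_fill(1::real, array[1536])::vector(1536),  -- placeholder query embedding
  5,                                               -- match_count
  '{}'::jsonb                                      -- no metadata filter
);
```

If this query runs in the SQL editor but n8n still reports PGRST202, the problem is the cache or the connection settings, not the function itself.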
The embedding dimension (`vector(1536)`) must match the model you are using: 1536 is the size of OpenAI's `text-embedding-ada-002` and `text-embedding-3-small`. If you are using a different embedding model (e.g. a HuggingFace sentence-transformers model or a Cohere model), check its output vector size and update the SQL accordingly.
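For example, many HuggingFace sentence-transformers models such as `all-MiniLM-L6-v2` produce 384-dimensional embeddings. A sketch of the required change, assuming the `documents` table and `match_documents` function from the script above:

```sql
-- Switch the column to 384 dimensions (only safe on an empty table;
-- existing 1536-dim rows cannot be converted)
alter table documents
  alter column embedding type vector(384);

-- Drop the old function and recreate it with vector(384)
-- in place of vector(1536) in its parameter list
drop function if exists match_documents(vector, int, jsonb);
```

After recreating `match_documents` with the new dimension, re-insert your documents so the stored embeddings match the new column size.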