I think you need to share your whole workflow to get help with this. Did you get any useful info when you copied and pasted the error into ChatGPT?
Not sure if you did, but either way you have to add the match_documents function to your database. You can run the code below in the Supabase SQL Editor (in the left-hand menu).
Make sure you already have a table named documents and that it has the correct vector size. If you don’t have it, or the vector size is incorrect, drop the table and run this:
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
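If you want to sanity-check the table before wiring up the rest of your app, you can insert a throwaway row with a zero vector. This is just a sketch for testing; it assumes pgvector's cast from an integer array to vector, and your real embeddings will come from your embedding model, not array_fill:

-- Insert a dummy row with a 1536-dim zero vector (test data only)
insert into documents (content, metadata, embedding)
values ('test row', '{}', array_fill(0, array[1536])::vector);

-- Confirm it landed
select id, content, metadata from documents;

-- Clean up when you're done
delete from documents where content = 'test row';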
Then, to create the match_documents function, run this:
-- Create a function to search for documents
create function match_documents (
query_embedding vector(1536),
match_count int default null,
filter jsonb DEFAULT '{}'
) returns table (
id bigint,
content text,
metadata jsonb,
similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding
limit match_count;
end;
$$;
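Once the function exists, you can smoke-test it straight from the SQL Editor. This sketch assumes your documents table already has at least one embedded row; it simply reuses a stored embedding as the query vector:

-- Reuse an existing embedding as the query to verify the function runs
select id, content, similarity
from match_documents(
  (select embedding from documents limit 1),  -- stand-in query vector
  5,      -- return at most 5 matches
  '{}'    -- empty filter matches all metadata
);

If that returns rows with the closest match first (similarity near 1), the function is wired up correctly and the error is probably elsewhere in your workflow.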