Supabase Vector Store not sending data into the embeddings table

Describe the problem/error/question

My Supabase vector store seems to be set up properly; I used the SQL snippet from the docs to generate the tables.

The issue is that my automation finishes processing, but no data is sent to the vector store.

I know the database is working because my Postgres node inserted data into the secondary table, but nothing arrives from the vector store node.

(Note that the Loop Over Items node is not connected yet; I'm sending data directly from the trigger, since it's just one item for testing purposes.)
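For anyone debugging the same symptom: a quick way to confirm whether the vector store node ever wrote anything is to query the table directly in the Supabase SQL editor. This is a sketch assuming the default `documents` table name from the LangChain template; adjust if yours differs:

```sql
-- Count rows written by the vector store node
select count(*) from documents;

-- Inspect the most recent inserts, if any
select id, left(content, 60) as preview, metadata->>'file_title' as title
from documents
order by id desc
limit 5;
```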

What is the error message (if any)?

No message

Please share your workflow

Share the output returned by the last node

[
  {
    "metadata": {
      "source": "blob",
      "blobType": "text/plain",
      "loc": {
        "lines": {
          "from": 1,
          "to": 17
        }
      },
      "file_id": "1AZA-avMrZmnSDeaVPbabLgCTBIPwNIMvH8Jy1vsP3yk",
      "file_title": "My Skool Story"
    },
    "pageContent": "My Skool Story\r\nMy Accomplishments on Skool:\r\n1. I’ve scaled my free Skool community past 2,000 members (with zero ads)\r\n2. 2. I’ve scaled my paid offer on Skool from $0 to $30k/mo profit in under 60 days\r\n3. I’ve won the Skool Games in Sept. 2024 and met the founders of Skool.com\r\n4. I’ve made over $100,000 profit selling offers through my Skool community\r\n5. I’ve helped generate tens of thousands of members & hundreds of thousands of dollars in profit for my client’s and their own Skool communities.\r\nMy Skool Timeline:\r\nMay 1st 2024 —> Started my first FREE community\r\nMay 10th 2024 —> Got my first 140 FREE members\r\nMay 21st 2024 —> Got my first PAID client on Skool ($1,000 profit)\r\nMay 22nd 2024 —> Got my first member testimonial/case study\r\nMay 30th 2024 —> Got my first 288 FREE members2\r\nJune 5th 2024 —> Working with 3+ PAID clients ($0-$1k)\r\nJune 28th 2024 —> Got my first 500 FREE members\r\nJuly 15th 2024 —> Decided to GO FOR THE SKOOL GAMES!\r\nJuly 30th —> Got my first 800 FREE me"
  },
  {
    "metadata": {
      "source": "blob",
      "blobType": "text/plain",
      "loc": {
        "lines": {
          "from": 17,
          "to": 29
        }
      },
      "file_id": "1AZA-avMrZmnSDeaVPbabLgCTBIPwNIMvH8Jy1vsP3yk",
      "file_title": "My Skool Story"
    },
    "pageContent": "mbers\r\nAugust 1st 2024 —> Launched my first PAID offer on Skool\r\nAugust 22nd 2024 —> Scaled to $10,000/month in 22 days\r\nAugust 30th 2024 —> Scaled to $16,000/mo profit\r\nSeptember 14th 2024 → Scaled to $20,000/month\r\nSeptember 27th 2024 → Closed $6,000/mo profit in 24 hours\r\nSeptember 28th 2024 —> Scaled to $28,000/mo profit\r\nSeptember 28th 2024 —> Won Skool Games & ranked Top 1% on Skool\r\nSeptember 29th 2024 —> Scaled to $31,500/mo profit on Skool with ZERO ads\r\nNovember to December 2024 —> Focused on product & client results\r\nJanuary 1st 2025 —> Went ALL IN on helping people scale their Skool communities\r\nFebruary 28th 2025 —> Closed $40,500 USD with brand new offer on Skool\r\nMarch to Dec 2025 —> Helping people scale profitable communities on Skool"
  }
]

Information on your n8n setup

  • n8n version: 1.85.4
  • Database: Supabase
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I am not sure where to get that info.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
  • Operating system: Windows 11

Hi @Luar_AS,

That’s strange. How did you create the documents table on your Supabase instance? Did you use the LangChain template recommended by Supabase?
I just tried creating a new documents table with this query:

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
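As a side note on the function above: pgvector's `<=>` operator returns cosine distance, so `1 - (documents.embedding <=> query_embedding)` is plain cosine similarity. Once rows exist, `match_documents` can also be called straight from the SQL editor as a sanity check. A sketch, reusing a stored embedding as the query vector so the 1536-dimension requirement is satisfied without pasting a real embedding:

```sql
-- Demo call: in a real search, pass a 1536-dimensional embedding
-- produced by the same model used at insert time.
select id, content, similarity
from match_documents(
  (select embedding from documents limit 1),  -- reuse a stored embedding as the query
  3,            -- match_count
  '{}'::jsonb   -- metadata filter (empty = match everything)
);
```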

And this workflow

And data got populated successfully. Could you try this simple workflow to see if that works?

Oleg


Yeah, I used this one from the docs:

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;

But nothing is changing in the database itself; it’s weird.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.