When I try to query the AI agent about the content of my vector database, the following error pops up: “Error searching for documents: 42702 column reference “id” is ambiguous. It could refer to either a PL/pgSQL variable or a table column.”
I still can’t shake the feeling that the issue is with my Supabase setup, because I used this project for a few tests and the tables have been repurposed a few times. So, while my checks above are OK, there are a few things that seem odd:
when I ask the Supabase AI assistant to confirm the table’s dimensions are 1536, it says it doesn’t find the table.
also, of the three tables that I have in Supabase for the project, only one shows up in the schema visualizer. I feel like there should be three separate schemas, one per table?
Maybe I should set up a new project and re-do everything - what do you think? If the recommendation is to restart, does someone maybe have a good tutorial on YouTube or other learning material on best practices for Supabase setups, so that I can avoid such a mess if I have to re-do everything? (YouTube mainly has a lot of 20-minute intros, which I have already worked through.)
Error 42702 (column reference is ambiguous) is usually resolved by renaming columns or parameters in your PL/pgSQL function to avoid name conflicts. Note that the Supabase Vector Store node ignores the “Table Name” field and always calls a function that references the documents table, so you must either align with that name or adapt the function manually. Whether you use documents or a custom name, your PL/pgSQL function must be consistent and free of variables that cause name ambiguity.
You should review and correct the match_* function in Supabase.
Make sure your functions, such as match_hkex_ipo_listings or match_documents, do not contain ambiguous references. That means prefixing parameter names, qualifying column names to avoid collisions, and using explicit table aliases, for example:
SELECT c.id,
...
FROM hkex_ipo_listings c
WHERE c.id = match_hkex_ipo_listings.id_param;
Rename as many variables as possible to avoid duplicating common column names like id, content, etc.
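For reference, here is what a complete, de-ambiguated match function could look like for this table. This is only a sketch: it assumes a 1536-dimension pgvector column named embedding plus content and metadata columns, and the parameter names (query_embedding, match_count, filter) that n8n’s Supabase Vector Store node passes. Adjust to your actual schema:

```sql
-- Sketch: every column is qualified with the table alias "t", and the
-- parameter names (query_embedding, match_count, filter) cannot collide
-- with column names, so error 42702 cannot occur.
create or replace function match_hkex_ipo_listings (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    t.id,
    t.content,
    t.metadata,
    1 - (t.embedding <=> query_embedding) as similarity
  from hkex_ipo_listings t
  where t.metadata @> filter
  order by t.embedding <=> query_embedding
  limit match_count;
end;
$$;
```

This follows the standard Supabase match_documents pattern; the cosine distance operator <=> requires the pgvector extension to be enabled.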
Thanks Erick! This fixed that particular error, but now another one popped up - I will mark your reply as ‘solution’, but allow me to share that new (but related) error here with you (I know technically I should start a new post, but I have been going in circles on this for days now).
NEW ERROR: “Error searching for documents: PGRST202 Could not find the function public.match_hkex_ipo_listings(filter, match_count, query_embedding)”
(There are other posts about this error in the forum, e.g. “Error with Supabase Vector Stores”, but I think my issue relates more to a fundamental setup problem with my Supabase table than to a technical issue.)
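As I understand it, PGRST202 means PostgREST could not find a function whose name and named parameters match the call. One way to check which signatures actually exist in the database would be something like this (a sketch using the standard Postgres catalogs; the match_% pattern is my assumption, since all my functions start with match_):

```sql
-- List public match_* functions with their argument names and types,
-- to compare against the signature in the PGRST202 error message.
select
  p.proname as function_name,
  pg_get_function_arguments(p.oid) as arguments
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where n.nspname = 'public'
  and p.proname like 'match_%';
```

If the argument names or types listed here differ from the ones in the error (filter, match_count, query_embedding), that mismatch alone triggers PGRST202.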
NARROWING DOWN THE ERROR: I believe the error is caused by the embeddings column in my Supabase table being empty. In Supabase I did the following steps:
I created the table ‘hkex_ipo_listings’ and populated it manually via a CSV upload
then I made the table vector-enabled using the SQL editor
added the match function for that table with the SQL editor
because I didn’t upload the table content via n8n with an embedding model (but manually uploaded a CSV), I do have an embedding column in my table, BUT this embedding column is empty → do you think this is the issue: n8n’s Supabase vector store has no vector embeddings to reference? (In my previous workflows, where I uploaded a file via n8n, the chunking and embedding happened ‘automatically’, so I was under the impression a Supabase table would have that as basic functionality.)
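To check this, I think a query along these lines would confirm whether the column was ever populated (a sketch; I’m assuming the column is literally named embedding):

```sql
-- Count rows where the embedding column was never filled in.
-- A CSV upload alone cannot create embeddings; they have to be
-- generated by an embedding model and written into this column.
select
  count(*) filter (where embedding is null) as missing_embeddings,
  count(*) as total_rows
from hkex_ipo_listings;
```

If missing_embeddings equals total_rows, the vector store has nothing to search against, even when the match function itself is correct.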
Sorry again about sneaking in this ‘follow-up question’, Erick, and apologies if the question isn’t the smartest, but I would really appreciate your guidance on this! I am just trying to link the n8n vector node up to a Supabase table (not a vectorised document, just a simple table like in Excel: some numerical inputs, some text) and I feel like it should be so easy.