Supabase Vector Store node returns empty output (no error) while SQL query returns matches

Describe the problem/error/question

:puzzle_piece: Context
I built an AI Agent in n8n that answers user questions using RAG (Retrieval Augmented Generation).
The goal is to ensure the agent replies strictly based on a Google Sheet that contains predefined Questions & Answers.
Architecture
Google Sheet → contains Q&A pairs (source of truth)
Data is embedded and stored in Supabase (pgvector).
n8n workflow:
User question
Embedding generation
Supabase Vector Store node → retrieve similar Q&A
AI Agent responds based only on retrieved results
:bullseye: Expected Behavior
When a user asks a question similar to one in the sheet:
Supabase Vector Store node should return matching rows
Agent should respond using retrieved answer
:red_exclamation_mark: Actual Problem
The Supabase Vector Store node returns no output, and no error is thrown.
However:
Running the same query directly in Supabase SQL returns 4 matching results.
This suggests the data exists and similarity search works.
:magnifying_glass_tilted_left: What I Verified
:white_check_mark: Data exists
Running SQL:
select * from documents
order by embedding <-> query_embedding
limit 4;
returns expected rows.
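Note that pgvector's distance operators differ: `<->` is Euclidean distance, `<=>` is cosine distance, and `<#>` is negative inner product. If the search function behind the n8n node uses cosine distance (as the standard guide's match_documents function does), a closer manual check would be the following sketch, where query_embedding stands in for an actual query vector:

```sql
-- Cosine-distance ordering, mirroring the guide's match_documents function.
-- query_embedding is a placeholder: substitute a vector literal or subquery.
select id, content,
       1 - (embedding <=> query_embedding) as similarity
from documents
order by embedding <=> query_embedding
limit 4;
```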
:white_check_mark: Embeddings exist
Embedding column is populated.
Dimensions match the model used.
:white_check_mark: Query runs successfully in Supabase
Manual query returns results.
:red_exclamation_mark: But in n8n:
Node executes successfully
Output is empty (no items returned)
No error message
:red_question_mark: Questions
What could cause the Supabase Vector Store node to return empty results while SQL returns matches?
How can I debug what query n8n actually sends to Supabase?
:test_tube: Additional Details
Using Supabase pgvector
Using OpenAI embeddings
Table structure: id, content, embedding, metadata
No filters applied in the node
Node executes without errors
:folded_hands: Any guidance is appreciated!

Information on your n8n setup

  • n8n version: 2.3.2
  • Database (default: SQLite): Supabase
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: mac

Hi @Hanan_Shihady, welcome :slightly_smiling_face:

How did you initialize your database? I suspect the search function might be missing.

Did you initialize it using this guide?

This is mentioned in the n8n docs as well:

On my side, I initialized it using this approach, and it’s working as expected.
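For reference, the setup SQL in that guide is roughly the following (a sketch based on the standard LangChain/Supabase template; check the guide for the exact version):

```sql
-- Enable the pgvector extension
create extension if not exists vector;

-- Table the n8n Supabase Vector Store node expects by default
create table documents (
  id bigserial primary key,
  content text,           -- document text
  metadata jsonb,         -- arbitrary metadata
  embedding vector(1536)  -- 1536 dims for text-embedding-3-small / ada-002
);

-- Similarity-search function the node calls via RPC
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where documents.metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```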


Hey, this is almost always a mismatch between the embedding model you’re using in n8n and the one used to store the vectors. The Supabase Vector Store node uses whatever embedding model you’ve connected to it, so if you embedded your docs with text-embedding-ada-002 but the node is using text-embedding-3-small (or anything different), the dimensions might match but the vectors themselves won’t align and you get zero results. Double-check the exact model on both sides.


Hey, the most common reason for this is a mismatch between the embedding model used when inserting the data and the one the Supabase Vector Store node uses to embed the query. Even if the dimensions match, different models produce incompatible vectors, so cosine similarity returns nothing useful. Can you confirm you’re using the exact same OpenAI embedding model (e.g. text-embedding-ada-002 or text-embedding-3-small) in both the insert workflow and the retrieval node?


Hey achamm, thank you for your response.
Yes, I confirmed that I used text-embedding-3-small both for inserting the data and in the Supabase Vector Store node.

Hi Mohamed,
Yes this is the guide I used.
I will write a new comment explaining the debugging I did. Please share your thoughts on the changes I made.
Thanks for your time and effort :slight_smile:

Hi @Hanan_Shihady

Great. Assuming you copy-pasted the SQL from the guide exactly as written:

That SQL creates a table named documents.

However, your attached workflow shows a table named breastfeeding_kb. This mismatch is likely why you are getting empty output: the search function is probably still querying the documents table, not your new one.

I recommend starting fresh and keeping the default names. I tested this exact setup on my end and it works without issues.
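If you would rather keep the breastfeeding_kb table than start fresh, one option is to create a matching search function for that table and point the node's Query Name option at it. This is a hypothetical variant of the guide's function (the function name here is made up for illustration):

```sql
-- Hypothetical search function querying breastfeeding_kb instead of documents
create or replace function match_breastfeeding_kb (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language sql stable
as $$
  select id, content, metadata,
         1 - (embedding <=> query_embedding) as similarity
  from breastfeeding_kb
  where metadata @> filter
  order by embedding <=> query_embedding
  limit match_count;
$$;
```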

Supabase Vector Store returned no results — debugging journey & findings

I ran into an issue where the Supabase Vector Store node returned empty results in n8n, even though my embeddings and data were correctly stored.

:magnifying_glass_tilted_right: Debug step (Option A — direct RPC test)

I created an HTTP Request node to call the Supabase RPC (match_documents) directly.

Result:

  • RPC returned matching rows :white_check_mark:

  • Embeddings valid (vector(1536)) :white_check_mark:

  • Supabase access & permissions OK :white_check_mark:

:backhand_index_pointing_right: This proved the problem was not Supabase, embeddings, or RPC.


:white_check_mark: Fix: wrong operation mode

The node was set to Get Many, which does not perform similarity search.

Changing the mode to:
:backhand_index_pointing_right: Retrieve Documents (As Tool for AI Agent)
enabled vector search.

:warning: New issue discovered: AI Agent rewrites tool input

The core issue is that the $json.input value generated by the AI Agent does not preserve the meaning of the user’s original question.
As a result, the tool receives a different query than the one the user actually asked.

Example

  • User question:
    “My baby feeds for a short time and then unlatches. Is this normal?”

  • Tool received ($json.input):
    “Tips to prevent nipple cracks during breastfeeding.”

Because the rewritten query has a different meaning, the Supabase vector search returns no results, even though relevant content exists for the original question.

:bullseye: What I need

I’m looking for a deterministic way to ensure the AI Agent generates precise and faithful tool input, ideally by:

  • Passing the user’s original question unchanged, or

  • Preserving the original meaning when generating the tool input.

Has anyone found a reliable method to control or constrain tool input generation in the AI Agent?

Great debugging journey! The issue you’re now facing (AI Agent rewriting the query before passing it to the vector store tool) is a known LLM behavior, and there are a few reliable ways to control it:

1. Instruct the Agent in the System Prompt (simplest fix)

Add explicit instruction in the AI Agent’s System Message:

When using the knowledge_base tool, ALWAYS pass the user's original question exactly as they wrote it. Do NOT rephrase, translate, or summarize the query before passing it to the tool.

This works in most cases because LLMs follow system-level constraints well for tool calls.

2. Use a “Query Extractor” node before the Agent

Instead of letting the Agent decide what to pass as tool input, pre-compute the embedding query outside the Agent:

  • Extract/clean the user question in a Code node
  • Pass it directly to the Supabase Vector Store node (in Retrieve Documents mode, not as a tool)
  • Then pass the retrieved context + original question to a regular Chat/LLM node

This removes the Agent’s ability to rewrite the query entirely — the retrieval becomes deterministic.

3. Use a sub-workflow as the tool

Set up a sub-workflow that accepts $json.input and immediately uses $json.input verbatim for the vector search (no Agent reformulation possible at that stage). This is the most robust approach for production RAG.

4. Add a “query” field description in the Tool node

In the Supabase Vector Store tool settings, the Tool Description field influences how the Agent calls it. Be very specific:

Use this tool to find answers. Input must be the user's exact original question, word for word.

I’d start with option 1 (system prompt constraint) — it’s the quickest to test. If the LLM still paraphrases, go with option 2 (pre-retrieval before the Agent). Hope this helps resolve your new issue!