PGVector node does not recall details from uploaded docs

I was able to create a Workflow that uses a local Docker container for Postgres, and combined this with the PGVector node in n8n to write the ingested document’s vectors and metadata to a database.
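For anyone comparing notes, the query side of a vector store like PGVector is essentially a cosine-similarity search over the stored embeddings. Here is a toy Python sketch with made-up three-dimensional "embeddings" (a real embeddings model produces hundreds of dimensions) purely to illustrate the ranking step:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": (text chunk, fake embedding) pairs.
store = [
    ("john - ferrari lego technics, iPad mini", [0.9, 0.1, 0.0]),
    ("13/12/24 - John nativity",                [0.7, 0.2, 0.1]),
    ("jane - bean bag chair, penguin teddy",    [0.1, 0.9, 0.0]),
]

# In the real workflow this vector would come from the embeddings model.
query_embedding = [0.9, 0.1, 0.05]

ranked = sorted(
    store,
    key=lambda row: cosine_similarity(query_embedding, row[1]),
    reverse=True,
)
print(ranked[0][0])  # → john - ferrari lego technics, iPad mini
```

The point being: the database side only returns the most similar chunks; it is then up to the chain to put those chunks in front of the LLM.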

I have a second Workflow, using the Chat trigger within the workflow. Running a repeated chat query similar to the one from Cole’s use case, my example quite obviously is not interpreting or incorporating my query.

The TXT file is:

christmas plans for 2024.

list of kids presents:
jane - bean bag chair, penguin teddy
john - ferrari lego technics, iPad mini
terry - club scarf, money

events:
13/12/24 - John nativity
14/12/24 - Tennis club social
27/12/24 - Panto at Palladium
28/12/24-4/1/25 - New Year Getaway

And I put in these chat queries:
“What presents does John want?”
“What events do we have over Christmas?”

The response obviously is not including the uploaded document.

Information on your n8n setup

  • **n8n version:** 1.67.1
  • **Database (default: SQLite):** Postgres
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** Design provided
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
  • **Operating system:** Docker


Hey @roastbullish , instead of AI Agent I would try the Question and Answer Chain if you need the answers from the vector store only. AI Agent is a much more versatile node, and you need to give it more specific instructions about what you expect of it.

Thanks @ihortom for the advice. I have been mimicking Cole Medin’s Workflows on YouTube.

I replaced the AI Agent node with a Question and Answer Chain node, and reran the query. I also supplied some extra “context” for the Embeddings AI Model, again replicating Cole’s approach, with:

You are a personal assistant who helps answer questions from a corpus of documents when you don’t know the answer yourself.

With my Christmas Plans TXT file uploaded to the PGVector DB, this was the chat experience:

So I can see the PGVector node returning its text in this chain. But the final response to me includes none of this.
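From what I understand, the Q&A chain is essentially doing “stuffing”: the retrieved chunks have to end up inside the prompt the chat model actually sees, or the model answers from its own knowledge. A minimal sketch of that step (the prompt wording here is my own assumption, not n8n’s internal template):

```python
def build_stuffed_prompt(question, retrieved_chunks):
    """Assemble a RAG-style prompt by stuffing retrieved text into the context."""
    context = "\n".join(retrieved_chunks)
    return (
        "Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Chunk as returned by the vector store lookup in my workflow.
chunks = ["john - ferrari lego technics, iPad mini"]
prompt = build_stuffed_prompt("What presents does John want?", chunks)
print(prompt)
```

If the vector store output is visible in the execution log but the final answer ignores it, either the chunks are not making it into this prompt, or the model is disregarding the context it was given.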

I have tried replacing PGVector with Supabase (as per the YouTube videos / follow-alongs I am watching). However, I run into the same problem.

I installed the vector extension, and a new match_documents function, as per the QuickStart Guide linked from the n8n docs.

I can see the new row in the Supabase DB as I upload my same xmas plans.txt document.

When I query this with a Q&A Agent, the agent simply does not use this content, even though I can see it in the chained events of the chat window.

You still need to instruct the LLM to use your custom data. Be more specific about how it is to process your question. For example,

Thanks @ihortom - I have tried numerous replacements and deviations to the System Prompt text, with no change to the outcome.

I messaged Cole Medin on his channel, and 1 of his responses was:

Which model are you using? I’ve had this happen with smaller models where they just seem to ignore the output from the RAG nodes.

Which seems to be exactly my issue. I am awaiting further responses, as he may have some further insights. Have other n8n’ers experienced this? When responding to this thread, have they tried this using the Ollama.app on an Apple Silicon MacBook? I happen to have an M1 Air.

Alternatively, is there a way to run a different local Ollama model for the answering step? I had chosen the embeddings-specific models only for embedding the documents.
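In case it helps clarify the question: embeddings-only models (e.g. nomic-embed-text) can only turn text into vectors; a separate chat model has to generate the final answer. A sketch of how the two roles split when talking to a local Ollama server over its REST API (the payloads are only built here, not sent; the model names are assumptions, not recommendations):

```python
import json

# Two distinct roles: one model embeds, another generates.
EMBED_MODEL = "nomic-embed-text"  # assumption: an embeddings-only model
CHAT_MODEL = "llama3.1"           # assumption: a general chat model

def embed_request(text):
    """JSON payload for Ollama's /api/embeddings endpoint (not sent here)."""
    return json.dumps({"model": EMBED_MODEL, "prompt": text})

def chat_request(prompt):
    """JSON payload for Ollama's /api/generate endpoint (not sent here)."""
    return json.dumps({"model": CHAT_MODEL, "prompt": prompt, "stream": False})
```

So in n8n terms, the embeddings model would be attached to the vector store node, while a larger chat model would be attached to the Q&A chain or agent; a small chat model may be the one ignoring the retrieved context, as Cole suggested.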