A local RAG workflow not giving me the desired results

Hi everyone!

I’m quite new to the n8n platform and to this community, so please bear with me if this is a beginner’s mistake — still getting familiar with the flow logic and node interactions.

I’ve set up an n8n environment running locally on an Ubuntu 22.04 LTS machine, using Docker & Portainer. The machine has an Nvidia GPU, and I’ve successfully installed the main tools involved: Ollama, Qdrant, and n8n.

What I’m trying to build:

The flow is designed to process a transcript (usually an interview with some patient information), and I want to query this transcript via a set of predefined questions — each with its own custom prompt.

I’ve used the following structure in n8n:

  • A Chat Trigger to handle incoming questions.
  • An If node to check whether a file was uploaded or it’s a new question.
  • If a new file is detected, it gets loaded and split.
  • Then Ollama creates the embeddings, which are stored in Qdrant.
  • Finally, the AI Agent is supposed to search the vector store and generate an answer using only the stored embeddings.

The problem:

Although the flow seems to work technically (no errors, the flow runs end-to-end), I’m not getting the expected retrieval results. It looks like the vector store is being queried, but it doesn’t return any useful matches or content.

I also wonder: how would you go about iterating over a list of predefined questions, each with its own prompt (instead of manual input)?
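One common pattern for this is a Code node that fans a predefined list out into one n8n item per question, so every downstream node runs once per question. A minimal sketch (the question texts, prompts, and field names here are made up for illustration):

```javascript
// Hypothetical predefined question list; each entry carries its own prompt.
const questions = [
  {
    question: "What is the patient's main complaint?",
    prompt: "Answer strictly from the transcript. If the answer is not present, say so.",
  },
  {
    question: "Which medications are mentioned?",
    prompt: "List only medications explicitly named in the transcript.",
  },
];

// n8n items are objects with a `json` property; downstream nodes
// (e.g. the AI Agent) can then reference {{ $json.question }} and {{ $json.prompt }}.
const items = questions.map((q) => ({ json: q }));
// In the n8n Code node ("Run Once for All Items"), end with: return items;
```

With this in place, the AI Agent node can take its user message from `{{ $json.question }}` and its system message from `{{ $json.prompt }}`, and n8n handles the iteration for you.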

My flow:

I’ve based my setup on the following (somewhat older) YouTube example: https://www.youtube.com/watch?v=XQ7wNqbB1x8

And here’s the JSON for my current flow:

I’m using this file:

But with this file I cannot get any answer out of it. I tested with other PDF files, and they gave me results.

Information on your n8n setup

  • **n8n version:** 1.83.2
  • **Database (default: SQLite):** SQLite
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** default
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker on Ubuntu 22.04 LTS

Any ideas?


To do some more testing, I created a new part in the flow that does give me feedback:

This gives me the proper feedback.
This flow with the AI Agent does not give me the feedback I need:

I think I might be configuring the AI Agent’s Qdrant retriever wrongly.
Any suggestions?

In the end I found the solution: apparently the node has a flaw with Ollama, so I changed the chat model node on the RAG AI Agent to the OpenAI one, then set the API key to anything and the Base URL to the local Ollama instance.
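For anyone trying the same workaround: it works because Ollama exposes an OpenAI-compatible API under `/v1`, so the OpenAI node only needs its Base URL pointed there and any non-empty API key. A rough sketch of the equivalent settings and request (the host/port assume a default local install, and the model name is a placeholder):

```javascript
// Settings used in the OpenAI chat model node (assumed defaults):
const baseURL = "http://localhost:11434/v1"; // Ollama's OpenAI-compatible endpoint
const apiKey = "anything"; // Ollama ignores the key, but the node requires one

// The node effectively sends a standard chat-completions payload:
const body = {
  model: "llama3", // placeholder; use whichever model you pulled with `ollama pull`
  messages: [{ role: "user", content: "Which medications are mentioned?" }],
};

// Equivalent manual call (commented out; needs a running Ollama instance):
// fetch(`${baseURL}/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
//   body: JSON.stringify(body),
// });
```

Running a request like this with `curl` first is a quick way to confirm the endpoint works before wiring it into the node.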


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.