RAG chatbot results are not relevant

Hi Everyone,

For the past few days I have been trying to use n8n to create an internal chatbot. However, despite following all the recommendations, the results are very poor…

For information, I have only one document with a small amount of information in French, and I am using the Llama3.2 model.

I integrated the document into a Qdrant collection.

I use the Chat with a Vector Store tool with the nomic-embed model, but it cannot retrieve the information.
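Under the hood, a vector-store lookup like this just ranks stored chunks by cosine similarity between the query embedding and each chunk embedding. A minimal, self-contained sketch (the 3-d "embeddings" and chunk names below are toy stand-ins, not real model output) shows the mechanism, and why indexing and querying must use the *same* embedding model: vectors from different models live in incompatible spaces, so the scores become meaningless.

```python
# Toy illustration of vector-store retrieval: rank chunks by cosine
# similarity to the query embedding. Real embeddings are hundreds of
# dimensions; these 3-d vectors are assumptions for demonstration only.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored chunks (name -> embedding vector).
chunks = {
    "conges": [0.9, 0.1, 0.0],    # chunk about vacation policy
    "securite": [0.1, 0.8, 0.2],  # chunk about security rules
}

# A query whose embedding sits close to the "conges" chunk.
query = [0.85, 0.15, 0.05]

best = max(chunks, key=lambda name: cosine(query, chunks[name]))
print(best)  # -> conges
```

If retrieval returns nothing relevant even on a toy question, it is worth checking in n8n that the embedding node used at query time is configured identically to the one used during ingestion.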

Here is the system message:


You are an intelligent assistant specialized in answering user questions using the provided context.

Your primary goal is to provide precise, contextually relevant, and concise answers based on the provided context.

Here is the workflow:

Did I do something wrong? Should I use another model? Why can it not retrieve basic information?

I would really appreciate some help, please.

The best RAG solution I've seen is from Cole Medin on YouTube:

Your question will be answered by analyzing the Agent prompt and the logic behind the database when storing the documents.

He's even storing metadata, and in the third version of the workflow he also stores the URL of the file.

Additionally, he has implemented many improvements over normal RAG functionality that make this RAG solution way above average.

His workflows are available on his GitHub.

In every video he links everything in the description.

I hope this helps! It's a deep dive into the RAG topic, but I assure you it's worth it!

:point_right: If my reply answers your question, please remember to mark it as a solution.


Thank you very much Solomon.

These videos are amazing; maybe I should get a better understanding of Cole's agentic approach.
Actually, I changed my AI model, and it works very well with the OpenAI model.
Llama and the other models available in Ollama work very badly at retrieving the information.
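One silent failure mode when swapping between local and hosted models is a vector-dimension mismatch: a Qdrant collection is created with a fixed vector size, and embeddings of a different size will not match it. The dimensions in the table below are typical published values I'm assuming for illustration (nomic-embed-text emits 768-d vectors; OpenAI's text-embedding-3-small emits 1536-d), not values read from your instance; a quick compatibility check might look like this:

```python
# Hedged sketch: verify that an embedding model's output dimension
# matches the vector size a Qdrant collection was created with.
# The dictionary values are assumed typical dimensions, for illustration.
EXPECTED_DIMS = {
    "nomic-embed-text": 768,
    "text-embedding-3-small": 1536,
}

def compatible(collection_dim: int, model: str) -> bool:
    """True if the model's vectors fit the collection's vector size."""
    return EXPECTED_DIMS.get(model) == collection_dim

print(compatible(768, "nomic-embed-text"))        # -> True
print(compatible(768, "text-embedding-3-small"))  # -> False
```

If the dimensions differ, the collection has to be re-created (and the document re-ingested) with the new model before querying will work.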

If you have any clues about open-source models, I would be very pleased to hear them :slight_smile: