What do you use for a RAG-based knowledge base in your n8n AI LLM workflows?

What is your preferred way to provide a knowledge base to your AI LLM workflows in n8n?

For instance, suppose you want to give the model some extra context. Normally you’d connect a knowledge base to it, so that RAG / GraphRAG retrieves relevant text chunks for you, which you then integrate into the model’s prompt.
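Concretely, the pattern I mean looks something like this sketch. The embeddings here are fake (just bag-of-words counts) purely for illustration, and `knowledge_base` is a made-up toy corpus; a real setup would call an actual embedding model and a vector store:

```python
import math

def embed(text: str) -> dict[str, int]:
    # Hypothetical stand-in for a real embedding model: raw word counts.
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy knowledge base; in practice these chunks live in a vector store.
knowledge_base = [
    "n8n supports vector store nodes for RAG workflows.",
    "Invoices are processed on the last day of the month.",
]

def build_prompt(question: str, top_k: int = 1) -> str:
    # Retrieve the most similar chunks, then splice them into the prompt.
    q = embed(question)
    ranked = sorted(knowledge_base, key=lambda c: cosine(q, embed(c)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How does n8n handle RAG vector store workflows?"))
```

The retrieved chunk ends up in the `Context:` section of the prompt, which is exactly the text-chunk integration I was describing.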

What do you end up using the most?

Is there a built-in n8n knowledge-base tool, or do you prefer an external one? If so, which?

Thanks!


Check this video out.

Also, most people use something like Supabase, etc.

They use a workflow to store the data, and then just connect the embeddings / vector store to the AI Agent node.

Some people monitor a folder in Google Drive and add new files to the vector store, or pull in websites.
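The ingestion side of that flow (new file appears, split into chunks, embed, upsert into the store) can be sketched like this. Everything here is a stand-in: `fake_embed` replaces a real embedding model, and the `vector_store` list replaces Supabase/pgvector or whatever store you connect in n8n:

```python
def split_into_chunks(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    # Fixed-size chunking with overlap, the common default in text splitters.
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk.strip():
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

def fake_embed(chunk: str) -> list[float]:
    # Placeholder for a real embedding call; the numbers are meaningless.
    return [float(len(chunk)), float(sum(map(ord, chunk)) % 997)]

vector_store: list[dict] = []  # stand-in for a Supabase/pgvector table

def ingest(doc_id: str, text: str) -> int:
    # One "row" per chunk: id, raw text, and its embedding.
    for i, chunk in enumerate(split_into_chunks(text)):
        vector_store.append(
            {"id": f"{doc_id}-{i}", "text": chunk, "embedding": fake_embed(chunk)}
        )
    return len(vector_store)

# Simulate a file landing in the watched folder.
n = ingest("drive-file-1", "Some document synced from a watched Google Drive folder. " * 10)
print(n)
```

In n8n the same steps map onto a trigger node (Drive folder watch), a text splitter, an embeddings node, and a vector store insert.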

Hope this helps

Ok, this seems like a solid solution.

But I’m wondering whether people use something simpler, without having to set up so many steps in between. Is there a way to just query a knowledge base and get a response?