RAG improvement

Describe the problem/error/question

Hello! n8n fan here! I am trying to improve RAG results based on the workflow "Ask Questions About a PDF Using AI".

What is the error message (if any)?

The problem is that PDF documents are usually not converted to text properly. Once they are inserted into the vector database, the AI outputs often do not include all of the relevant information, or the model simply says it cannot answer. Regarding chunking, the best results so far are with the Recursive Character Text Splitter, but it is not reliable enough. I have read about semantic and hierarchical chunking. Any direction on how to implement these techniques in n8n would be really appreciated! Thank you!

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 1.41.0
  • **Database (default: SQLite):** SQLite
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):**
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** npm
  • **Operating system:** Ubuntu 22

Hey @galop,

I have not heard of either of those options, so you may need to use the LangChain Code node to write a custom function that implements whatever version of these techniques LangChain provides.
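As a rough illustration of what such a custom function could look like, here is a minimal semantic-chunking sketch: it splits text into sentences and starts a new chunk whenever the similarity between adjacent sentences drops below a threshold. A real implementation would compute cosine similarity between sentence embeddings from a model; the word-overlap measure below is just a self-contained stand-in.

```javascript
// Minimal semantic-chunking sketch (illustrative only).
// A real version would replace wordOverlapSimilarity with
// cosine similarity between sentence embeddings from a model.

function splitIntoSentences(text) {
  // Naive sentence splitter: break after ., ! or ? followed by whitespace.
  return text.split(/(?<=[.!?])\s+/).filter(s => s.trim().length > 0);
}

function wordOverlapSimilarity(a, b) {
  // Jaccard similarity over lowercase word sets - a crude
  // placeholder for embedding-based similarity.
  const setA = new Set(a.toLowerCase().match(/\w+/g) || []);
  const setB = new Set(b.toLowerCase().match(/\w+/g) || []);
  if (setA.size === 0 || setB.size === 0) return 0;
  let shared = 0;
  for (const w of setA) if (setB.has(w)) shared++;
  return shared / (setA.size + setB.size - shared);
}

function semanticChunk(text, threshold = 0.1) {
  const sentences = splitIntoSentences(text);
  const chunks = [];
  let current = [];
  for (const sentence of sentences) {
    if (current.length === 0) {
      current.push(sentence);
      continue;
    }
    const sim = wordOverlapSimilarity(current[current.length - 1], sentence);
    if (sim < threshold) {
      // Similarity dropped: treat this as a topic shift and close the chunk.
      chunks.push(current.join(' '));
      current = [sentence];
    } else {
      current.push(sentence);
    }
  }
  if (current.length > 0) chunks.push(current.join(' '));
  return chunks;
}
```

The threshold and the sentence splitter here are both assumptions you would need to tune against your actual PDF text.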

Do you have any useful links for semantic and hierarchical chunking so I can do some reading and see if anything jumps out?

Hi @Jon thank you for your reply.

For semantic chunking, please check:

Hierarchical chunking seems to be just a proposal that would require far more development:
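For reference, the core idea behind hierarchical (parent/child) chunking can be sketched as follows: small child chunks are what you embed and search, but each one keeps a reference to its larger parent chunk, which is what would be handed to the LLM. The sizes and record shape below are hypothetical, not from any n8n node.

```javascript
// Sketch of hierarchical (parent/child) chunking: embed and search
// the small childText, but return the larger parentText to the LLM
// so it sees more surrounding context.
function hierarchicalChunks(text, parentSize = 2000, childSize = 400) {
  const records = [];
  for (let p = 0; p * parentSize < text.length; p++) {
    const parent = text.slice(p * parentSize, (p + 1) * parentSize);
    for (let c = 0; c * childSize < parent.length; c++) {
      records.push({
        parentId: p,
        parentText: parent, // stored as metadata, returned at query time
        childText: parent.slice(c * childSize, (c + 1) * childSize), // embedded
      });
    }
  }
  return records;
}
```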

Cheers !

The easiest thing to do is to use a larger chunk size (for example, 4000) and a larger k value (for example, 6 instead of the default 4).

Then use a model that has a large context window, such as Google Gemini 1.5 Flash (1M-token context window), which is also relatively cheap.

This will likely help as the LLM will do a better job when it has more context to work with.
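To make the larger-chunk idea concrete, here is a small sketch of a character-based splitter with overlap; in n8n you would instead set the equivalent chunk size and overlap options on the text splitter node, and raise k in the vector store retrieval options. The overlap value of 200 is an assumption for illustration.

```javascript
// Sketch of a simple character splitter with overlap. Overlapping
// chunks reduce the chance that an answer is cut in half at a
// chunk boundary.
function chunkWithOverlap(text, chunkSize = 4000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` characters
  }
  return chunks;
}
```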

Thank you for the advice, I will try it!