Building the Ultimate RAG setup with Contextual Summaries, Sparse Vectors and Reranking

If you’re using OpenAI embeddings, you’ll probably need to set a matching dimension size when creating the collection.

  • Cohere’s embedding model has a dimension size of 1024.
  • OpenAI’s text-embedding-3-small has a dimension size of 1536.
  • You can’t create a collection with one size of vector and then try to save differently sized vectors to it.
{
  "vectors": {
    "default": {
       "distance": "Cosine",
       "size": "1536"  // <-- example for text-embedding-small
    }
  },
  ...
}
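
If you’re unsure what size an existing collection was created with, you can check it first (a quick sketch, assuming the same @qdrant/js-client-rest client used elsewhere in this template):

const info = await client.getCollection(collectionName);
console.log(info.config.params.vectors);
// e.g. { default: { size: 1536, distance: 'Cosine' } }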

Thank you, that was it! 🙂

Hi,

I went ahead and also tried the local version without any changes, and the Qdrant with BM25 ReRank setup seems to not be properly configured.

It returns “query is not defined” as the answer to the query:

{
  "query": "What is BTC?"
}


query is not defined

Error details

  • n8n version: 1.62.4 (Self Hosted)
  • Time: 10/11/2024, 3:03:31 PM
  • Error cause: {}

I tried to understand the code, but I don’t get where the input is coming from; the query is obviously there.

For anyone else hitting this, the issue (and the fix) is in the retriever:

 const rankedDocs = await retriever.invoke(input);  // Use 'input' instead of 'query'

@Jim_Le, I’m struggling with the following error in the LangChain Code node:
Cannot read properties of undefined (reading 'json') [line 24]

Using latest template (local-only ver) with n8n v1.62.5

UPDATE
Fix for the error in line 24: do NOT change the stock parameters of the “Recursive Character Text Splitter” (chunk size 2000, overlap 0).

@Jim_Le I’m getting a “Problem in node ‘Insert Documents with Sparse Vectors’: Bad Request null” error when using llama3.2 locally.

If I remove the line const res = await client.upsert(collectionName, { points }), the Code node completes correctly.

Update the vector dimensionality to 3072 (the embedding size llama3.2 produces). To ensure the collection is created before inserting documents, add the following code at the beginning of your script:

// Create the collection (dense + sparse vector configs) if it doesn't exist yet
const collectionExistence = await client.collectionExists(collectionName);
if (!collectionExistence.exists) {
  console.log(`Collection "${collectionName}" does not exist. Creating...`);

  const collectionConfig = {
    vectors: {
      default: {
        size: 3072,        // must match your embedding model's dimensionality
        distance: 'Cosine'
      },
    },
    sparse_vectors: {
      bm42: {
        modifier: 'idf'    // apply IDF weighting to the sparse (BM42) vectors
      }
    }
  };

  await client.createCollection(collectionName, collectionConfig);
  console.log(`Collection "${collectionName}" created successfully.`);
}
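
With the collection in place, each point you upsert then needs to match both named vectors. A rough sketch of the expected shape (the id and values here are illustrative, not the template’s actual output):

const points = [{
  id: 1,
  vector: {
    default: denseEmbedding,  // array of 3072 floats from the embedding model
    bm42: { indices: [12, 345, 678], values: [0.82, 0.41, 0.17] }  // sparse vector
  },
  payload: { content: 'chunk text...' }
}];
const res = await client.upsert(collectionName, { points });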

Hi @Jim_Le,

This is amazing! I learned so much about advanced use of n8n and sparse vectors from this. However, I suspect that there might be a bug.

Currently, the vocabulary for sparse vectors is dynamically generated per item using TfidfVectorizer, which results in inconsistent vector spaces across runs and may lead to misaligned vector representations. This is noticeable in that each chunk ends up with a different set of indices.
I assume it should build a shared vocabulary across all chunks first, as sketched below.
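
In other words, something like this (a rough plain-JS sketch of the idea, not the template’s actual TfidfVectorizer code):

// Pass 1: build ONE vocabulary over all chunks, so every sparse vector
// refers to the same index space.
const vocab = new Map();
for (const chunk of allChunks) {
  for (const term of chunk.toLowerCase().split(/\W+/).filter(Boolean)) {
    if (!vocab.has(term)) vocab.set(term, vocab.size);
  }
}

// Pass 2: vectorize each chunk against the shared vocabulary
// (term frequencies only; real TF-IDF would also apply IDF weighting).
function toSparse(chunk) {
  const counts = new Map();
  for (const term of chunk.toLowerCase().split(/\W+/).filter(Boolean)) {
    const idx = vocab.get(term);
    counts.set(idx, (counts.get(idx) ?? 0) + 1);
  }
  return { indices: [...counts.keys()], values: [...counts.values()] };
}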

I am not a programmer, so I hope this makes sense. 🙂

Cheers


Hello Jim! Thank you so much for the article; this content is fantastic and highly relevant to me. I am working on a project where I have encountered a lack of accuracy in some of my agents’ responses. I started studying techniques like RAG Fusion to try to improve the quality of the model’s responses, but I still need to understand how to implement this technique using n8n.


Hello,

Well, I got an error:

[screenshot not shown]

Do you know what is wrong?


Can confirm. We update about every three versions, and since the last update the workflow has been broken. We’re on 1.73.1 as of this moment.
I’ve tried to figure out the cause but couldn’t find it. I can retrieve dense vectors by themselves, and the generation of sparse vectors still works. However, retrieving a hybrid of sparse and dense vectors no longer works for me. @Jim_Le, do you have any insight into what might be happening here?

Thanks both for the heads up.

After a quick check, I’ve boiled it down to two likely causes (and this is my best guess!):

  • Qdrant’s API has stricter schema validation
  • n8n’s “custom workflow tool” has been updated.

For context, I’m currently on 1.77.0 and tested with that version, but this reply should still be relevant to 1.73.0 (I think).

@Issa2024 Unfortunately, I wasn’t able to reproduce the error in your screenshot; my test document (bitcoin.pdf) was inserted into Qdrant without issue. My assumption is it may have to do with your Qdrant version, and my best advice is to try to debug the payload.

Try capturing a sample of the points and run this as a query within the qdrant dashboard. If there is an error, it’ll be clearer in the dashboard.

console.log(points.slice(0, 5)); // <-- run this sample as a query in the Qdrant dashboard

@Poppi It seems your issue might be related to the new “custom workflow tools” changes. If you locate this line in the retrieval…

- const sparseVector = JSON.parse(await sparseVectorTool.invoke(query));

…and append .response to the end (the updated tool appears to wrap its output, so the JSON string now lives under .response):

+ const sparseVector = JSON.parse(await sparseVectorTool.invoke(query)).response;
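
Putting it together, the retrieval call could then look roughly like this (a sketch assuming Qdrant ≥ 1.10’s Query API and the collection config from earlier; denseVector stands in for the query embedding computed elsewhere in the node):

const sparseVector = JSON.parse(await sparseVectorTool.invoke(query)).response;

const results = await client.query(collectionName, {
  prefetch: [
    // dense leg: search the named 'default' vector
    { query: denseVector, using: 'default', limit: 20 },
    // sparse leg: search the named 'bm42' sparse vector
    { query: { indices: sparseVector.indices, values: sparseVector.values }, using: 'bm42', limit: 20 },
  ],
  query: { fusion: 'rrf' },  // merge both result lists with reciprocal rank fusion
  limit: 10,
  with_payload: true,
});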

I’m getting around to updating the templates but no guarantees this week.
Hope this helps!

Hi @Jim_Le, I have a quick question. I don’t know why the TF-IDF Node always crashes in my n8n, even with simple text. Does this happen to you as well?