The idea is:
With the introduction of the Langchain re-ranker connection type (i.e. `NodeConnectionTypes.AiReranker`), it seems like a missed opportunity not to expose it as an output in the Langchain Code node.
My use case:
Cohere is only one of many re-rankers with Langchain support, but since it is the only option in n8n right now, it seems fair to give other projects a way to integrate their own re-rankers until official support catches up. For example, a local BM25 retriever could stand in as a re-ranker:
```javascript
const { BM25Retriever } = require("@langchain/community/retrievers/bm25");

// Build a BM25 retriever over the incoming documents and
// return the top `limit` documents for the query.
const retriever = BM25Retriever.fromDocuments(docs, { k: limit });
const rankedDocs = await retriever.invoke(query);
return rankedDocs;
```
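For context, the BM25 scoring that the retriever above relies on is small enough to sketch without any dependency. This is a toy sketch, not the `@langchain/community` implementation: the function name `bm25Rank`, the whitespace tokenizer, and the usual `k1 = 1.5`, `b = 0.75` defaults are all illustrative assumptions.

```javascript
// Toy BM25 ranking sketch (no dependencies). Assumes plain-string docs,
// naive tokenization, and the common k1/b defaults; for illustration only.
function bm25Rank(docs, query, k1 = 1.5, b = 0.75) {
  const tokenize = (s) => s.toLowerCase().split(/\W+/).filter(Boolean);
  const docTokens = docs.map(tokenize);
  const N = docs.length;
  const avgLen = docTokens.reduce((sum, t) => sum + t.length, 0) / N;

  // Document frequency: how many docs contain each term.
  const df = {};
  for (const tokens of docTokens) {
    for (const t of new Set(tokens)) df[t] = (df[t] || 0) + 1;
  }
  const idf = (t) =>
    Math.log(1 + (N - (df[t] || 0) + 0.5) / ((df[t] || 0) + 0.5));

  const scores = docTokens.map((tokens) => {
    // Term frequency within this document.
    const tf = {};
    for (const t of tokens) tf[t] = (tf[t] || 0) + 1;
    let score = 0;
    for (const q of tokenize(query)) {
      const f = tf[q] || 0;
      score +=
        (idf(q) * f * (k1 + 1)) /
        (f + k1 * (1 - b + (b * tokens.length) / avgLen));
    }
    return score;
  });

  // Return docs sorted by descending BM25 score.
  return docs
    .map((doc, i) => ({ doc, score: scores[i] }))
    .sort((x, y) => y.score - x.score);
}
```

A real node would of course operate on Langchain `Document` objects rather than strings, but the point stands: nothing about this requires internet access or a paid API.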
I think it would be beneficial to add this because:
- For self-hosted and local n8n setups, this would allow using locally hosted re-rankers (e.g. Ollama or a BM25 retriever) that require neither internet access nor a paid service
- This is for the Langchain Code node enjoyers!
- Flexibility and control over AI tools is what makes n8n so great!
Any resources to support this?
- Building the Ultimate RAG setup with Contextual Summaries, Sparse Vectors and Reranking: in that article, the local-only version of the RAG template, which uses the BM25 retriever, has attracted far more clicks than the cloud version that uses Cohere.
Are you willing to work on this?
- n/a.