Build a Custom Q&A Chain using Cohere Rerank (Using Langchain Code Node)

This is another quick tutorial in the Langchain Code node series. This time we’ll be using the Langchain Code Node to build a custom retriever that uses Cohere’s Rerank API to give us significantly better results in our RAG-powered workflows.
If you haven’t already, please also check out the previous post in the series!

Edit 1: Replaced hardcoded vector store code with a Vector Store subnode.
Edit 2: Added a prerequisite to set NODE_FUNCTION_ALLOW_EXTERNAL.

Background

This tutorial is in response to this question post. At the time of writing, there isn’t much documentation on building your own custom retrieval code, so here we are.

Prerequisites

  • Self-hosted n8n. The Langchain Code Node is only available on the self-hosted version.
  • Ability to set the NODE_FUNCTION_ALLOW_EXTERNAL environment variable. For this tutorial, you’ll need to enable the @langchain package (see the Docker Compose sketch after this list).
    • set NODE_FUNCTION_ALLOW_EXTERNAL=@langchain
  • Pinecone API Key & vector store. I’m using Pinecone for convenience, but if Pinecone is not your thing, feel free to swap it out for any n8n-supported vector store.
  • Cohere API Key. We’ll need this to access the Rerank API. Grab one from the Cohere developers portal.
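
For reference, here’s one way to set that variable if you run n8n via Docker Compose. This is a minimal sketch, not from the original setup: the service name and image are assumptions, so adjust them to your deployment.

# docker-compose.yml (sketch)
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - NODE_FUNCTION_ALLOW_EXTERNAL=@langchain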

Step 1. Add the Langchain Code Node

The Langchain Code Node is an advanced node designed to fill in for functionality n8n doesn’t yet support natively. It’s a really powerful tool but requires coding know-how; only use it if you know what you’re doing!

  • In your workflow, open the nodes side panel.
  • Select Advanced AI → Other AI Nodes → Miscellaneous → Langchain Code.
  • The Langchain Code Node should open in edit mode, but if not, you can double-click the node to bring up its editor.
  • Under Inputs
    • Add an input with type “Main”, max connections set to “1” and required set to “true”
    • Add an input with type “Language Model”, max connections set to “1” and required set to “true”
    • Add an input with type “Vector Store”, max connections set to “1” and required set to “true”
  • Under Outputs, add an output with type “Main”.
  • Go back to the canvas.
  • On the Langchain Code node you just created, add Language Model and Vector Store subnodes. I’m using OpenAI for both the LLM and embeddings, but you can use any you like (the embeddings must match whatever you used to insert the data, however).

Step 2. Writing the Langchain Code

For this tutorial, I’ve tried to keep the implementation as close to a standard chat template chain as I can. The introduction of Rerank does add a few extra steps, but I hope it’s not too bad… if you know how to improve it, let me know!

  • Open the Langchain Code Node in edit mode again.
  • Under Code → Add Code, select the Execute option.
    • Tip: “Execute” for main node, “Supply Data” for subnodes.
  • In the Javascript - Execute textarea, we’ll enter the following code.
    • Be sure to change <MY_COHERE_API_KEY> before running the code!
// 1. Get Inputs
const { chatInput } = this.getInputData()[0].json;
const llm = await this.getInputConnectionData('ai_languageModel', 0);
const vectorStore = await this.getInputConnectionData('ai_vectorStore', 0);
const systemPrompt = 'You are a helpful assistant.';

// 2. Setup Cohere Rerank
const { CohereRerank } = require("@langchain/cohere");
const cohereRerank = new CohereRerank({
  apiKey: '<MY_COHERE_API_KEY>', // Replace with your Cohere API key
  model: "rerank-english-v2.0", // Default
});

// 3. Construct a chain using a ContextualCompressionRetriever
const { createStuffDocumentsChain } = require("langchain/chains/combine_documents");
const { ChatPromptTemplate } = require("@langchain/core/prompts");
const { StringOutputParser } = require("@langchain/core/output_parsers");
const { ContextualCompressionRetriever } = require("langchain/retrievers/contextual_compression");

const retriever = new ContextualCompressionRetriever({
  baseCompressor: cohereRerank,
  baseRetriever: vectorStore.asRetriever(),
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt +'\nYou have the following context:\n{context}\n\n'],
  ["human", "{question}"],
]);
const ragChain = await createStuffDocumentsChain({
  llm,
  prompt,
  outputParser: new StringOutputParser(),
});

// 4. Use our chatInput in our chain
const output = await ragChain.invoke({
  context: await retriever.invoke(chatInput),
  question: chatInput,
});

return [ { json: { output } } ];
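
One optional tweak: the reranker decides how many documents get passed along to the LLM. In @langchain/cohere, CohereRerank accepts a topN option for this. It isn’t used in the snippet above, so treat this as a sketch:

// Optional: cap how many reranked documents reach the LLM.
// topN is a CohereRerank option; 3 here is just an illustrative value.
const cohereRerank = new CohereRerank({
  apiKey: '<MY_COHERE_API_KEY>',
  model: "rerank-english-v2.0",
  topN: 3,
});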

Step 3. We’re Done!

All that’s left to do is add a Chat Trigger node to the left side of your Langchain Code node. Congratulations! You’ve now successfully built a custom Q&A Chain using Cohere Rerank.

Honestly speaking, this was quite a challenge to put together! Here are my thoughts on the whole process:

  • Developing in the Langchain Code Node is difficult because of the lack of autocomplete and debugging tools. To keep your sanity, it’s recommended to set up tighter feedback loops, such as using pinned data and the manual trigger.
  • Langchain docs are a maze and I ended up relying heavily on Google to figure out what I needed. If you’re finding yourself in the same spot, keep at it; you’ll get there eventually 🙂

Cheers,
Jim
Follow me on LinkedIn or Twitter.
If you like n8n & AI topics, be sure to check out my other posts in the forum.


Demo Template


Hi Jim,
I just tried your template and I get the following error:

Cannot find module ‘@langchain/cohere’ [line 8]
VMError

Do I have to enable the cohere package manually?

I am running self-hosted v1.44.2

Ah, my bad. I think you’ll need to set the following Docker environment variable to get access to the @langchain package.

NODE_FUNCTION_ALLOW_EXTERNAL=@langchain

For reference, this is what I’ve set my environment variables to…

- NODE_FUNCTION_ALLOW_BUILTIN=*
- NODE_FUNCTION_ALLOW_EXTERNAL=node-fetch,cheerio,@langchain,@pinecone-database/pinecone

I’ll update the tutorial. Thanks!

Amazing!

I am using Qdrant and Supabase for vector stores. What would be the config for them?

Thank you!

Hi @Jim_Le, thanks for your great posts. I am just getting started with LangChain and find it hard to find good resources on how to properly use it with n8n. I was playing around with AI Agents (and found they work quite unpredictably) and then used the “Q&A Chain” node, which worked better. But this comes with restrictions such as the missing chat memory. So I tried to set up a custom LangChain node with code and tried to adapt your example for a chat including a Qdrant DB. However, I run into errors which I do not understand. The code is quite similar to yours, but I removed the Cohere part since I want to be 100% local.

My code

const systemPrompt = 'You are a helpful assistant.';

// Handing over the chat input of the user
const { chatInput } = this.getInputData()[0].json;
console.log('Chat input: ', chatInput);

// Attaching the llm
const llm = await this.getInputConnectionData('ai_languageModel', 0);

// Attaching the vector store and mapping the retriever
const vectorStore = await this.getInputConnectionData('ai_vectorStore', 0);
const { VectorStoreRetriever } = require('@langchain/core/vectorstores');

const retriever = new VectorStoreRetriever(vectorStore);

// Designing a prompt that later will be populated with the actual content.
// For now we see some placeholders like "context" and "question" in here.
// These will be replaced when the chain is invoked by giving the values as parameters
const { ChatPromptTemplate } = require("@langchain/core/prompts");
const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt +'\nYou have the following context:\n{context}\n\n'],
  ["human", "{question}"],
]);

const { createStuffDocumentsChain } = require("langchain/chains/combine_documents");
const { StringOutputParser } = require("@langchain/core/output_parsers");

const ragChain = await createStuffDocumentsChain({
  llm,
  prompt,
  outputParser: new StringOutputParser(),
});

// Invoking the chain and handing over the relevant information to work on
// This is replacing the placeholders defined in the prompt template
const output = await ragChain.invoke({
  // I think this part is asking the vector store for information
  // using the question of the user
  context: await retriever.invoke(chatInput),
  // After querying the vector store, the question plus the retrieved
  // context is sent to the base model
  question: chatInput,
});

console.log(output);

return [ {json: { output }}];

The error I run into is Expected a Runnable, function or object. [line 28], where createStuffDocumentsChain expects a function in one of the parameters but cannot find it. How does this work in your example? What I see is that you are using the same arguments (and apparently it is working?).

Extended error message:

Error: Expected a Runnable, function or object. Instead got an unsupported type.
    at _coerceToRunnable (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1857:15)
    at Array.map (<anonymous>)
    at Function.from (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1299:44)
    at createStuffDocumentsChain (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/combine_documents/stuff.cjs:28:41)
    at ReadOnlyHandler.apply (/usr/local/lib/node_modules/n8n/node_modules/@n8n/vm2/lib/bridge.js:490:11)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Code:28:24
    at processTicksAndRejections (node:internal/process/task_queues:95:5)

My workflow:

I would greatly appreciate your thoughts on this 🙂

Hey @hbertsch :wave:

If you really want to learn langchain, it’s probably better to learn it in a code environment first, i.e. with Python or JavaScript. n8n has to bring some subtle changes into the mix to make it all work, and because those changes are mostly hidden, it’s likely you’ll miss important concepts of langchain.
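
To show what I mean, here’s roughly the same stuff-documents chain as a standalone Node.js script, outside n8n. Treat it as a sketch: it assumes the langchain and @langchain/openai packages are installed and OPENAI_API_KEY is set, it uses an in-memory vector store with made-up sample text, and the model name is just an example.

const { ChatOpenAI, OpenAIEmbeddings } = require("@langchain/openai");
const { MemoryVectorStore } = require("langchain/vectorstores/memory");
const { ChatPromptTemplate } = require("@langchain/core/prompts");
const { StringOutputParser } = require("@langchain/core/output_parsers");
const { createStuffDocumentsChain } = require("langchain/chains/combine_documents");

(async () => {
  // Toy in-memory vector store; the sample text is made up for this sketch
  const vectorStore = await MemoryVectorStore.fromTexts(
    ["n8n is a workflow automation tool."],
    [{ source: "example" }],
    new OpenAIEmbeddings()
  );
  const retriever = vectorStore.asRetriever();

  // Same prompt shape as the n8n version above
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a helpful assistant.\nYou have the following context:\n{context}"],
    ["human", "{question}"],
  ]);

  const ragChain = await createStuffDocumentsChain({
    llm: new ChatOpenAI({ model: "gpt-4o-mini" }), // model name is an assumption
    prompt,
    outputParser: new StringOutputParser(),
  });

  const question = "What is n8n?";
  const output = await ragChain.invoke({
    context: await retriever.invoke(question),
    question,
  });
  console.log(output);
})();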

Regarding your code, you can fix it by replacing the following lines:

// Attaching the vector store and mapping the retriever
const vectorStore = await this.getInputConnectionData('ai_vectorStore', 0);
const retriever = vectorStore.asRetriever();
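
As a side note (an addition beyond the original reply): asRetriever also optionally takes the number of documents to fetch, if you want to control how much context comes back.

// Sketch: fetch 4 documents instead of the default; 4 is just an illustrative value
const retriever = vectorStore.asRetriever(4);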

I don’t recommend you do it this way, however, since AI Agents do have the Vector Store Tool, which serves this purpose. See my review here: Review: Vector Store Tool (+ Langchain Code Alternative!)

Hi @Jim_Le , thank you for taking a look into this.

I directly tested your suggestion and substituted the code. Now I get [ERROR: vectorStore.asRetriever is not a function [line 12]]. Maybe I am missing a package or something? My instance has the following available: N8N_NODE_FUNCTION_ALLOW_EXTERNAL=langchain,openai

If you really want to learn langchain, it’s probably better to learn it in a code environment first

Yes, I guess you are correct. I noticed that the n8n implementation is different. I will try it “raw”.

The reason why I started doing it in n8n was an issue with an AI Agent that I could not understand and resolve. Can you check this new ticket? I described it there.

Many thanks!