Using Redis VectorStore Search in n8n

Redis is a popular key-value database used by some of the world’s biggest companies to solve a variety of high-performance challenges such as job queues and data caching. But did you know you can also use it as a vector store? This can be an attractive option if your team doesn’t want to introduce yet another service to an already overloaded stack, or prefers self-hosted solutions.

In this article, we’ll go over some examples of how to implement Redis vector store support in your n8n AI workflows. Before we begin, if you enjoy AI topics, check out my other AI posts on the forum and don’t hesitate to reach out if you have any AI problems to solve! Also follow me on LinkedIn and Twitter/X.

Built-in Redis Nodes

To be clear, n8n does have ready-to-use built-in Redis support. If these nodes fit your use case, the custom vector store approach in this article is entirely optional.

  • Redis Node [Docs]
    Handles all general-purpose Redis operations.
  • Redis Chat Memory Node [Docs]
    Adds long-term memory to your Conversational AI Agent sessions.

Prerequisites

  • Self-hosted n8n. The Langchain Code Node is only available on the self-hosted version.
  • Ability to set the NODE_FUNCTION_ALLOW_EXTERNAL environment variable. The Langchain Code Node needs this to import the Redis client library. For this tutorial specifically, you’ll need to set NODE_FUNCTION_ALLOW_EXTERNAL=redis (see the Docker Compose sketch after this list).
  • A Redis instance. This instance should be reachable by n8n and must support vector search features (for example, via Redis Stack, which bundles the RediSearch module). For this article, I’m running my instance locally as a Docker container.
  • Some technical chops! To use Redis as a vector store, we’ll have to get hands-on with the Langchain Code Node. The task isn’t too daunting, but some coding experience will help!
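
If you’re starting from scratch, here’s a minimal Docker Compose sketch covering the last two prerequisites. The service names, image tags, and port mappings are assumptions for illustration; adapt them to your own setup. Note that the redis_db service name matches the connection URL used in the code later on.

# Hypothetical docker-compose.yml for local testing; adjust to your environment
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      # Allows the Langchain Code Node to require("redis")
      - NODE_FUNCTION_ALLOW_EXTERNAL=redis
  redis_db:
    # Redis Stack bundles the RediSearch module needed for vector search
    image: redis/redis-stack:latest
    ports:
      - "6379:6379"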

Inserting Docs Into Redis VectorStore

To create our Redis vectorstore node, add a Langchain Code Node to your workflow.

  • Under the “Code” label, click “Add Code” and choose “Execute”
  • Under “Inputs”, add the following
    • “Main”, max connections of 1 and required to true.
    • “Embedding”, max connections of 1 and required to true.
    • “Document”, max connections of 1 and required to true.
  • Under “Outputs”, add the following
    • “Main”

Now, in the textbox under our “Execute code” option, we first need to establish a connection to the Redis database. We can do this using the Redis SDK, which comes installed with n8n. (Note: if n8n complains that redis is not found, you may need to add NODE_FUNCTION_ALLOW_EXTERNAL=redis to your environment variables.)

const { createClient } = require("redis");
const client = createClient({ url: "redis://redis_db:6379" });
await client.connect();

Next, we’ll use this SDK client with the Langchain Redis vector store library. This keeps the code compatible with n8n’s other AI component nodes.

const embeddings = await this.getInputConnectionData('ai_embedding', 0);

const { RedisVectorStore } = require("@langchain/redis");
const vectorStore = new RedisVectorStore(embeddings, {
  redisClient: client,
  indexName: "n8n-docs", // change me!
});

Finally, we can use the attached document loader input to process our inputData values and call our vectorstore instance to add the documents to our Redis database.

const inputData = await this.getInputData();
const documentLoader = await this.getInputConnectionData('ai_document', 0);

const processedDocs = await documentLoader.processAll(inputData);
await vectorStore.addDocuments(processedDocs);

// Return the items so the node's Main output passes them downstream
return inputData;

Querying Docs From Redis VectorStore

Querying is very similar to inserting; the code is essentially the same, but instead of adding documents we perform a similarity search.

Start by adding a Langchain Code Node to your workflow

  • Under the “Code” label, click “Add Code” and choose “Execute”
  • Under “Inputs”, add the following
    • “Main”, max connections of 1 and required to true.
    • “Embedding”, max connections of 1 and required to true.
  • Under “Outputs”, add the following
    • “Main”

We first establish the client and the Langchain Redis vector store as before.

const { createClient } = require("redis");
const client = createClient({ url: "redis://redis_db:6379" });
await client.connect();

const embeddings = await this.getInputConnectionData('ai_embedding', 0);

const { RedisVectorStore } = require("@langchain/redis");
const vectorStore = new RedisVectorStore(embeddings, {
  redisClient: client,
  indexName: "n8n-docs", // change me!
});

Next, we’ll use the vector store’s similaritySearch() function. Note that we handle the input as a loop, since the node could receive multiple items in a real workflow.

const inputData = await this.getInputData();

// Run one similarity search per incoming item, in parallel,
// then flatten the per-item result arrays into a single list
const searches = await Promise.all(inputData.map(item => {
  const { query, topK, filter } = item.json;
  return vectorStore.similaritySearch(query, topK, filter);
}));
const results = searches.flat();

return [{ "json": { "output": results } }];
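
For reference, each incoming item should carry the search parameters in its json. A hypothetical item might look like the following (filter is optional and can be omitted):

{
  "query": "How do I create a webhook trigger?",
  "topK": 4
}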

Using Redis VectorStore as Part of a Q&A Chain (Retriever)

Lastly, using the Redis vector store in your AI chains and agents is quite simple thanks to the Langchain Code node. This implementation creates a vector store subnode which is intended to be used as an input to the Vector Store Retriever subnode.

Add a Langchain Code Node to start

  • Under the “Code” label, click “Add Code” and choose “Supply Data”
  • Under “Inputs”, add the following
    • “Embedding”, max connections of 1 and required to true.
  • Under “Outputs”, add the following
    • “VectorStore”

As we’re intending to use this subnode as part of a chain, we only need to return the vectorstore instance and nothing more.

const { createClient } = require("redis");
const client = createClient({ url: "redis://redis_db:6379" });
await client.connect();

const embeddings = await this.getInputConnectionData('ai_embedding', 0);

const { RedisVectorStore } = require("@langchain/redis");
const vectorStore = new RedisVectorStore(embeddings, {
  redisClient: client,
  indexName: "n8n-docs",
});

return vectorStore;
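
Once you connect this subnode’s output to the Vector Store Retriever subnode, the retriever wraps our instance for use in a Q&A chain. Conceptually, it behaves like the sketch below (an illustration with a hypothetical query, not the node’s actual source):

// Roughly what the retriever does with the vector store we supplied
const retriever = vectorStore.asRetriever(4); // fetch the top 4 matches
const docs = await retriever.getRelevantDocuments("How do I create a webhook trigger?");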

Conclusion

Enabling Redis as a vector store option in n8n can be a nice way to reuse existing infrastructure and cut back on overhead, or to really drive home the self-hosted approach for those who have privacy concerns or are cost-sensitive.

Hopefully this article also helps demonstrate how the Langchain Code node is a really cool way to extend the functionality of the n8n platform. The techniques applied here can be replicated for other vector store options as well (granted, Langchain has to support them first!).

Until next time!
Jim
Follow me on LinkedIn or Twitter/X.


Could the same thing be done with other databases (e.g. MS SQL)?

Technically, yes. You can see a list of compatible vector stores on the Langchain website here: Vector stores | 🦜️🔗 Langchain. Note that some of these vector store plugins are not installed by default in n8n, so you’ll need to install them separately.

I’m not seeing MS SQL in that list, however, in which case you would have to write your own custom Langchain vector store plugin.