🦜⛓️ LangChain - Memory + Chat

Actually, I disagree. Having it all in one platform will provide a lot of value, especially in combination with all the integrations we created and all the work we put into the building experience over the last years. I am sure it will be a big step up from already existing projects.

2 Likes

I’ve seen demos and played with the Alpha. I believe there’s value in combining n8n’s capabilities and LangChain. And that’s exactly why we’re excited to launch this BETA, as detailed feedback after hands-on use will be invaluable in shaping this feature moving forward. :pray: :robot:

While n8n+LangChain may not be as useful for simple use cases where it just orchestrates Flowise, the LangChain use cases I’ve been exploring involve integrating traditional automation and AI steps into a single process, interwoven to solve more and more complex use cases. I’m also confident it will make it more intuitive to build and maintain AI-enhanced workflows/automated processes.

I think n8n has a good shot at democratizing technologies like LangChain, similar to how we’re doing with coding and APIs. And that is a part of our core mission - giving people technical superpowers.

I’ll end with one specific example: I still don’t fully get vector databases, I do… but I’m shaky. I’m learning, but I don’t think everyone should have to (or can). As a product designer, I’m fairly confident we can make nodes that abstract away much of that complexity (and still have the building-block nodes for the roll-it-yourselfers). If we can solve enough of those tricky parts (like n8n does today for auth), a lot more people can build ever more powerful workflows. And that’s worth exploring, I reckon :rocket:

6 Likes

Maybe you could implement something like this:

A node that uses Supabase as a vector database, using OpenAI for the embeddings.
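For illustration, something roughly like this sketch with LangChain’s JS library (assuming the standard `documents` table and `match_documents` function from the LangChain Supabase setup; the env var names, table name, and texts here are just placeholders, and import paths may differ between LangChain versions):

```typescript
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";

// Placeholder environment variables for your own Supabase project.
const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Embed a few texts with OpenAI and insert them into the "documents" table.
const store = await SupabaseVectorStore.fromTexts(
  ["n8n is a workflow automation tool", "LangChain chains LLM calls together"],
  [{ source: "notes" }, { source: "notes" }],
  new OpenAIEmbeddings(),
  { client, tableName: "documents", queryName: "match_documents" }
);

// Retrieve the most similar document for a query.
const results = await store.similaritySearch("What does n8n do?", 1);
console.log(results);
```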

@Kaol Ah yes, agree. A node for that has already been implemented.

Great news. We are launching the Beta version today on Product Hunt:
https://www.producthunt.com/posts/n8n-langchain-integration
We obviously really appreciate all upvotes and help with spreading the word.

You will also find more information about the LangChain Beta here:
https://n8n.io/langchain/
https://docs.n8n.io/langchain/

6 Likes

It looks pretty cool, can’t wait to play with this using my local Docker image when it comes out.

I had a little play with the beta online and I think I need a little education on some of the services and getting used to the inputs and outputs, but I am very excited with where this is going.

1 Like

Opened a template, removed the Pinecone node and added a local vector store, but I am unable to drag a connector to connect them?

I understand that in the beta circles connect to circles and diamonds to diamonds, but how do we use an in-memory vector store?

It is generally always best to open a new topic for issues, just so that it does not get too messy and things can be found more easily.

To answer your question.

  1. Not totally correct. The diamond ones can only be connected to the matching diamond ones. So a model input can only connect to a model output; it is not possible to connect a model output to a memory input.
  2. Yes, it really looks like something is wrong here. The node has a connector that does not get used anywhere else anymore and got replaced by another one. But even if that gets fixed, it still would not make sense, as there should normally be one node to read data and another one to load it (as sketched below). I did not create that node, so I will check with the person who did and then follow up here.
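For reference, here is a minimal sketch of what an in-memory vector store does in LangChain’s JS library, assuming OpenAI embeddings (the texts and metadata are placeholders): one step loads documents into the store, and a separate step queries it, which is why two nodes make sense.

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// "Load" step: embed some texts and keep the vectors in memory.
const store = await MemoryVectorStore.fromTexts(
  ["n8n workflows can mix automation and AI steps"],
  [{ source: "example" }],
  new OpenAIEmbeddings()
);

// "Read" step: query the store for the most similar document.
const docs = await store.similaritySearch("What can n8n workflows do?", 1);
console.log(docs);
```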

@RedPacketSec Please pull the latest Docker image. There are now two new nodes that behave as they should.

2 Likes

Truly great news, thank you so much! Having LangChain power inside n8n is awesome in my mind. Are there any plans to support streaming and to be able to pass the stream on to RespondToWebhook?
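(For context, this is roughly what token streaming looks like in LangChain’s JS library, a sketch with assumed defaults and independent of how n8n would eventually surface it; the prompt is just a placeholder:)

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// Stream tokens as they are generated instead of waiting for the full reply.
const model = new ChatOpenAI({
  streaming: true,
  callbacks: [
    {
      // Called once per generated token; this is what a webhook response
      // would need to forward chunk by chunk.
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

await model.invoke("Explain in one sentence what n8n does.");
```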

1 Like

Welcome to the community @Fredrik_Ekelund!

Thanks, that is great to hear. Yes, that is planned. Sadly, we do not have an ETA right now.

3 Likes

Hi @jan, I was wondering about the structure of the n8nio/n8n:ai-beta docker image:

  • Does it contain all the functionality of the standard n8n image, just with LangChain functionality added?
  • Is there a possibility to integrate the LangChain functions into the latest image soon? (Maybe marking them as a preview if not fully stable)
  • Is it possible to just switch from the standard n8n image to the n8n-ai image when a stable one is released?
  • Is there any rough timeframe for a stable ai release?

I really want to get started on that inside n8n. I have been using FlowiseAI for a while now, and seeing things there that n8n solved a long time ago, I am not really keen to keep going down that two-track path.

Hey @prononext,

There have been a few posts on this now :slightly_smiling_face:

The LangChain beta is built on top of the normal n8n build, so the same functionality will be there, but it will be a bit older, so not all the latest features will be included.

We will be merging the LangChain features into the main release once we are happy with the beta; we have already started working on some of this in the background.

It will be one image, so you wouldn’t go from n8n to n8n-ai, but you might want to go the other way, from the ai beta to the main n8n release. That should be OK, but we will need to do testing for this when the time comes.

I guess a stable release depends on how you define stable, but I did see a post the other day saying it would be in the main release in a few weeks.

Hopefully this helps.

3 Likes

We actually try to merge “master” into the “ai-beta” branch regularly. So it should most of the time offer all the features of the “latest” release, sometimes even a little bit more (as we merge in at “random” times and not only after “latest” gets released). Obviously, this means that there is an even higher chance of bugs in that version.

2 Likes

I’m using the ai-beta; it would be good to know when I can swap back to the main master :wink:

As soon as the merge has happened we will let everyone know :slight_smile:

2 Likes

New version [email protected] got released, which includes GitHub PR 7336.

6 Likes

What happened to ARM support? Tried to update it locally via npm on Mac and am getting some errors.

@Kool_Baudrillard This has nothing to do with this feature request. Please open a new topic about that. Thanks!