Hi everyone,
Asking for ideas and inspiration to solve an issue with a workflow I have with n8n.
I am using the OpenAI component to generate some content from a prompt and feed it into subsequent analysis steps.
My issue is the following: because I have no way to keep a short-term memory of previous runs, the workflow generates 98% similar content (at least in ideas or topics) on each run. I have no way to provide the model with a memory of past outputs so I can instruct it not to repeat itself each time the workflow triggers.
I am looking for ideas or inspiration on how to make this happen in n8n. Any thoughts?
Rad
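To illustrate what I mean, outside of n8n I would sketch the run-to-run memory roughly like this: persist the topics from earlier runs and inject them into the next prompt. This is just an illustration of the idea, not n8n code; the file name and function names are made up.

```python
import json
from pathlib import Path

# Hypothetical file used to persist topics between workflow runs.
MEMORY_FILE = Path("previous_topics.json")

def load_previous_topics():
    """Return the list of topics generated in earlier runs, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_topic(topic):
    """Append the new topic so the next run can avoid repeating it."""
    topics = load_previous_topics()
    topics.append(topic)
    MEMORY_FILE.write_text(json.dumps(topics))

def build_prompt(base_prompt):
    """Inject prior topics into the prompt to steer the model away from them."""
    previous = load_previous_topics()
    if previous:
        avoid = "; ".join(previous)
        return f"{base_prompt}\nDo not reuse any of these topics: {avoid}"
    return base_prompt
```

Something equivalent inside n8n is exactly what I am missing.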
Hi @Rad, perhaps you want to have a look at the LangChain functionality currently in beta? It comes with support for various memory sub-nodes; check our docs for the full list.
That is so cool @MutedJam, but which version is this available in? I am using 1.9.3 and cannot see this node anywhere.
That is not part of the regular version yet. The following Docker image has to be used: docker.n8n.io/n8nio/n8n:ai-beta.
More information can be found on:
https://docs.n8n.io/langchain/
https://n8n.io/langchain/
Thank you so much @jan
I am trying to persist the memory to a Redis chat memory. Would you have any idea how to configure the connection to it? My docker-compose file doesn't contain any Redis service at all, so I am wondering whether one should be added to the compose file ahead of time.
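Something like this is what I have in mind for the compose file; the service name, image tag, and port mapping below are my assumptions, not anything from the n8n docs:

```yaml
# Hypothetical addition to docker-compose.yml: a Redis service that a
# Redis chat memory node could connect to. The n8n credentials would
# then point at host "redis" (the service name) on port 6379.
services:
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```

Does that look like the right direction, or does the memory node expect something else?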
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.