Hi @Joe_B
If you want to implement an AI Agent with long-term memory, simple (buffer) memory isn't enough. To enable your AI Agent to refer back to past issues meaningfully, you can combine conversation-history techniques with RAG (Retrieval-Augmented Generation):
1. Use a Persistent Memory Store
- Instead of just a short buffer, store memory in a long-term memory system:
- Database (PostgreSQL, MongoDB, Redis)
- Vector store (embedded semantic memory via Pinecone, etc.)
- n8n supports memory via MongoDB Atlas Vector Store or built-in memory nodes for this purpose.
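As a rough sketch of what one long-term memory record could look like (the schema and field names here are my own assumptions, not an n8n convention; in practice this would be a row or document in your database):

```javascript
// Hypothetical shape of one long-term memory record.
// sessionId groups records per user/conversation; createdAt
// allows recency-based filtering when retrieving later.
function makeMemoryRecord(sessionId, question, answer) {
  return {
    sessionId,                            // e.g. the chat/user identifier
    question,                             // the user's inquiry
    answer,                               // the agent's reply
    createdAt: new Date().toISOString(),  // timestamp for recency filters
  };
}

const record = makeMemoryRecord(
  "user-42",
  "Printer won't connect to Wi-Fi",
  "Try resetting the printer's network module."
);
```

Whatever store you pick, keeping a consistent record shape makes the later retrieval and prompt-building steps much simpler.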
2. Retrieve Similar Past Interactions
- Use a Get / Query node prior to the AI Agent to retrieve relevant past issues.
- If using embeddings, perform a similarity search to fetch past conversations similar to the current query.
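If you go the embeddings route, the similarity search boils down to ranking stored vectors by cosine similarity against the query vector. A minimal sketch (the `items` shape is an assumption matching a generic vector store, not a specific n8n node output):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored items most similar to the query embedding.
function topK(queryEmbedding, items, k = 3) {
  return items
    .map((item) => ({ ...item, score: cosine(queryEmbedding, item.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A managed vector store (Pinecone, MongoDB Atlas Vector Search, etc.) does this ranking for you at scale; the sketch just shows what "similarity search" means here.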
3. Include Retrieved Memory in Prompt Context
- Use a Function or Set node to compile the memory into the AI prompt:
```
Past interactions:
[Insert summaries or transcript items...]

New inquiry:
[User’s new message]
```
- Feed this into the AI Agent node so the model can reason using historical context.
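In a Function/Code node, assembling that template could look like the following sketch (the `summary` field name is illustrative, not a fixed n8n field):

```javascript
// Compile retrieved memory items and the new inquiry into one prompt string.
function buildPrompt(pastItems, newMessage) {
  const history = pastItems
    .map((item, i) => `${i + 1}. ${item.summary}`)
    .join("\n");
  return [
    "Past interactions:",
    history || "(none)",
    "",
    "New inquiry:",
    newMessage,
  ].join("\n");
}
```

The resulting string is what you would pass into the AI Agent node's prompt input.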
4. Save New Interactions Back to Memory
- After AI answers, route data through a Write or Insert node to store the new Q&A for future recall.
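The write-back step is just an insert of the new Q&A (plus its embedding, if you use similarity search). A minimal sketch, where `store` stands in for your database or vector-store client:

```javascript
// Persist the new Q&A after the agent answers, so it can be
// retrieved as context for future inquiries.
function saveInteraction(store, question, answer, embedding) {
  store.push({
    question,
    answer,
    embedding,            // vector for later similarity search (optional)
    createdAt: Date.now() // timestamp for recency-based filtering
  });
  return store.length;    // number of records now stored
}
```

With a real database, this push becomes an Insert/Write node call, but the data you persist is the same.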
Have a look at this tutorial from the n8n Team and see if that helps.