I’ve searched through the forum but haven’t found anyone facing quite the same issue—so I suspect I’m missing something obvious.
I’m trying to build a chat workflow where a Chat Trigger node is connected to an Information Extractor node, which is set up to extract a person’s name from the message.
Example:
Human: Where does John Doe live?
Information Extractor: Person name: John Doe
Human: What is his mobile number?
Information Extractor: Person name: Unknown
As you can see, the extractor doesn’t retain context from the previous message. I’ve tried adding a Memory node (both Simple Memory and Postgres Chat Memory) connected to the Chat Trigger, but the Information Extractor still doesn’t receive prior messages in the session.
Is there a specific way to pass chat history or session memory to the Information Extractor? Am I missing a key node or configuration?
Any help would be much appreciated!
Information on your n8n setup
n8n version: I'm running on n8n Cloud, so presumably 1.97.1?
To zoom out a bit, my use case is to build a conversational AI agent that can answer questions based on information stored in a MySQL database. At the moment, I’m focusing on just three aspects of the entities in the database: notes, feedback, and experience.
Here’s an example of the kind of conversation I’m aiming for:
Human: Can you summarise the notes we have on John Doe?
AI Agent: Here is the summary of the notes we have on John Doe: some notes
Human: What feedback do we have on him?
AI Agent: Here is the feedback we have on him: some feedback
Human: How many hours has Mary logged teaching Maths?
AI Agent: Mary has logged 100 hours teaching Maths.
As you can see, the agent needs to maintain context between messages, so that when a user refers to “him” or “her” in a follow-up question, it correctly resolves the pronoun using prior conversation history.
That’s why I’m trying to understand how to persist chat memory in n8n across multiple messages in a session. I suspect I might be missing a key node or a particular setup to make that happen.
Happy to share more details or workflow if helpful!
Thanks for zooming out for me; now I totally understand your problem.
It's actually a much simpler thing to achieve (thanks to n8n for making it that easy).
I have created a sample workflow for you that accomplishes the task you described…
So when the user sends a question, it will work like this:
[I generated this example using AI to make it easier to understand.]
Example Conversation + AI Behavior Breakdown
1. User Message:
Human: Can you summarise the notes we have on John Doe?
AI Agent’s Internal Thought Process:
Intent: Fetch and summarize notes for a person named “John Doe”.
Entity: Name = John Doe
SQL Generation:
SELECT note FROM notes WHERE person_name = 'John Doe';
(Assume this returns 3 notes):
“John is punctual and communicates clearly.”
“Needs improvement in time management.”
“Great engagement during group discussions.”
Summary Response:
AI Agent:
Here is the summary of the notes we have on John Doe:
John is punctual and communicates clearly. He actively participates in group discussions, though there is some feedback about improving time management.
2. User Message:
Human: What feedback do we have on him?
AI Agent’s Internal Thought Process:
Intent: Retrieve feedback on the last person mentioned → “John Doe”
SQL Generation:
SELECT feedback FROM feedback WHERE person_name = 'John Doe';
(Assume this returns):
“Students find his sessions helpful.”
“Some concerns about responsiveness to emails.”
Summary Response:
AI Agent:
Here is the feedback we have on John Doe:
Students generally find his sessions helpful, although there were a few concerns regarding email responsiveness.
And so on; the conversation continues in the same way…
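The pronoun resolution in step 2 works because the memory node feeds prior turns back into the agent's prompt, so “him” has an antecedent. Here is a minimal plain-Python sketch of that idea (the `build_prompt` helper and the window size are illustrative, not n8n internals):

```python
# Sketch of why attaching a memory node resolves pronouns:
# the agent's prompt includes prior turns, so "him" has an antecedent.
history: list[tuple[str, str]] = []

def build_prompt(user_message: str, window: int = 10) -> str:
    """Concatenate the last few turns ahead of the new question,
    mirroring what an n8n memory node does for the AI Agent."""
    lines = [f"{role}: {text}" for role, text in history[-window:]]
    lines.append(f"Human: {user_message}")
    return "\n".join(lines)

history.append(("Human", "Can you summarise the notes we have on John Doe?"))
history.append(("AI", "Here is the summary of the notes we have on John Doe: ..."))

# The new question alone is ambiguous; with history, "him" is John Doe.
prompt = build_prompt("What feedback do we have on him?")
print(prompt)
```

Without the history lines, the model has no way to know who “him” is, which is exactly the failure the Information Extractor showed on its own.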
Right now, for demo purposes, I have the agent write the whole query itself, but that isn't secure: if a user asks to delete a table, the AI agent may well run that too.
To address this, you can hard-code the static parts of the query (the SELECT, the table name, etc.) and let the AI supply only the dynamic values.
You should also provide the database schema to the AI Agent up front, so that it can generate appropriate queries against it.
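The static-plus-dynamic split can be sketched like this. This is a minimal Python example using an in-memory sqlite3 stand-in for the MySQL database, with a hypothetical `notes` table; the key point is that the model only ever supplies a value, never SQL, so a prompt like “delete the table” can't change what the query does:

```python
import sqlite3

# Hypothetical in-memory schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (person_name TEXT, note TEXT)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [
        ("John Doe", "John is punctual and communicates clearly."),
        ("John Doe", "Needs improvement in time management."),
        ("Mary", "Great engagement during group discussions."),
    ],
)

def fetch_notes(person_name: str) -> list[str]:
    """Static query skeleton; only the extracted name is dynamic.

    The AI agent supplies person_name as plain data, bound as a
    parameter, so it cannot turn this into a DELETE or DROP.
    """
    rows = conn.execute(
        "SELECT note FROM notes WHERE person_name = ?",  # fixed, read-only
        (person_name,),  # the only AI-provided value
    ).fetchall()
    return [note for (note,) in rows]

print(fetch_notes("John Doe"))
```

In the n8n workflow the equivalent is to hard-code the query in the database node and bind only the extracted name into it, rather than executing whatever SQL the model produces.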
Yup, that's it, my friend. Hope this helps… I have to stop writing now! LOL
Thank you so much for taking the time to not only explain but also build and share the sample workflow with me. It really helped to see everything configured step by step.
I have to admit, I was initially quite skeptical about whether the AI model would actually be able to convert user requests into SQL on its own, but I was pleasantly surprised: it worked really well! I think what made the difference was the system message you provided in the AI agent, which gave it the right guardrails. Plus, the fact that you shared the actual workflow via the n8n community plugin made it super easy to follow and replicate. Big win there!
What also clicked for me was how the memory node attached to the AI agent helps it remember previous messages; that really unlocked the conversational flow I was aiming for.
Quick question, though: sometimes the AI agent skips generating the SQL query and responds straight away, almost as if it's hallucinating the answer without querying the database. Is there a way to enforce, or at least nudge, the agent to always generate and run a SQL query before answering? Would love to hear if you've come across this or have any tips for handling it.
Happy to hear that it worked just the way you thought it would, my friend.
Yes, to reduce hallucination you can try a couple of things:
First: play with the LLM's additional options like temperature, top_p, etc. [I'll share the video at the end for this; you'll understand what needs to be done.]
For example, decreasing the temperature makes the AI agent's output more deterministic, which will stop the hallucination or at least reduce it…
Second: you can also try the Think Tool node; see whether it works for your use case…
I believe that experimenting with the LLM options usually solves the hallucination problem, so go ahead…
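To make the temperature point concrete: sampling temperature rescales the model's token scores before they are turned into probabilities, so lowering it concentrates probability mass on the top token. A minimal sketch with made-up logits (illustrative numbers only, not tied to any particular model):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw logits to sampling probabilities, scaled by temperature.

    Lower temperature -> sharper, more deterministic distribution;
    higher temperature -> flatter, more random distribution.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

hot = softmax_with_temperature(logits, temperature=1.5)
cold = softmax_with_temperature(logits, temperature=0.2)

# At low temperature the top token dominates, so the agent's
# behaviour (e.g. "always write the SQL query") becomes more consistent.
print(round(hot[0], 3), round(cold[0], 3))
```

This is why turning the temperature down in the model's options makes the agent less likely to wander off and answer without querying: the high-probability path wins almost every time.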