I’m facing an issue while building an LLM workflow in n8n.
Context:
I am using:
Supabase (cloud Postgres) to store chat memory via Postgres Chat Memory.
Postgres Tool.
AI Agent (LangChain) for handling user messages.
Problem:
The AI Agent keeps replying with the same message every time, as if it does not see the chat history. It seems like it is not reading data from Postgres Chat Memory, although the data is being recorded correctly in Supabase.
What I tried:
Reconnected Supabase credentials.
Changed Session ID keys.
Verified that data is correctly saved in the Supabase memory table.
Tried using Postgres Tool and Vector Store in combination, but got the same result.
Questions:
How can I correctly connect Postgres Chat Memory (Supabase) to AI Agent so it actually reads the chat history and considers previous messages?
Do I need to set any specific parameters or call memory in a certain way in the AI Agent to load history properly?
I would greatly appreciate any help in investigating this issue, as it’s important for implementing reliable long-term memory in a production assistant.
Hey @nsanovdias06, hope you are doing well.
I faced the same issue once, and adding details to the memory and tool nodes, along with optimizing the system prompt, worked for me.
Give it a try and let me know.
I’ve been trying to solve this issue for several days now.
I tried specifying in the system message and in the prompt that the agent should use the tool to retrieve memory, but it only works inconsistently. I also instructed in the prompt that it must always use memory, but could not get stable results. Later, I tried connecting it to Zep; there it was enough to attach the memory in the AI Agent node, and it started working, which seems strange.
If it’s not too much trouble, could you please share working examples of flows where memory is functioning reliably with the AI Agent and Postgres/Supabase Chat Memory?
I would like to understand a reliable pattern for connecting memory and prompting to achieve stable memory usage in production.
I set the user's Telegram chat ID as the sessionKey in the Postgres Chat Memory node, so it stays consistent for that user across sessions.
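As a side note, a stable per-user key can also be derived in a Code node before the memory node. A minimal sketch, assuming the standard Telegram Bot API update shape (`message.chat.id`); the helper name is mine:

```javascript
// Sketch: derive a stable session key from a Telegram update payload.
// Assumes the standard Telegram Bot API update shape: { message: { chat: { id } } }.
function sessionKeyFromUpdate(update) {
  const chatId = update?.message?.chat?.id;
  if (chatId === undefined) {
    throw new Error("No chat id in update");
  }
  // Prefix the key so Telegram sessions cannot collide with other channels.
  return `tg-${chatId}`;
}

const update = { message: { chat: { id: 123456789 }, text: "hi" } };
console.log(sessionKeyFromUpdate(update)); // "tg-123456789"
```

The same value must reach the memory node on every execution, or each run starts a fresh history.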
The Postgres Chat Memory is connected to the AI Agent via ai_memory. I also connected the OpenAI Chat Model via ai_languageModel. The flow sends the agent’s response back to the user in Telegram.
Despite this setup, the AI Agent still often ignores the memory and responds with the same message repeatedly, as if it is not reading the memory context properly, even though the records are saved in Supabase.
Could you please check if this structure looks correct to you, or if there’s something I’m missing in connecting the memory with the agent to ensure it always reads memory before responding?
I see you have added very large instructions in the user prompt, which I have not seen done before.
You should only pass the user message there, and make the system prompt something like:
You are a friendly admin of the "CyberFox" computer club. Always return STRICT JSON in the format:
{
  "action": "booking | cancellation | question | other | extension",
  "text": "",
  "data": {
    "date": "",
    "start_time": "",
    "end_time": "",
    "pc_type": "",
    "name": ""
  }
}
Use conversation history to track booking details and continue from the last step. For bookings:
Calculate date (YYYY-MM-DD) for “today” or “tomorrow” (Europe/Moscow timezone).
Convert start_time to HH:mm.
Calculate end_time as start_time + duration (e.g., “3 hours”).
Set pc_type from user input (e.g., “regular PC”).
Set name from user input.
Include the data object only if ALL fields (date, start_time, end_time, pc_type, name) are complete. If any field is missing, do not include the data object and ask for the specific missing fields (e.g., "Please provide your name and session duration").
For price/service questions, respond: "Regular PC: 100 RUB/hour, RTX 4080 PC: 150 RUB/hour, VIP booth: 250 RUB/hour. You can book a time slot anytime!"
Respond conversationally and end with: "Is there anything else you need?"
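Since the prompt demands strict JSON, it also helps to validate the agent's reply before acting on it. A sketch under the field names from the prompt above (the helper name is mine, not an n8n API):

```javascript
// Sketch: validate the agent's strict-JSON reply before acting on it.
// Field names mirror the system prompt's schema; the helper name is hypothetical.
const REQUIRED = ["date", "start_time", "end_time", "pc_type", "name"];

function parseAgentReply(raw) {
  const reply = JSON.parse(raw); // throws if the model returned non-JSON
  if (reply.action === "booking" && reply.data) {
    const missing = REQUIRED.filter((f) => !reply.data[f]);
    if (missing.length > 0) {
      throw new Error(`Incomplete booking, missing: ${missing.join(", ")}`);
    }
  }
  return reply;
}

const raw = JSON.stringify({
  action: "booking",
  text: "Booked!",
  data: {
    date: "2025-07-16",
    start_time: "14:00",
    end_time: "17:00",
    pc_type: "regular PC",
    name: "Bob",
  },
});
console.log(parseAgentReply(raw).action); // "booking"
```

Running this in a Code node between the agent and the Telegram reply catches malformed or incomplete bookings early instead of sending them to the user.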
Also check in the database that memory is being saved correctly, and try reducing the Context Window Length.
The database table consists only of the fields id, session_id, and message; this is the default n8n table.
In the session_id field I pass the Telegram chat_id.
Here is an example of the message column content:
{
"type": "human",
"content": "You are the administrator of the CyberFox club. Respond only in JSON:\n{\n \"action\": \"booking|question|other\",\n \"text\": \"reply to the user\",\n \"data\": {\n \"date\": \"YYYY-MM-DD\",\n \"start_time\": \"HH:mm\",\n \"end_time\": \"HH:mm\",\n \"pc_type\": \"PC type\",\n \"name\": \"name\"\n }\n}\n\nIMPORTANT:\n- Include the \"data\" object ONLY if all five fields are provided\n- If name, time, or date is missing — request it in the \"text\" field\n- \"tomorrow\" = 2025-07-16, \"today\" = 2025-07-15\n\nPrices: Standard PC — ₽100/h, RTX 4080 — ₽150/h, VIP — ₽250/h",
"additional_kwargs": {},
"response_metadata": {}
}
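Note that the stored "human" row above contains the entire system prompt, not the user's actual message. Once the instructions live in the agent's System Message field (as suggested earlier in the thread), each human row should carry only the user's text. A toy illustration of the expected shape, mirroring the stored JSON above (the helper name is mine):

```javascript
// Sketch: the shape a memory row should take once the system prompt is moved
// out of the user message. Mirrors the stored JSON structure shown above.
function humanRow(userText) {
  return {
    type: "human",
    content: userText, // only the user's words, not the full instructions
    additional_kwargs: {},
    response_metadata: {},
  };
}

console.log(JSON.stringify(humanRow("Book a regular PC for tomorrow at 14:00")));
```

With rows like this, the history the agent reads back is actual conversation turns rather than repeated copies of the prompt.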
Unfortunately, the memory issue was never resolved. After numerous attempts and experiments, it became clear that the problem most likely lies specifically in the Telegram + PostgreSQL combination.
Problematic scenario:
Telegram + PostgreSQL = memory doesn’t work
Bot “forgets” previous responses every time
SessionKey is configured correctly, but data is not saved
The issue isn't with your workflow or the AI Agent. It's just that the session key is changing every time you execute the workflow. Assume the following:
123 is our current session key.
456 is the new session key after executing again.
Initially, what is saved in the database is that, for session key "123", the user's name is Bob. When you re-execute, the memory node goes to the database and queries: what is the name for session key "456"? It's not there.
In the Postgres Chat Memory node, instead of pointing the session ID to the trigger, simply set it to a static session key manually; this way it will always read and save with that ID.
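The mismatch can be illustrated with a toy in-memory store keyed the same way Postgres Chat Memory keys its rows (all names here are mine, for illustration only):

```javascript
// Sketch: why a changing session key makes the bot "forget".
// A toy in-memory store keyed the way Postgres Chat Memory keys its rows.
const store = new Map();

function save(sessionKey, message) {
  const history = store.get(sessionKey) ?? [];
  history.push(message);
  store.set(sessionKey, history);
}

function load(sessionKey) {
  return store.get(sessionKey) ?? []; // unknown key -> empty history
}

save("123", "My name is Bob"); // first execution saves under key "123"
console.log(load("456"));      // re-execution with a new key: []
console.log(load("123"));      // the same key again: ["My name is Bob"]
```

A static key fixes the symptom, but for a multi-user Telegram bot the key should instead be stable per user (e.g., derived from the chat ID), or every user would share one history.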