Hi,
I’m trying to build a chatbot with persistent memory. I’m using both Redis (for fetching messages fast) and Postgres (for persistent storage).
What I’m trying to achieve is:
- User sends a message from the webhook.
- Workflow tries to fetch the session from redis, and if it’s expired, tries to get it from Postgres and loads all messages of the user on redis.
- The user chats with the agent that saves the messages both on redis and postgres.
The problem I’m having:
I really can’t find a way to do this. I’ve tried different approaches, like the Chat Memory Manager and pushing to a Redis list, but I’m probably getting something wrong and I can’t understand what.
Thanks in advance.
"Hey @AndraoCarloIntergea! This is a pro-level architecture, but the reason you’re hitting a wall is that n8n’s AI Memory nodes aren’t built to ‘fail over’ from Redis to Postgres internally.
To make this work, you need to handle the ‘Hydration’ phase (loading the cache) before the data reaches the AI Agent. Here is the best way to structure it:
1. The Hydration Logic (Pre-Agent)
Don’t let the Agent manage the backup; do it manually at the start of the workflow:
- Check Redis: Use a Redis node to GET the session.
- IF node: Check whether the Redis result is empty.
- Fetch Postgres: If it is empty, query your Postgres node for the last 10–20 messages for that chat_id.
- Seed Redis: Use another Redis node to SET or LPUSH those Postgres results back into Redis. Now your cache is ‘warm.’
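The hydration steps above can be sketched in plain JavaScript. This is only an illustration of the control flow, not an n8n node: the two Maps stand in for Redis and Postgres, and `hydrateSession` is a hypothetical helper name.

```javascript
// Stand-ins for the two stores: Redis (cache) and Postgres (long-term).
const redis = new Map();     // sessionId -> array of messages (the cache)
const postgres = new Map();  // sessionId -> full message history

function hydrateSession(sessionId) {
  // 1. Check Redis: is the session already cached?
  const cached = redis.get(sessionId);
  if (cached && cached.length > 0) return cached;

  // 2. Cache miss: fetch the most recent messages from Postgres...
  const history = (postgres.get(sessionId) || []).slice(-20);

  // 3. ...and seed Redis so the cache is "warm" before the Agent runs.
  redis.set(sessionId, history);
  return history;
}
```

After this runs once for a session, every later execution hits the cache and never touches Postgres on the read path.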
2. The Agent Execution
Now that you’ve guaranteed the data is in Redis:
- Use the AI Agent node.
- Attach a Window Buffer Memory node and set its backend to Redis.
- Because of Step 1, the Agent will now consistently find the history in Redis.
3. The Permanent Save (Post-Agent)
- After the Agent node, add one final Postgres Node to append the latest user message and the AI’s response to your long-term table.
Pro Tip: When fetching from Postgres to Redis, make sure you format the data into the specific JSON structure the AI Memory node expects (usually a list of user and ai roles), otherwise the Agent might ignore the history.
Hope this helps you get the persistent memory running smoothly!"
Thanks to your advice, after a few tries I finally managed to do it.
I’m leaving an explanation here that I hope may help others.
The way n8n agents save messages is by storing them in a specific format based on LangChain JS.
You can see it by opening the Redis node connected to an AI Agent:
If you want the agent to understand the messages you store in Redis, you have to store them in this same format.
I managed to do so by using this structure:
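For reference, each Redis list entry is a JSON string shaped roughly like this (the serialized LangChain JS message format; field names can vary between versions, so compare against what your own Redis node shows):

```javascript
// One Redis list entry: a serialized LangChain JS chat message.
// "type" is "human" for user messages and "ai" for assistant replies.
const entry = JSON.stringify({
  type: "human",
  data: {
    content: "Hello, who are you?",
    additional_kwargs: {},   // extra payload (e.g. tool calls) when present
    response_metadata: {},
  },
});
```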

You retrieve the messages from your database, convert the query output into the LangChain JS format, and then push them into a Redis list (you can’t use a Set, because agents read the messages from a list).
This is the Code node I made using Claude (it may differ slightly based on your flow structure):
// Convert Postgres rows into serialized LangChain JS messages,
// one output item per row, ready to be pushed onto the Redis list.
const rows = $input.all();

return rows.map(item => ({
  json: {
    // Each list entry must be a JSON string with a "type" and a "data"
    // payload, matching what the memory node writes on its own.
    message: JSON.stringify({
      type: item.json.role, // must match LangChain types, e.g. "human" / "ai"
      data: {
        content: item.json.content,
        additional_kwargs: {},
        response_metadata: {}
      }
    }),
    session_token: item.json.session_token
  }
}));
By doing this, you can use the messages stored in Postgres to feed your AI agent through the Redis Chat Memory node.
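One ordering detail worth checking when you seed the list: LPUSH prepends while RPUSH appends, so the order in which you push your Postgres rows determines whether the history comes out oldest-first or newest-first. Check which end your memory node reads from. A stand-in sketch with plain arrays:

```javascript
// Redis lists are ordered; the command you pick decides the order.
// Arrays stand in for the list: unshift ~ LPUSH, push ~ RPUSH.
const lpushList = [];
const rpushList = [];
const rows = ["msg1", "msg2", "msg3"]; // oldest to newest, from Postgres

for (const msg of rows) {
  lpushList.unshift(msg); // LPUSH: the newest message ends up at the head
  rpushList.push(msg);    // RPUSH: the oldest message stays at the head
}
// lpushList: ["msg3", "msg2", "msg1"]
// rpushList: ["msg1", "msg2", "msg3"]
```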