This isn’t a bug in PG Memory; it’s a context-scoping issue.
Your PG Memory is mixing conversations because all executions are writing to / reading from the same memory scope. Without a unique identifier, PG Memory treats every run as the same conversation.
Why it happens
- No unique `session_id` / `conversation_id` / `user_id`
- PG Memory retrieves embeddings globally instead of per conversation
- Result: unrelated past messages leak into the current context
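A minimal sketch of the leak (hypothetical data, not the node's actual storage format): an unscoped read returns every past message, while a read filtered by a conversation key returns only that thread.

```python
# Two messages from two different WhatsApp conversations
memory = [
    {"session_id": "wa-111", "text": "Order #42 status?"},
    {"session_id": "wa-222", "text": "Cancel my subscription"},
]

def read_global():
    # Unscoped read: every past message, regardless of sender
    return [m["text"] for m in memory]

def read_scoped(session_id):
    # Scoped read: only this conversation's history
    return [m["text"] for m in memory if m["session_id"] == session_id]

print(read_global())          # both conversations leak into the context
print(read_scoped("wa-111"))  # only the matching conversation
```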
How to fix it
- Generate or pass a unique conversation key (e.g. WhatsApp number, user ID, thread ID)
- Store it with every memory write
- Filter memory reads by that same key
Example approach:
- Use a Set node before PG Memory:
  `conversation_id = {{$json.from || $execution.id}}`
- Configure PG Memory to store and query by that ID (metadata / namespace)
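The write/read pattern above can be sketched like this. Note this is an illustration, not the PG Memory node's internal code: sqlite3 stands in for Postgres, and the table and column names (`chat_memory`, `session_id`, `message`) are assumptions.

```python
import sqlite3

# In-memory SQLite as a stand-in for the Postgres table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chat_memory (session_id TEXT, message TEXT)")

def write_memory(session_id, message):
    # Every write carries the conversation key
    db.execute("INSERT INTO chat_memory VALUES (?, ?)", (session_id, message))

def read_memory(session_id):
    # Every read filters by the same key
    rows = db.execute(
        "SELECT message FROM chat_memory WHERE session_id = ?", (session_id,)
    ).fetchall()
    return [r[0] for r in rows]

write_memory("wa-555", "Hi, I ordered yesterday")
write_memory("wa-777", "Different customer, different thread")
print(read_memory("wa-555"))  # only wa-555's history comes back
```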
Once memory is scoped per conversation, the context will stop getting mixed up.
This is expected behavior if memory isn’t partitioned; PG Memory is doing exactly what it’s told to do.
Thanks for sharing this; that screenshot helps a lot.
You’re actually very close. The problem isn’t that the `session_id` is wrong; it’s how the memory is being written and then queried, which can still cause mixed results even when the same phone number is used.
I can help you untangle this.
What I’d like to check next (and can walk you through step by step):
- Exactly where the `session_id` is first created in the workflow
- Whether every memory write includes that same `session_id` (no missing or fallback values)
- The ORDER and LIMIT logic in the Postgres query (this is often where older messages from other runs sneak in)
- Whether multiple workflows are writing to the same table using the same structure
Once those are aligned, the Postgres node will reliably return only the correct conversation history for that phone number.
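To illustrate the ORDER/LIMIT pitfall from the checklist: if the query applies `ORDER BY … LIMIT` without filtering by `session_id` first, the "last N messages" are the last N across *all* conversations. A hypothetical sketch (sqlite3 standing in for Postgres, illustrative schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE chat_memory (id INTEGER PRIMARY KEY, session_id TEXT, message TEXT)"
)
for sid, msg in [
    ("wa-111", "old message A"),
    ("wa-222", "other thread B"),
    ("wa-111", "new message C"),
]:
    db.execute("INSERT INTO chat_memory (session_id, message) VALUES (?, ?)", (sid, msg))

# Buggy: LIMIT without a session filter returns the newest rows globally,
# so another conversation's message sneaks into the history
buggy = db.execute(
    "SELECT message FROM chat_memory ORDER BY id DESC LIMIT 2"
).fetchall()

# Correct: filter by session first, then order and limit
fixed = db.execute(
    "SELECT message FROM chat_memory WHERE session_id = ? ORDER BY id DESC LIMIT 2",
    ("wa-111",),
).fetchall()

print([r[0] for r in buggy])  # includes "other thread B"
print([r[0] for r in fixed])  # only wa-111's messages, newest first
```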
If you’re open to it, feel free to share the workflow (or just the Postgres node + the node before it), and I’ll help you pinpoint exactly what needs adjusting.