Persisting Session Variables in n8n Workflows for AI Agents Using Postgres Memory

Describe the problem/error/question

I’m building an AI Agent that leverages Postgres for memory storage with a context window of 15 messages. The Agent executes SQL queries on a Postgres database containing a table for products and another for additional product information. In a typical session, when a user says, “Find me a nice phone,” the Agent retrieves a product and displays its details. However, when the user follows up with “I like it. Give me additional information,” the Agent loses the product ID it previously retrieved, which prevents it from fetching the related additional information. I’m looking for best practices to store session variables (such as product IDs) in such scenarios to maintain context across messages.

What is the error message (if any)?

There is no explicit error message; the issue is that the product ID is lost between the steps in the conversation, leading to incomplete data retrieval.

Share the output returned by the last node

The last node output does not include the expected product ID, which results in the Agent being unable to query the additional product information from the database.

Information on your n8n setup

  • n8n version: 1.82.3
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux (Docker container)

That’s interesting. I’ve thought about this behavior before, but never gave it enough attention to actually try a solution.

I’ll give some suggestions, but I’d love to hear more from the community:

1. A tool to store retrieved data alongside the memory

Assuming you are using external storage for the memory, like Redis or Postgres, you can manipulate the entries to add more information.

The AI can have instructions to always store the retrieved information before replying to the user.

Alternatively, you can ask it to store a summary containing the key pieces of information that are most closely related to the question the user asked. That way you store less data and the prompt stays shorter.
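To make the first suggestion concrete, here is a minimal sketch of what a Code node feeding the Postgres node could look like. The `session_vars` table, its columns, and the function name are my own assumptions, not part of any n8n node; adjust them to your schema.

```javascript
// Sketch of persisting a session variable (e.g. a product ID) next to
// the chat memory. Assumes a table you create yourself, for example:
//   CREATE TABLE session_vars (
//     session_id TEXT, key TEXT, value TEXT,
//     PRIMARY KEY (session_id, key)
//   );

// Builds a parameterised UPSERT you could run with the Postgres node.
function buildSessionUpsert(sessionId, key, value) {
  return {
    query:
      'INSERT INTO session_vars (session_id, key, value) ' +
      'VALUES ($1, $2, $3) ' +
      'ON CONFLICT (session_id, key) DO UPDATE SET value = EXCLUDED.value',
    params: [sessionId, key, String(value)],
  };
}

// Example: remember the product the agent just retrieved.
const stmt = buildSessionUpsert('chat-42', 'product_id', 1337);
```

On the next turn you would read the row back for the same `session_id` and inject it into the agent's system prompt, so "give me additional information" can resolve to the stored product ID.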

2. Use a temporary storage

Instead of using the same long-term session storage, you can use Redis to temporarily store data that was retrieved during a session.

You can then always pass that information to the AI context (prompt).

This is very similar to the first solution. The only difference is that you would store the data separately and only temporarily, keeping the chat-sessions dataset a bit cleaner.
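The temporary-storage idea boils down to a key-value entry with an expiry. This in-memory sketch only illustrates the pattern; with the n8n Redis node you would do the equivalent of `SET session:<id>:product_id <value> EX 3600`. The key naming and the one-hour TTL are assumptions, not anything n8n prescribes.

```javascript
// In-memory stand-in for Redis SET ... EX / GET, to show the pattern.
class SessionStore {
  constructor() { this.entries = new Map(); }

  // Store a value that expires after ttlSeconds (like Redis SET ... EX).
  set(sessionId, key, value, ttlSeconds) {
    this.entries.set(`${sessionId}:${key}`, {
      value,
      expiresAt: Date.now() + ttlSeconds * 1000,
    });
  }

  // Returns null when the key is missing or expired (like a Redis miss).
  get(sessionId, key) {
    const entry = this.entries.get(`${sessionId}:${key}`);
    if (!entry || entry.expiresAt <= Date.now()) return null;
    return entry.value;
  }
}

// The value read here would be appended to the AI context (prompt).
const store = new SessionStore();
store.set('chat-42', 'product_id', '1337', 3600);
```

Because the entry expires on its own, stale product IDs disappear without any cleanup logic in the workflow.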

👉 If my reply answers your question please remember to mark it as the solution

Great ideas! I was thinking about either an improved memory node or a new connector type for the AI Agent node.

Improved Memory Node: For instance, the Postgres Chat Memory currently only lets us specify the table to persist chats. It would be nice if we could define a session object that stores various information between messages.

New Connector Type: Right now, we have Chat Model, Memory, and Tool connectors. If there were a Session connector where we could attach a tool to manage the session object, that could offer even more flexibility.


Oh, in that case you can use the Call n8n Workflow Tool node:

If you turn your tool into a sub-workflow and call it with "Call n8n Workflow Tool", you can build logic inside it to store the tool output as part of the chat session.

Another option would be to create a sub-workflow called "Store tool information", attach it as a tool, and instruct the AI Agent to always use that tool to store short, relevant pieces of information for later reference during the chat session.

For example:
"use this tool to store small pieces of information extracted from other tools, like product_id, so if the user asks about it again, you already know which product they're talking about"
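Inside that "Store tool information" sub-workflow, the core step is just merging whatever key/value pairs the agent passes in into one session object that later turns can read back. This is a sketch under my own assumptions about the input shape; it is not an n8n API, just the merge logic you would put in a Code node.

```javascript
// Merge new facts into the existing session object; the newest value
// for a key wins. In the sub-workflow you would load `existing` from
// your session storage first and save the result back afterwards.
function mergeSessionFacts(existing, incoming) {
  return { ...existing, ...incoming };
}

// Simulated conversation: the agent stores a product_id after the first
// tool call, then adds a note after the follow-up question.
let session = {};
session = mergeSessionFacts(session, { product_id: '1337' });
session = mergeSessionFacts(session, { note: 'user liked this phone' });
```

When the user later says "give me additional information", the agent reads `session.product_id` back instead of having to re-derive it from the chat history.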

Would that solve your issue?
