How to best use memory with AI Agent Tools to make a workflow more reliable?

So I have a workflow that runs daily, consisting of a main AI Agent with several sub-agents connected as AI Agent Tools. The problem is that sometimes the sub-agents can't complete their tasks, but most of the time simply reusing yesterday's result would work fine. So I want to add memory, but I'm unclear on how memory actually works (session IDs, etc.). Has anyone used memory this way? Could you give me a recipe?

Thanks.

To make your AI Agent workflow more reliable using memory:

  1. Use Persistent Memory

    • Store results from each run in a database or file storage.

    • Sub-agents can then look up previous results if they fail or their task is repetitive.

  2. Session / Run ID

    • Assign a unique session or run ID for each daily workflow.

    • Use this ID to link memory entries so sub-agents know which data belongs to which run.

  3. Memory Recipe

    • Step 1: After the main agent completes, save its outputs in memory (a database, a JSON file, or a Google Sheet via n8n).

    • Step 2: Each sub-agent first checks memory for relevant data using the session ID.

    • Step 3: Only if memory is missing or outdated, execute the task.

    • Step 4: Update memory with new results.

  4. Benefits

    • Reduces redundant computation.

    • Handles temporary failures gracefully.

    • Enables reproducibility of results.

  5. Tips

    • Include a timestamp or validity period on each memory entry so agents don’t unintentionally use stale data.

    • Ensure memory access is thread-safe if multiple agents run in parallel.
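The check-before-execute recipe above can be sketched in a few lines of JavaScript. This is a minimal sketch with a hypothetical in-memory `Map` standing in for your database or Sheet, and `runTask` as a placeholder for a sub-agent's actual work:

```javascript
// Hypothetical in-memory store standing in for a DB / Google Sheet.
const memory = new Map();
const TTL_MS = 24 * 60 * 60 * 1000; // treat entries older than a day as stale

// Session ID derived from the calendar date, e.g. "2024-05-01".
function sessionId(date = new Date()) {
  return date.toISOString().slice(0, 10);
}

// Step 2–4 of the recipe: check memory first, only run the task if the
// entry is missing or stale, then write the fresh result back.
async function getOrRun(key, runTask) {
  const entry = memory.get(key);
  if (entry && Date.now() - entry.savedAt < TTL_MS) {
    return entry.value; // fresh enough: reuse the stored result
  }
  const value = await runTask(); // memory missing or outdated: do the work
  memory.set(key, { value, savedAt: Date.now() });
  return value;
}
```

In a real workflow, `memory.get`/`memory.set` would be replaced by reads and writes against your persistent store, keyed by the session ID.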

Use Postgres chat memory. All agents within the execution will share the same memory, even if each has its own node connected.

At the beginning of your flow, generate a unique session ID (it can be based on today’s date), which you will later pass to the memory node. At the end of the flow, store the output and the session ID in a new row in a Google Sheet.
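Generating the date-based session ID could look like this in a Code node — a sketch, assuming your memory node accepts an expression pointing at a `sessionId` field (the `daily-run-` prefix is just an illustrative naming choice):

```javascript
// Hypothetical helper: one session ID per calendar day, shared by the
// memory node and the Google Sheet row written at the end of the flow.
function dailySessionId(date = new Date()) {
  // YYYY-MM-DD (UTC) keeps exactly one session per day
  return `daily-run-${date.toISOString().slice(0, 10)}`;
}

// In an n8n Code node you would then emit it as an item, e.g.:
// return [{ json: { sessionId: dailySessionId() } }];
```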

The AI agents will have a Google Sheets Get Row(s) tool connected and will be instructed to ALWAYS query for yesterday’s result before outputting anything.
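The "always look up yesterday first" step amounts to computing yesterday's session ID and matching it against the rows the Sheets tool returns. A sketch, assuming the sheet has `sessionId` and `output` columns (both names are assumptions about your schema):

```javascript
// Yesterday's session ID, using the same date-based scheme as the flow.
function yesterdaySessionId(now = new Date()) {
  const y = new Date(now.getTime() - 24 * 60 * 60 * 1000);
  return `daily-run-${y.toISOString().slice(0, 10)}`;
}

// Find yesterday's stored output among rows fetched from the sheet.
// Returns null when there is no fallback, so the agent runs its task normally.
function findYesterdayResult(rows, now = new Date()) {
  const id = yesterdaySessionId(now);
  const row = rows.find(r => r.sessionId === id);
  return row ? row.output : null;
}
```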

Here is an example flow:

Hope this helps!

Thanks for the interesting workflow. So in general does memory look like a tool to the LLM?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.