Memory persistence issue with tool_calls in the AI Agent node
I’ve noticed inconsistent behavior in the n8n native AI Agent when it interacts with tools.
It seems that the results of tool calls (tool_calls) — including returned values such as IDs — are not properly stored in memory.
In practice, this means the agent “forgets” information that it previously generated during the same conversation.
Real-world example
Here’s a simple example that shows the issue:
The agent executes a tool that creates a new contact in the database and returns a contact ID.
A few messages later, I ask the agent to add an email address to that same contact.
At this point, the agent no longer remembers the ID returned by the previous tool call, as if it never existed.
As a result, the workflow has to perform a new manual lookup to retrieve the ID — which adds unnecessary steps and can cause duplication, data inconsistencies, or even flow errors.
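That manual lookup workaround can be sketched like this. Everything here is illustrative: the contact data, the field names, and the helper are made up, and in a real workflow the array would be a database query (e.g. a Supabase node) keyed on a unique field like the email.

```javascript
// Hypothetical re-lookup of a contact ID the agent "forgot".
// A plain array stands in for the contacts table.
const contacts = [
  { id: "c_123", email: "harry@example.com", name: "Harry Dev" },
];

// Re-fetch the ID by a unique field because the ID returned by the
// earlier create_contact tool call was never persisted in memory.
function lookupContactId(email) {
  const match = contacts.find((c) => c.email === email);
  return match ? match.id : null;
}

const contactId = lookupContactId("harry@example.com");
```

This is exactly the extra step that shouldn't be needed: the ID was already returned by the tool, but since memory drops it, the workflow has to query for it again.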
Side effects
This behavior causes serious issues in any automation that depends on tool state or data continuity, such as:
Updating records that were previously created;
Running multi-step flows where tools depend on each other’s output;
Maintaining conversation context across multiple user messages.
Without persisting the tool call results, the agent loses its logical continuity and cannot reason effectively across messages.
Why this matters
The whole point of having an AI Agent with memory is to maintain context across interactions.
But if the results from its tools — the key actions it performs — aren’t remembered, then the memory feature loses its purpose.
For anyone using stateful tools (like Supabase, external APIs, or CRMs), this becomes a critical limitation.
Even simple workflows break down because the agent no longer has access to data it generated earlier in the conversation.
Curious about the system prompt you have on your AI model.
I'm asking because I have an agent with 10+ tools (each calls a subworkflow), and for testing I use Simple Memory (5 messages).
If I don't chain the actions in the system prompt and specify how the data can be used (present/future + RAG), I observe the same behavior.
Edit:
Enabling the Return Intermediate Steps option in the AI Agent node gives you access to the raw outputs of tool calls, including IDs, names, and other returned values. These intermediate steps are included in the agent’s output as an additional field (usually called intermediateSteps), which you can then process in subsequent nodes.
From there, you can extract the relevant information (such as tool IDs or names) and store it in your own persistent storage (like Supabase, Postgres, or Google Sheets) for later retrieval. You can also use this data to dynamically build a better system prompt or inject context back into the conversation as needed.
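A Code node that does that extraction could look roughly like this. The shape of `intermediateSteps` (an `action` object with `tool`/`toolInput` plus an `observation` string) matches what I see in my own runs, but treat the field names and the sample data as assumptions, not a documented contract.

```javascript
// Sample agent output, as produced with "Return Intermediate Steps"
// enabled (values are made up for illustration).
const agentOutput = {
  output: "Contact created.",
  intermediateSteps: [
    {
      action: { tool: "create_contact", toolInput: { name: "Harry Dev" } },
      observation: JSON.stringify({ id: "c_123" }),
    },
  ],
};

// Collect each tool name together with its raw return value so it can
// be written to persistent storage (Supabase, Postgres, Sheets, ...)
// or injected back into the system prompt on the next turn.
const toolResults = (agentOutput.intermediateSteps || []).map((step) => ({
  tool: step.action.tool,
  result: JSON.parse(step.observation),
}));
```

From there, `toolResults` can be passed to whatever storage node you use, which is the manual persistence layer the agent's own memory currently fails to provide.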
The problem is that the tool’s return values, IDs in particular, are not saved to the memory database. From what I’ve seen, the n8n team is already aware of this. For example, the memory stores the text of the query sent to a tool, but not what the tool returned. Example:
{
  "type": "ai",
  "content": "Great, Harry Dev.! Let's choose a service for you. Here are our options: - Haircut - Girl's Nails - Beard Shave - Foxy Eye Super Master Blaster Lashes - Eyebrows - Which one would you like to schedule?",
  "tool_calls": [],
  "additional_kwargs": {},
  "response_metadata": {},
  "invalid_tool_calls": []
},
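For comparison, a memory entry that actually preserved the tool round-trip would look roughly like this. The field names follow the LangChain message format the node appears to use, and all the values are invented for illustration:

```json
[
  {
    "type": "ai",
    "content": "",
    "tool_calls": [
      { "id": "call_1", "name": "create_contact", "args": { "name": "Harry Dev" } }
    ]
  },
  {
    "type": "tool",
    "tool_call_id": "call_1",
    "content": "{ \"id\": \"c_123\" }"
  }
]
```

If pairs like this were stored, the model would see the returned ID on every later turn instead of an empty `tool_calls` array.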
I use Gemini (2.0 Flash), I toggle Return Intermediate Steps on, and at the end of the workflow run I get the info from the tool calls as well; the results are sent in a Telegram message with all the details. (Now I don’t know why you are not able to save that info, and where, so the model is “aware” next time.)
Edit: and it is not hallucinating at all, since I can check the calendar and the sheet against the info I pull from other APIs.
And yes, if I ask it again to do that, or half of the commands (with Simple Memory), it is aware of the steps done earlier (even with the details of the transcript).