The way the AI Agent node saves messages into memory is flawed when it comes to tool calls.
The tool calls and the tool responses appear not to be saved in the memory as per my testing.
This is a major issue. Here are some real examples:
The LLM sees a past turn where it apparently answered a question without checking a tool (it actually did make a tool call, but that call was never saved to memory).
=> This teaches the model that hallucinating an answer instead of verifying with a tool is acceptable, making it more likely to do exactly that.
If a stateful tool returned important information and the user asks about it one message later, the AI Agent can't answer the question: the tool response is no longer visible, and rerunning the tool may return a different result or trigger side-effect processes again.
And I could name many more examples.
I cannot stress enough how critical it is to have the tool calls and the tool responses saved to memory.
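To make the gap concrete, here is a sketch of what a complete history looks like versus what ends up in memory, using the OpenAI chat-completions message format (other providers use different shapes; the tool name, arguments, and IDs below are made up for illustration):

```python
# What the agent's memory *should* contain after one tool-using turn
# (OpenAI chat-completions format; names and IDs are illustrative).
full_history = [
    {"role": "user", "content": "What is the status of order 1234?"},
    # The assistant decided to call a tool...
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "get_order_status",
                         "arguments": '{"order_id": "1234"}'},
        }],
    },
    # ...and the tool answered.
    {"role": "tool", "tool_call_id": "call_abc123",
     "content": '{"status": "shipped"}'},
    {"role": "assistant", "content": "Order 1234 has shipped."},
]

# What gets saved when only user/assistant text turns are kept:
saved_history = [
    m for m in full_history
    if m["role"] in ("user", "assistant") and m.get("content")
]

# The tool call and the tool result are gone, so a later turn
# cannot see that a tool was ever consulted or what it returned.
assert not any(m["role"] == "tool" for m in saved_history)
assert len(saved_history) == 2
```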
Context
I usually code my AI Agents in Python, but I really like the AI Agent node's simplicity, especially with memory.
This problem, however, makes it unusable for complex, long-lived conversations where the agent has tools.
I would really love for this to change, or at least become an option I can set.
I do understand that this is a bit difficult, because different LLM providers expect different tool JSON structures, but I think it's needed.
Would love to hear any feedback.
I can also help with this if required.
Hey @Merlin_Richter
You have a good point there, but the memory nodes attached directly to the Agent are only there to store chat history (messages between the agent and the user).
For more complex scenarios you are free to enrich your chat history with additional data, and you have a few options for that:
If you are using an external service like Redis, you can take the tool outputs and store them manually using the Redis node
Use the built-in Memory Manager node to enrich your chat history in a similar way
You will also probably want to make sure that all this data is fed back to the agent with every subsequent message.
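The enrichment idea can be sketched outside of n8n like this: after a tool runs, append its input and output to the stored history as an ordinary text message, so whatever memory backend you use (Redis, MongoDB, the built-in one) carries it into the next turn. The helper and message shape here are illustrative, not an n8n API:

```python
import json

def enrich_history(history, tool_name, tool_input, tool_output):
    """Append a tool run to the chat history as a plain text message,
    so a memory store that only keeps user/assistant turns still
    retains it. (Illustrative helper, not part of n8n.)"""
    note = (
        f"[tool:{tool_name}] input={json.dumps(tool_input)} "
        f"output={json.dumps(tool_output)}"
    )
    history.append({"role": "assistant", "content": note})
    return history

history = [{"role": "user", "content": "What is the status of order 1234?"}]
enrich_history(history, "get_order_status",
               {"order_id": "1234"}, {"status": "shipped"})
# The next turn's prompt now contains the tool result as ordinary text,
# at the cost of polluting the visible conversation.
```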
Hi @milorad, just a quick follow-up on this topic, if possible.
What would be the pattern for using the Memory Manager node to store the tool responses? I don't see where in the workflow we can plug it in so that we can retrieve the tool response and insert it into the memory… Maybe I'm missing something here?
@milorad, do you have an example of how to set this up? I am using MongoDB for the memory and have the same issue; I would like to store the tool output back in the history so the agent can keep track as it makes multiple calls.
I became so annoyed by that problem that I created a new AI Agent node that fixes it. It correctly saves Tool and Tool Result messages to the conversation history.
Same issue here. I needed to save the tool calls (input and output) for follow-up checks later in the conversation. Putting it in the chat history is just not pretty or user-friendly, and it is pretty heavy on memory that's not intended for that purpose. Happy to check out the "better agent". Thanks @fjrdomingues
Man, great job. I tried your node, and the issue seems to be caused by the ai-sdk library not supporting the newer model versions. I saw this kind of message in the Docker container logs. So the problem isn't with how the prompt messages are generated, but with model support. Attached is an error snippet from the logs.
Tool call failed (attempt 1/5): APICallError [AI_APICallError]: models/gemini-pro is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.
at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/node_modules/@ai-sdk/provider-utils/src/response-handler.ts:59:16
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at postToApi (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/node_modules/@ai-sdk/provider-utils/src/post-to-api.ts:111:28)
at GoogleGenerativeAILanguageModel.doGenerate (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/src/google-generative-ai-language-model.ts:239:9)
at fn (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/generate-text/generate-text.ts:321:30)
at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/telemetry/record-span.ts:18:22
at _retryWithExponentialBackoff (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/util/retry-with-exponential-backoff.ts:37:12)
at fn (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/generate-text/generate-text.ts:281:32)
at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/telemetry/record-span.ts:18:22
at ExecuteContext.execute (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/dist/BetterAiAgent.node.js:593:34) {