Issue: Tool Calls not being saved to Memory

Issue

The way the AI Agent node saves messages into memory is flawed when it comes to tool calls.
Based on my testing, the tool calls and the tool responses do not appear to be saved in the memory.

This is a major issue. Here are some real examples:

  1. The LLM sees in the history that it apparently answered a previous question without making a tool call, as if it had hallucinated the response (it actually did make a tool call, but that call was not saved to the memory).
    => This will lead the LLM to actually hallucinate a response instead of using a tool to verify.

  2. If a stateful tool returned important information and the user asks about that information one message later, the AI Agent can’t answer the question: the tool response is no longer visible, and rerunning the tool may produce a different response or trigger processes again.

And I could name many more examples.

I cannot stress enough how critical it is to have the tool calls and the tool responses saved to memory.
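
To make the failure mode concrete, here is a rough sketch of the difference (simplified, illustrative message shapes, not n8n’s exact internal format; the tool name is hypothetical):

    // What the simple memory effectively stores today:
    const storedHistory = [
      { role: "user", content: "What is the status of order #123?" },
      { role: "assistant", content: "Order #123 shipped yesterday." },
      // The tool call and tool result that produced this answer are lost.
    ];

    // What the history would need to contain for the agent to know
    // *how* it answered:
    const completeHistory = [
      { role: "user", content: "What is the status of order #123?" },
      {
        role: "assistant",
        tool_calls: [{ id: "call_1", name: "getOrderStatus", args: { orderId: "123" } }],
      },
      { role: "tool", tool_call_id: "call_1", content: "status: shipped" },
      { role: "assistant", content: "Order #123 shipped yesterday." },
    ];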

Context

I usually code my AI Agents in Python, but I really like the AI Agent node’s simplicity, especially with memory.
But this problem makes it unusable in complex, long-lived conversations where the agent has tools.
I would really love for this to change, or to become an option I can set.

I do understand that this is a bit difficult, because different LLM providers expect different JSON structures for tool calls, but I think it’s needed.
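
For example, the shapes differ substantially between providers (abbreviated from the public OpenAI and Anthropic chat APIs):

    // OpenAI-style: tool calls on the assistant message, results as a
    // dedicated "tool" role message with a JSON-string payload.
    const openAiStyle = [
      {
        role: "assistant",
        tool_calls: [
          {
            id: "call_1",
            type: "function",
            function: { name: "get_weather", arguments: '{"city":"Lisbon"}' },
          },
        ],
      },
      { role: "tool", tool_call_id: "call_1", content: '{"tempC":21}' },
    ];

    // Anthropic-style: tool use and tool results as typed content blocks,
    // with the result sent back inside a user message.
    const anthropicStyle = [
      {
        role: "assistant",
        content: [{ type: "tool_use", id: "toolu_1", name: "get_weather", input: { city: "Lisbon" } }],
      },
      {
        role: "user",
        content: [{ type: "tool_result", tool_use_id: "toolu_1", content: '{"tempC":21}' }],
      },
    ];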

Would love to hear any feedback.
I can also help with this if required.

Information on my n8n setup

  • n8n version: 1.84
  • Database (default: SQLite): simple memory default
  • Running n8n via: webapp

Hey @Merlin_Richter
You have a good point there, but the memory nodes attached directly to the Agent are there just to store chat history (messages between the agent and the user).
For more complex scenarios you are free to enrich your chat history with additional data, and you have a few options for that:

  1. If you are using an external service like Redis, you can take the tool outputs and store them manually using the Redis node
  2. Use the built-in Memory Manager node to enrich your chat history in a similar way (see the sketch after this list)

You will also probably want to make sure that all this data is fed back to the agent with every subsequent message.
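
For option 2, a rough sketch of the pattern (node behavior and field names are from memory and may differ in your n8n version): run the tool output through a Code node that reshapes it into a chat message, then feed that into the Memory Manager node in insert mode so it lands in the session’s history.

    // n8n Code node (JavaScript), placed after the tool call you want
    // to persist. It wraps the tool output as a message for the Memory
    // Manager node to insert into the chat history.
    // "getOrderStatus" is a hypothetical tool name for illustration.
    const toolOutput = $json; // output of the previous (tool) node
    return [
      {
        json: {
          type: "ai", // insert as an AI-side message
          message: `Tool getOrderStatus returned: ${JSON.stringify(toolOutput)}`,
        },
      },
    ];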

Hi @milorad, just a quick follow-up on this topic, if possible.

What would be the pattern for using the Memory Manager node to store the tool responses? I don’t see where in the workflow we can plug it in so that we can retrieve the tool response and insert it into the memory… Maybe I’m missing something here?

Thanks


@milorad do you have an example of how to set this up? I am using MongoDB for the memory and have the same issue, and I would like to store the tool output back in the history so the agent can keep track as it makes multiple calls.

I became so annoyed by that problem that I created a new AI Agent node that fixes it. It correctly saves Tool and Tool Result messages to the conversation history.

Link here: n8n-nodes-better-ai-agent - npm. It can be installed as a community node.

I also added a “Webhook URL” option that you can define to send intermediate steps as they happen, instead of having to wait for the entire output.
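
Judging by the error logs later in this thread, the node is built on Vercel’s ai-sdk, so a preserved history with tool calls would look roughly like its CoreMessage shape (a sketch, not necessarily the node’s exact storage format; tool name is illustrative):

    import type { CoreMessage } from "ai";

    // Assistant tool calls and tool results kept as first-class
    // messages, so later turns can see what was called and returned.
    const history: CoreMessage[] = [
      { role: "user", content: "What's the weather in Lisbon?" },
      {
        role: "assistant",
        content: [
          { type: "tool-call", toolCallId: "call_1", toolName: "getWeather", args: { city: "Lisbon" } },
        ],
      },
      {
        role: "tool",
        content: [
          { type: "tool-result", toolCallId: "call_1", toolName: "getWeather", result: { tempC: 21 } },
        ],
      },
      { role: "assistant", content: "It is currently 21 °C in Lisbon." },
    ];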


That’s awesome! And I see the downloads are growing. I wonder if the n8n team could join in and collaborate to make one better node together.

Just downloaded it and will give it a try! Thanks for putting in the legwork!

Hello. Azure OpenAI doesn’t work with this AI Agent.

Same issue here. I needed to save the tool calls (input and output) for follow-up checks later in the conversation. Putting it all in the chat history is just not pretty or user-friendly, and it is pretty heavy on memory that’s not intended for that purpose. Happy to check out the ‘better agent’. Thanks @fjrdomingues

I have the same issue and came across this post. I tried your community node and got this error: Invalid prompt: messages must be an array of CoreMessage or UIMessage · Issue #2 · fjrdomingues/n8n-nodes-better-ai-agent · GitHub. Is there a way to fix this?

Thanks!

Man, great job. I tried your node, and the issue seems to be caused by the ai-sdk library not supporting the new model version. I saw this kind of message in the Docker container logs, so the problem isn’t with the generation of prompt messages, but rather with model support. Attached is an error snippet from the logs.
    Tool call failed (attempt 1/5): APICallError [AI_APICallError]: models/gemini-pro is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.
        at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/node_modules/@ai-sdk/provider-utils/src/response-handler.ts:59:16
        at processTicksAndRejections (node:internal/process/task_queues:105:5)
        at postToApi (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/node_modules/@ai-sdk/provider-utils/src/post-to-api.ts:111:28)
        at GoogleGenerativeAILanguageModel.doGenerate (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/@ai-sdk/google/src/google-generative-ai-language-model.ts:239:9)
        at fn (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/generate-text/generate-text.ts:321:30)
        at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/telemetry/record-span.ts:18:22
        at _retryWithExponentialBackoff (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/util/retry-with-exponential-backoff.ts:37:12)
        at fn (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/generate-text/generate-text.ts:281:32)
        at /home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/node_modules/ai/core/telemetry/record-span.ts:18:22
        at ExecuteContext.execute (/home/node/.n8n/nodes/node_modules/n8n-nodes-better-ai-agent/dist/BetterAiAgent.node.js:593:34) {
      cause: undefined,
      url: 'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent',
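
For anyone hitting the same error: the log suggests the request targets the retired gemini-pro model id on the v1beta generateContent endpoint. As the error message itself advises, you can call the ListModels endpoint to see which model ids your key supports, then pick a current Gemini model instead. A minimal sketch, assuming your key is in a GOOGLE_API_KEY environment variable:

    // Lists the models your key can use on the v1beta API, along with
    // the methods (e.g. generateContent) each one supports.
    const key = process.env.GOOGLE_API_KEY;
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models?key=${key}`,
    );
    const { models } = await res.json();
    for (const m of models) {
      console.log(m.name, m.supportedGenerationMethods);
    }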

That’s awesome, it works perfectly, and this definitely should be in the main release!