I’m working with an AI agent node in n8n that uses a Postgres database for its memory. The agent can use tools like a “contact buyer” node, which sends messages to users (e.g., buyers) during the workflow.
Issue:
When the agent sends a message using the “contact buyer” node, this action (the “intermediate step”) isn’t recorded in the Postgres memory. As a result, the agent can’t recall which messages it previously sent.
Question:
Is there a recommended way to save these intermediate tool uses (e.g., messages sent to users) into the agent’s memory (Postgres database), so the agent can reference them later in the workflow?
Has anyone implemented something similar or can suggest best practices for tracking tool usage in agent memory within n8n?
Hi @Veniamin_Veselovsky, you can use LangSmith to trace the AI process behind n8n.
Follow this link to learn more about integrating n8n with LangSmith; it gives step-by-step instructions for connecting LangSmith to your n8n instance.
Here is the workflow. Basically, the model uses the think tool and the contact user tool, yet in the end none of the tool calls are listed in the memory. (See the image below, where the prompt was: “what’s the meaning of life, use the think tool and then send the user a message”.)
Thanks. But even in Nate’s video the tool calls are not saved to Supabase. In fact, the tool calls key is empty (9:40) when he uses the Supabase tool.
I am also experiencing this problem. It seems like a major design flaw in n8n, because it makes many AI agent flows impossible to build: flows that are straightforward when calling Gemini, OpenAI, or Anthropic directly appear to be impossible in n8n itself because of this gap in the memory storage.
I want to further add that the technique recommended above, manually inserting the tool response into the chat memory, works quite poorly. That is because there is no way to insert a tool response under the “tool” role: I have to insert it as either an AI message or a System message, both of which cause major behavioral differences compared with how the LLM providers intend tool results to be stored, namely under their own role as unique entries in the chat history.