Manage memory of agent for custom implementation

Hey guys, pretty new to n8n and could really use some help here. I’m trying to build an agent flow that handles group chats in Telegram, but I’ve run into a bit of a roadblock.

So here’s the situation - I’m getting batches of messages from Telegram groups that contain multiple messages from different users, basically many human roles with different content all bundled together. What I want is for the agent to be able to reply either to specific users individually using the Telegram message tool, or send general responses that might address multiple users at the same time, depending on what makes sense for the conversation.

The problem is that I can’t seem to use the built-in memory because of how this works. The memory nodes just take my whole input batch and dump it into the database as one big entry, which obviously isn’t what I want. I need to insert the memory in a more controlled way so each message gets handled properly.

I’m wondering if there’s a way to manipulate the agent’s memory through code nodes so I can have more control over this process? Or maybe there’s a different approach I’m missing entirely? I’d really appreciate any guidance on how to tackle this because I’m feeling a bit stuck.

Thanks in advance for any help!


Hey @ben_berizovsky,

Welcome to the community!

You can handle this cleanly by first splitting the Telegram batch so each message becomes its own item, then skipping the built-in Memory node and keeping your own lightweight memory: one window per user (chat:thread:user) and one shared window for the group/thread (chat:thread:__GROUP__).

Before you call the agent, read both windows, build a short context, and ask the model to return strict JSON: a decision (reply_individual or reply_group) plus the reply text. Then route accordingly: for individual replies, send a message using reply_to_message_id (and message_thread_id if it's a forum topic); for group replies, send one consolidated message at the end of the batch.

Keep the windows trimmed to the last N messages, and if the model suggests a DM but the user hasn't opened one, fall back to an in-thread reply (or share a deep link). This keeps things tidy and predictable.
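The split-and-tag step can be sketched in plain JavaScript; inside an n8n Code node you would run the same logic over $input.all(). The batch shape and field names here are assumptions, so adapt them to what your Telegram trigger actually emits:

```javascript
// Sketch: turn one Telegram batch into one record per message, each
// tagged with the two memory-window keys (per-user and shared group).
// The input shape below is hypothetical, not a Telegram API schema.
function splitBatch(batch) {
  return batch.messages.map((msg) => ({
    ...msg,
    userKey: `${batch.chatId}:${batch.threadId}:${msg.userId}`, // per-user window
    groupKey: `${batch.chatId}:${batch.threadId}:__GROUP__`,    // shared window
  }));
}

// Example: two users in one batch produce two tagged items.
const items = splitBatch({
  chatId: 111,
  threadId: 0,
  messages: [
    { userId: 7, text: 'hi' },
    { userId: 9, text: 'hello' },
  ],
});
console.log(items[0].userKey);  // "111:0:7"
console.log(items[0].groupKey); // "111:0:__GROUP__"
```

Each tagged item can then be appended to its own window before the agent runs, instead of the whole batch going in as one entry.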

Hey thanks for the reply!

I have two different routes for group and direct messages. Let's assume we only have a group chat and the agent has to answer in the group chat: it can answer each message separately, or it can answer multiple people at once in a single message, whatever it chooses. But that's not the issue.

The issue is that when I trigger the agent, it must receive an input, and that input is basically the whole batch of messages.

When I wrote this agent myself without n8n, I basically didn't have a "prompt input"; my input was just the memory that I set before running the LLM.

In the following attachment, I still used the built-in memory with MongoDB and used the memory node to insert each message of the batch (using the looper and a transformer Code node). But I had to set an empty-string input for the agent, so for each message I send in a row, it inserts an empty string and duplicates the responses; in other words, it runs the agent as many times as there are messages in the batch.

Expectation: insert all messages, run the agent ONCE, respond ONCE. (In the future I won't use the output at all; the agent will just have a Telegram tool it can call multiple times to respond. That raises another question: I need the Telegram response tool to save its replies to memory, so I probably need a custom Telegram message tool that handles the insertion into the messages DB.)

I also attached my database to show how it saves the data, plus a Telegram screenshot of the input/output.

Am I headed in the right direction?
Also, the agent receives an input prompt. If I don't use memory, how can I give it the conversation messages (say I fetch them manually from my DB)? It feels wrong inserting them as the input prompt; at the low level I'm used to working with LLMs via a messages list rather than a single input prompt.
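Since the agent node takes a single text prompt rather than a raw messages list, one workable approach is to render the fetched history into a transcript string and pass that as the prompt (or as part of the system message). This is a sketch under that assumption, not the only way to do it:

```javascript
// Sketch: flatten a messages list (as you'd fetch it from your DB)
// into a transcript the agent's single prompt input can carry.
// The role/user/text field names are assumptions about your schema.
function renderTranscript(messages) {
  return messages
    .map((m) =>
      m.role === 'assistant' ? `Assistant: ${m.text}` : `${m.user}: ${m.text}`
    )
    .join('\n');
}

const history = [
  { role: 'user', user: 'alice', text: 'can you summarise today?' },
  { role: 'assistant', text: 'Sure, here is the summary...' },
];
const transcript = renderTranscript(history);
console.log(transcript);
```

You lose the structured roles the low-level chat API gives you, but the model still sees who said what, which is usually enough for routing group replies.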


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.