Can I create multiagent systems with shared memory using n8n?

Hello, can I use n8n to create multiagent systems where the agents have shared memory?

I have seen conflicting messages about whether or not n8n can be used to build multiagent systems. Thanks!

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @mpeters, what do you mean by shared memory?
If you mean shared knowledge memory, here is one of many videos on YouTube: https://youtu.be/t_azqARQSb8

Yes, you can create multiagent systems with shared memory in n8n using several approaches. Here’s how (a small sketch follows the list):

  1. Built-in Memory Options:
  • Window Buffer Memory: Simplest option for storing chat history in the current session(1)
  • Chat Memory Manager: For more complex memory management between agents(2)
  2. External Memory Services:
  • Redis Chat Memory
  • Postgres Chat Memory
  • Motorhead
  • Zep
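
If you want two (or more) AI Agent nodes to read and write the same history, the key is that their memory sub-nodes resolve to the same session key. Below is a minimal sketch of the kind of logic you could put in a Code node right after your trigger; it is written in TypeScript for clarity (n8n’s Code node runs JavaScript, so drop the type annotations), and the field names `chatId`/`sessionId`, plus the assumption that each memory node’s Session ID is set via an expression, are mine rather than guaranteed to match your workflow.

```ts
// Code node sketch: derive ONE session key per conversation so that the memory
// sub-nodes of every agent in the workflow point at the same history.
type ChatItem = { json: { chatId?: string; sessionId?: string; [key: string]: unknown } };

const items = $input.all() as ChatItem[];

return items.map((item) => {
  // Use whatever conversation identifier your trigger actually provides.
  const conversationId = item.json.sessionId ?? item.json.chatId ?? 'default';

  return {
    json: {
      ...item.json,
      // Each agent's memory node would set its Session ID to the expression
      // {{ $json.sharedSessionKey }} so they all share this key.
      sharedSessionKey: `team:${conversationId}`,
    },
  };
});
```

Because every memory node resolves to the same key, they all read and write the same stored messages, which is the sharing behaviour noted under “Memory Management” below.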

Important considerations:

  1. Memory Management:
  • Multiple memory nodes accessing the same session ID will share the same memory instance(3)
  • Use different session IDs if you need separate memory instances (see the sketch after this list)
  2. Resource Limitations:
  • Memory usage varies by plan(4)
  • Consider splitting workflows for better memory management
  • Use external storage for large datasets
  3. Agent Types:
  • Different agent types have different memory capabilities
  • Note that the ReAct Agent doesn’t support memory sub-nodes(5)
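
To make the shared-versus-separate distinction concrete, here is another small Code node sketch (TypeScript for clarity; the agent labels “planner” and “researcher” are purely illustrative assumptions):

```ts
// Code node sketch ("Run Once for All Items" mode): emit both a shared key and
// per-agent keys, so each memory node can pick the scope it needs.
const incoming = $input.first().json as { chatId?: string; sessionId?: string };
const conversationId = incoming.sessionId ?? incoming.chatId ?? 'default';

return [
  {
    json: {
      // Same key on every agent's memory node -> one shared history.
      sharedSessionKey: `team:${conversationId}`,
      // Different keys -> each agent keeps its own, isolated history.
      plannerSessionKey: `planner:${conversationId}`,
      researcherSessionKey: `researcher:${conversationId}`,
    },
  },
];
```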

Thank you so much. That is an incredibly thoughtful response.

The limitation regarding ReAct agents seems like a major drawback. Am I correct in understanding that ReAct agents cannot use memory and context when creating and executing plans?

You’re welcome! I’m glad you found the previous response helpful.

Regarding your question about ReAct agents and memory, you are absolutely correct. ReAct agents in n8n currently do not support memory sub-nodes directly.

This means that, out of the box, ReAct agents cannot retain context or use memory when creating and executing plans within n8n’s LangChain integration. Each interaction with a ReAct agent is essentially treated as a fresh request, without access to past conversation history or stored context through n8n’s memory management features.

Thanks for this response, Daniel. I’m also trying to build a chatbot-like system with some reasoning and action capability. I opted for the Tools AI Agent because of its native memory node and ability to recall previous messages.

I’m quite slow at building in n8n (I’m a newbie), so I thought it might be worth checking with you before I invest a lot of time trying to build it. Would it be theoretically possible to manually pass conversation histories back to a ReAct agent using the method described in the video above? For example, you could instruct it in the prompt, whenever it receives a chat message, to pass that message to a memory node (perhaps Airtable, as in the video), then use the value returned by the node, containing the full conversation (or say the last K=10 messages), as the new full context, and take decisions from there.

Might be a bit overkill, but I’m just curious as I’d like to experiment with the ReAct agent as a chatbot.

Yes, you could theoretically implement a manual memory system for a ReAct agent using the approach you described! While ReAct agents don’t support native memory sub-nodes, your workaround has merit:

  1. Store conversation history in an external system (like a database, or even just using the Chat Memory Manager node)
  2. When a new message arrives, retrieve the conversation history
  3. Include this history in the prompt sent to the ReAct agent (see the sketch after this list)
  4. Have the agent process the full context with each request
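
Here is a rough sketch of step 3, written in TypeScript for clarity (n8n’s Code node runs JavaScript). The field names `role`, `content`, and `chatInput`, and the trigger node name “When chat message received”, are assumptions — adjust them to whatever your storage node and trigger actually return.

```ts
// Code node sketch, placed between the "fetch history" node (Airtable, Postgres,
// Chat Memory Manager, ...) and the ReAct agent.
const K = 10; // keep only the last K stored messages

const history = $input
  .all()
  .filter((item) => item.json.role && item.json.content)
  .slice(-K)
  .map((item) => `${String(item.json.role).toUpperCase()}: ${item.json.content}`)
  .join('\n');

// Pull the newest user message from the trigger node (name is an assumption).
const newMessage = $('When chat message received').first().json.chatInput ?? '';

// The ReAct agent node would then read this via an expression like {{ $json.prompt }}.
return [
  {
    json: {
      prompt:
        `Conversation so far (oldest first):\n${history}\n\n` +
        `New user message: ${newMessage}\n` +
        `Answer the new message, using the conversation above as context.`,
    },
  },
];
```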

Implementation considerations:

  • Prompt Engineering: You’ll need to format the conversation history carefully in your prompt to help the agent understand what’s context vs. new input
  • Token Limitations: As conversations grow, you might need to implement summarization or truncation to stay within model token limits (see the trimming sketch after this list)
  • Performance Impact: This approach will use more tokens per request than native memory solutions
  • Manual State Management: You’ll need to handle session IDs and conversation tracking yourself
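
For the truncation part, something as simple as the following self-contained helper (TypeScript; the characters-to-tokens ratio is a crude heuristic, not a real tokenizer) would keep the most recent messages within a budget:

```ts
// Trim a conversation so the kept messages fit a rough token budget,
// walking backwards so the most recent messages survive.
type Message = { role: string; content: string };

function trimHistory(messages: Message[], maxTokens = 2000): Message[] {
  // Crude estimate: ~4 characters per token. Swap in a real tokenizer for accuracy.
  const estimateTokens = (text: string) => Math.ceil(text.length / 4);

  const kept: Message[] = [];
  let used = 0;

  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}

// Example: keep only what fits before building the prompt.
// const recent = trimHistory(allStoredMessages, 1500);
```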

This approach essentially simulates memory by providing conversation history with each new interaction, which should work for your chatbot experiment with the ReAct agent.

If you’re new to n8n, starting with the Tools Agent might be simpler since it has native memory support, but your proposed solution for ReAct is definitely viable if you want to experiment with its reasoning capabilities!


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.