Using N8N AI Simple Memory Node with queues after 2.4.6+

Describe the problem/error/question

We have n8n self-hosted using queues, and we recently upgraded to version 2.4.8. We are heavily using AI-related nodes. After the update we noticed that the Simple Memory node is gone (it no longer shows in the Agent node's list when you choose Memory). Reading the docs, it seems the Simple Memory node is no longer supported when using queues. We built a lot on top of it, and I would like to know if you are working on a short-term solution to bring Simple Memory back, or to understand the alternatives.

What is the error message (if any)?

No specific error message, just normal workflow errors, since Simple Memory is not showing anymore.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

It happens with any workflow that uses the AI Agent node and tries to add Simple Memory.

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.4.8
  • Database (default: SQLite): Postgresql
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): K8S, Docker
  • Operating system: Linux, alpine:latest

Hi @gnaranjo

This is why it's happening:

When n8n runs in queue mode with multiple workers, Simple Memory is no longer suitable for production because it stores the conversation history only in the local in-process memory. In queue mode, each execution can land on a different worker, and n8n can’t guarantee that all calls to Simple Memory will hit the same worker.
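To make the failure mode concrete, here is a small illustration (not n8n's actual internals) of why per-worker in-process memory breaks in queue mode: each worker holds its own store, so a follow-up execution routed to a different worker simply doesn't see the earlier history.

```javascript
// Illustration only: each worker keeps its own in-process store,
// so history written on one worker is invisible to the other.
class WorkerMemory {
  constructor() {
    this.sessions = new Map(); // sessionId -> array of messages
  }
  append(sessionId, message) {
    if (!this.sessions.has(sessionId)) this.sessions.set(sessionId, []);
    this.sessions.get(sessionId).push(message);
  }
  history(sessionId) {
    return this.sessions.get(sessionId) ?? [];
  }
}

const workerA = new WorkerMemory();
const workerB = new WorkerMemory();

// First execution lands on worker A...
workerA.append('session-123', 'Hi, my name is Gabriel');

// ...but the follow-up execution lands on worker B.
console.log(workerA.history('session-123').length); // 1
console.log(workerB.history('session-123').length); // 0 -- context is "lost"
```

A shared backend (Redis/Postgres) fixes this precisely because both "workers" would read and write the same store instead of their own `Map`.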

The Simple Memory documentation is explicit about this:
“Don’t use this node if running n8n in queue mode. If your n8n instance uses queue mode, this node doesn’t work in an active production workflow. This is because n8n can’t guarantee that every call to Simple Memory will go to the same worker.”

So this isn’t treated as a bug but as an architectural limitation in multi-worker environments. Because the memory is in-memory and per worker, conversation context can disappear or become inconsistent between executions. For production or more advanced use cases, the AI memory docs recommend using persistent/shared memory backends such as Redis Chat Memory (optionally with TTL), Postgres Chat Memory, Xata, or Zep, which store state in a shared database/cache so all workers see the same history.
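For the Redis option with TTL, the idea is roughly the following sketch. The key scheme and names here are illustrative assumptions, not n8n's internal key format; the commented lines show what the calls would look like with a real client such as ioredis.

```javascript
// Hypothetical key scheme: one Redis key per chat session, plus a TTL
// so idle conversations expire instead of accumulating forever.
function memoryKey(prefix, sessionId) {
  return `${prefix}:chat:${sessionId}`;
}

const SESSION_TTL_SECONDS = 60 * 60; // e.g. expire sessions idle for 1 hour

// With a real Redis client (e.g. ioredis), a write would look roughly like:
//   await redis.rpush(memoryKey('myapp', sessionId), JSON.stringify(message));
//   await redis.expire(memoryKey('myapp', sessionId), SESSION_TTL_SECONDS);
// Every worker talks to the same Redis, so all of them see the same history.

console.log(memoryKey('myapp', 'session-123')); // "myapp:chat:session-123"
```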

The documented workarounds are: run without queue mode (single process, where Simple Memory is safe again) or migrate to a persistent external memory (Redis/Postgres/Xata/Zep) when using queue mode to ensure consistent state across workers.

Hi @gnaranjo Welcome!

I recommend restarting your n8n instance and trying an incognito tab. ‘Simple Memory’ still exists, so just restarting your instance should solve the issue.

Hello gnaranjo,

The best alternative is the Postgres Chat Memory node. Since you already have Postgres configured, this is the intended solution for your setup and will allow you to persist the memory correctly.

If you need more help, just lmk! :smile:

Hello, thanks everyone for the responses. Now, this is the challenge I have with Redis or Postgres:

Let's say I create a Redis credential to use in a team project workflow. I then use an AI Agent with Redis Chat Memory, use that credential, let it use the chat sessionId, and it runs.

Now, if I share the credential with someone else and that person has access to the chat sessionId, she would be able to see the whole chat conversation with a “Redis Get” node.

Is there a way to prevent a user from seeing other people's chat memory data, to make things more secure? I'll continue reading, but if you have any ideas, thanks.

Hi @gnaranjo

this is a very valid concern, and the short answer is: there’s no built-in per-user isolation for Redis/Postgres chat memory in n8n today, so security has to be handled architecturally.

Redis Chat Memory and Postgres Chat Memory use a shared backend, keyed only by the session ID you configure in the node. Any workflow using the same DB, the same credential, and the same session ID can read that memory.

Redis/Postgres credentials in n8n are just database connections; if they're shared, any workflow that uses them can access the same data.

There’s no documented feature that encrypts or hides memory per user, or enforces row-level/key-level access control inside Redis/Postgres. So if multiple users share the same credential and know (or guess) the session ID, they can read that memory with a Redis/SQL node — this is expected behavior.

In practice, protection is architectural and organizational:

  • Don't share Redis/Postgres credentials broadly; use separate credentials (and ideally separate DBs/schemas/logical DBs) per isolation level.
  • Use internal, non-trivial session IDs (defense in depth), without exposing them or letting users edit them.
  • Treat workflow and credential permissions as the real access boundary.
  • Use separate memory backends for sensitive cases, with credentials restricted to a limited group.

Reference: