MongoDB Chat Memory node leaks connections — never closes MongoClient instances

Hi n8n team,

We’re experiencing a critical MongoDB connection leak caused by the MongoDB Chat Memory node (@n8n/n8n-nodes-langchain.memoryMongoDbChat).

The problem

The MemoryMongoDbChat.node.js supplyData() method creates a new MongoClient on every workflow execution with no pool size limit (the driver defaults to 100 connections per client). While a closeFunction is returned, the execution engine does not reliably call it, particularly in AI agent workflows with multiple tool calls or when errors occur.

This results in leaked MongoClient instances that keep heartbeat connections open indefinitely, accumulating with every execution. In our case, we reached 355 leaked connections from n8n alone (out of 409 total), nearly exhausting MongoDB's connection limit. MongoDB had only been up for 4 minutes before reaching this state.

Reproduction

  1. Create a workflow with a webhook trigger → AI Agent → MongoDB Chat Memory node
  2. Send repeated requests to the webhook (we hit ~1,000 executions in 6 hours)
  3. Monitor MongoDB connections: db.serverStatus().connections
  4. Connections grow linearly with executions and never drop

Root cause in code

File: @n8n/n8n-nodes-langchain/dist/nodes/memory/MemoryMongoDbChat/MemoryMongoDbChat.node.js

// Line 97 — no pool limits, no idle timeout
const client = new mongodb_2.MongoClient(connectionString);
await client.connect();

The closeFunction is defined but depends on the execution engine calling it. For high-frequency AI agent workflows, this doesn’t happen consistently.
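To make the failure mode concrete, here is a toy illustration (fake "client", hypothetical names, not the node's actual code) of why relying on an engine-invoked closeFunction leaks: the close logic exists, but if the engine never invokes it, nothing ever releases the client created inside supplyData().

```javascript
// Counter stands in for the set of open MongoClient instances.
let openClients = 0;

function supplyDataLike() {
  openClients += 1; // stands in for `await new MongoClient(...).connect()`
  return {
    // The real node returns a closeFunction in the same spirit; the leak
    // happens when the execution engine drops it without ever calling it.
    closeFunction: async () => { openClients -= 1; },
  };
}

// If the engine discards the returned closeFunction, the count only grows:
supplyDataLike();
supplyDataLike();
// openClients is now 2 — two "clients" left open
```

Every execution that skips the closeFunction adds one permanently open client, which matches the linear growth we observed.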

Suggested fix

At minimum, add pool constraints to the MongoClient constructor:

const client = new mongodb_2.MongoClient(connectionString, {
  minPoolSize: 0,
  maxPoolSize: 1,
  maxIdleTimeMS: 30000,
});

Ideally, the node should also use a shared/cached MongoClient per credential rather than creating a new one per execution, similar to how other database nodes handle connection reuse.
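A minimal sketch of that caching idea, keyed by connection string (the factory is injected here only to keep the example self-contained and testable; in the node it would be something like `(cs) => new MongoClient(cs, { maxPoolSize: 1, maxIdleTimeMS: 30000 })` — names and structure are assumptions, not the node's actual code):

```javascript
// One cached client per connection string, so repeated executions reuse the
// same pool instead of opening a fresh MongoClient each time.
const clientCache = new Map();

function getCachedClient(connectionString, createClient) {
  let client = clientCache.get(connectionString);
  if (!client) {
    client = createClient(connectionString);
    clientCache.set(connectionString, client);
  }
  return client;
}
```

A real implementation would also need to evict or close cached clients when credentials change or on shutdown, but even this simple form bounds the connection count to one pool per credential.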

Our workaround

We patched the file inside the container with the above options and restarted n8n. Connections dropped from 359 to 8 immediately.

Environment

  • n8n version: latest (Docker, n8nio/n8n:latest)
  • MongoDB: 8.2.6
  • Node type: @n8n/n8n-nodes-langchain.memoryMongoDbChat
  • Trigger frequency: ~3 executions/minute via webhook

Note: The regular MongoDB node (n8n-nodes-base.mongoDb) properly closes connections in a finally block, but also lacks maxPoolSize limits — worth adding as a safeguard there too.
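For reference, the close-in-finally pattern that node uses can be sketched generically like this (names here are hypothetical, not the node's actual code); the point is that the client is closed even when the work function throws, so error paths don't leak:

```javascript
// Run `work` with a connected client and guarantee cleanup on every path.
async function withClient(createClient, work) {
  const client = createClient();
  try {
    await client.connect();
    return await work(client);
  } finally {
    await client.close(); // runs on success and on error alike
  }
}
```

Combining this pattern with pool limits in the constructor would cover both failure modes: errors mid-execution and idle clients that are never explicitly closed.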

Thanks for looking into this.

Yeah, this is a legit issue, and your patch with maxPoolSize: 1 and maxIdleTimeMS: 30000 is the right move. The closeFunction approach just doesn't work reliably when the AI agent makes multiple tool calls in a single execution, since the engine doesn't always clean up after each supplyData call. Honestly, the node should be using a cached client per credential like most database nodes do; creating a fresh MongoClient on every single execution is pretty brutal at any real throughput. Might be worth opening a GitHub issue on the n8n repo, if you haven't already, so the team picks it up for the langchain package specifically.

Done, issue created. Thanks for the suggestion, will close this topic.