How can I avoid OOM issues?

Hello everyone,

Over the past two days, I’ve started experiencing repeated errors in one of my workflows.

Most of the failures are OOM (Out of Memory) errors. Occasionally, I also see a “connection lost” alert during execution.

What’s confusing is that this workflow is not particularly heavy, and it had been running normally before without any issues. The errors started occurring suddenly without any major changes to the workflow.

The failures most frequently occur at the CS_MAIL_CHECK node.

Environment details:

  • Platform: n8n Cloud

  • Workflow type: Trigger-based workflow

  • Main function: Checking incoming emails and generating reply drafts using an LLM

The error messages typically include:

  • Workflow did not finish, possible out-of-memory issue

  • Sometimes connection lost

I’ve attached the error logs and a screenshot of the workflow for reference.

Has anyone experienced a similar issue recently, especially with n8n Cloud? Any insights or suggestions would be greatly appreciated.

Thank you in advance!

{
  "execution": {
    "id": "1670",
    "url": "https://XXXXXX.app.n8n.cloud/workflow/XXXXXX/executions/1670",
    "mode": "trigger",
    "lastNodeExecuted": "Check CS_MAIL",
    "error": {
      "name": "WorkflowCrashedError",
      "message": "Workflow did not finish, possible out-of-memory issue",
      "timestamp": 1772763142175,
      "context": {}
    }
  },
  "workflow": {
    "id": "XXXXXX",
    "name": "XXXXXX"
  }
}


Usually an email comes in with a massive hidden payload, like a giant base64 inline image or a huge HTML block; memory spikes and the container just crashes.

It might be worth tweaking your ‘cleansing mail data’ node to strictly strip all attachments and pass only plain text to the LLM.
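If it helps, here is a minimal sketch of what that cleansing step could look like in a Code node. The field names (`text`, `html`, `attachments`) are assumptions — adjust them to whatever your email node actually outputs:

```javascript
// Hypothetical cleansing helper for an n8n Code node (a sketch, not the
// poster's actual node). Assumes the email item has optional `text`,
// `html`, and `attachments` fields; rename to match your mail node.
function toPlainText(mail) {
  // Drop attachments entirely so they never reach the LLM.
  const { attachments, html, ...rest } = mail;
  let body = rest.text ?? html ?? '';
  // Crude HTML strip: removing tags also removes base64 inline images
  // embedded in <img src="data:..."> attributes.
  body = body.replace(/<[^>]+>/g, ' ');
  // Remove any bare base64 data URIs left in the text.
  body = body.replace(/data:[^;,\s]+;base64,[A-Za-z0-9+/=]+/g, '[inline image removed]');
  return { ...rest, body: body.replace(/\s+/g, ' ').trim() };
}

// Demo with a fake email; in the Code node you would instead end with:
//   return items.map(item => ({ json: toPlainText(item.json) }));
const cleaned = toPlainText({
  text: null,
  html: '<p>Hello <img src="data:image/png;base64,AAAA"> world</p>',
  attachments: [{ name: 'big.pdf' }],
});
```

Passing only `cleaned.body` to the LLM keeps a single bloated email from blowing the execution’s memory budget.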

Hi @Jinwon_Han, welcome!
I see the issue; it is most probably caused by the memory limits of your plan. I recommend reading this:

Most of the time this is caused by too little available memory.
I also recommend pruning old executions from your executions list, and splitting some of the load into sub-workflows so that everything doesn't come down to a single worker running it all alone.

This is almost always caused by one email coming in with a huge base64 image or attachment buried in the HTML, not by anything you changed. Check whether your Gmail trigger is set to download attachments, and switch the output format to simple/plain text if you can; that alone cuts memory usage massively. It's also worth splitting the LLM part into a sub-workflow, since each execution gets its own memory scope on Cloud.

I run a nearly identical workflow — email checking + LLM draft generation — and OOM on n8n Cloud that starts suddenly without workflow changes almost always means one of two things: the emails being processed got bigger (attachments, forwarded chains, HTML bloat), or the LLM is returning larger responses.

The most common fix: truncate your input before hitting the LLM

Add a Code node before your LLM node:

// Keep only the first 8,000 characters of the email body before the LLM call
const maxChars = 8000;
return [{ json: { ...items[0].json, body: items[0].json.body?.substring(0, maxChars) ?? '' } }];

Most emails don’t need more than 8k characters fed to the LLM anyway — the rest is usually disclaimers and forwarded junk.
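If the workflow processes several emails per run, the same truncation can be applied per item. A sketch (the `body` field name is an assumption; match it to your node's output):

```javascript
// Per-item variant of the truncation, for a Code node in
// "Run Once for All Items" mode.
const maxChars = 8000;
function truncateBody(json) {
  // Cap the body; fall back to an empty string if it's missing.
  return { ...json, body: (json.body ?? '').substring(0, maxChars) };
}

// Demo with fake data; in the Code node you would instead end with:
//   return items.map(item => ({ json: truncateBody(item.json) }));
const out = truncateBody({ subject: 's', body: 'a'.repeat(10000) });
```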

Second thing to check: the CS_MAIL_CHECK node output

Run it manually on a recent failing email and look at the output size. If you’re pulling full HTML bodies with inline images encoded as base64, that’s your issue right there. n8n Cloud has per-execution memory limits and a single heavy email can crash the whole execution.
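To make that size check concrete, a throwaway Code node like this (a debugging sketch, not official n8n tooling) can report how many kilobytes each item actually carries:

```javascript
// Hypothetical size probe: serialize an item's JSON and report its size.
// Drop this in a Code node right after CS_MAIL_CHECK while debugging.
function payloadKB(json) {
  const bytes = Buffer.byteLength(JSON.stringify(json), 'utf8');
  return Math.round((bytes / 1024) * 10) / 10; // KB, one decimal place
}

// Demo with fake data; in the Code node you would instead end with:
//   return items.map((item, i) => ({ json: { index: i, kb: payloadKB(item.json) } }));
const kb = payloadKB({ subject: 'hi', body: 'x'.repeat(2048) });
```

If one email reports orders of magnitude more kilobytes than the rest, that item is your culprit.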

Longer-term fix: split into sub-workflows

Trigger → email check → lightweight filtering in one workflow, and then call Execute Workflow for the actual LLM processing. Each sub-execution gets its own memory budget. I restructured mine this way and haven’t had OOM since.

The “started suddenly without changes” pattern usually means either your email volume/size crept up, or n8n updated something server-side. Which node is CS_MAIL_CHECK — is it Gmail/Outlook/IMAP?