Render hosting + AI agent node or LLM - memory failure

Describe the problem/error/question

Hey folks,

Any time I use the basic LLM call or the AI Agent node, I hit what looks like a memory leak. I've updated from 2.13 to 2.14, but no luck.
I am using a very small Render instance with 512 MB of memory and 0.5 CPU: is that too small?

I don't see any issue in the metrics though; at worst I get 80% RAM consumption (the spikes here are the service being down with zero RAM consumption, not full RAM).

This seems highly specific to the LLM node. Not 100% reproducible either.
I've set the "N8N_PROXY_HOPS" env variable to 1 in case it was related, but it doesn't seem to be.

I can reproduce it with both the Basic LLM Chain and the AI Agent node; it doesn't depend on the model or on the complexity of the workflow.

What is the error message (if any)?

<--- Last few GCs --->


[7:0x74156049a000]   177156 ms: Mark-Compact (reduce) 252.7 (256.9) -> 252.3 (256.2) MB, pooled: 0 MB, 739.09 / 0.00 ms  (+ 5.1 ms in 17 steps since start of marking, biggest step 5.0 ms, walltime since start of marking 833 ms) (average mu = 0.543, curren
[7:0x74156049a000]   178033 ms: Mark-Compact 253.3 (256.2) -> 253.0 (258.2) MB, pooled: 0 MB, 872.96 / 0.00 ms  (average mu = 0.349, current mu = 0.004) allocation failure; scavenge might not succeed

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

----- Native stack trace -----

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.14.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Self-hosted on Render (Docker)
  • Operating system: Ubuntu

Fixed by upgrading to the $25/month instance with 2 GB RAM, farewell to my money…
Conclusion: if you need a Europe zone, switch directly to your own VPS on OVH or Hetzner. It's a lot of config, but it's definitely way less expensive (rough sketch below the link).
Related Reddit post : https://www.reddit.com/r/n8n/comments/1nf1hnd/render_network_starter_plan_and_ai_agents_need/
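For anyone taking the VPS route, here's a minimal sketch of running n8n under Docker on a small box. The image name, port, volume path, and DB_* variable names are n8n's documented defaults; the Postgres host, credentials, and the 1536 MB heap cap are placeholders to adjust:

# Minimal n8n-on-a-VPS sketch; assumes ~2 GB RAM and an external Postgres.
docker volume create n8n_data
docker run -d --name n8n --restart unless-stopped \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=1536" \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=your-postgres-host \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=n8n \
  -e DB_POSTGRESDB_PASSWORD=change-me \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

You'd still want a reverse proxy and TLS in front of it, which is where most of the "lot of config" goes.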


Good call upgrading — LLM nodes need significant memory. If you’re staying on budget, consider VPS alternatives like Hetzner Cloud (€5/month for 2GB) or scaling jobs to a separate runner. Either way, 512MB was too tight for any LLM work.

512 MB of RAM is too small for LLM nodes in n8n. The AI Agent and Basic LLM Chain nodes load the model response into memory, and the Node.js heap fills up fast; this has been a known issue since 2.13.
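A side note on the log: the heap limit it reports is roughly 256 MB, which is consistent with V8 sizing its default heap from the memory it can see; on small machines that often works out to about half the RAM, though the exact heuristic varies by Node version. You can check what limit a given instance actually picked with a one-liner (heap_size_limit is a standard field of Node's built-in v8 module):

# Print the heap limit V8 chose for this process, in MB.
node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1048576) + ' MB')"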

Two options:

1. Upgrade the Render instance to at least 1 GB RAM. This is the real fix: LLM nodes need headroom, and with 512 MB you'll keep hitting this regardless of n8n version.

2. If you want to stay on the free tier, add this env variable to your Render service:

NODE_OPTIONS=--max-old-space-size=400

This caps the Node.js heap below the container's 512 MB limit, so the process aborts with a clear heap error instead of being silently OOM-killed by the platform. It won't solve the root cause, but it reduces crash frequency.
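If you set that, it's worth confirming the flag actually took effect; assuming you can run a shell in the container (or reproduce locally), the same heap_size_limit one-liner from above shows the new cap:

# With the cap set, the reported limit should land a bit above 400 MB,
# since heap_size_limit includes semi-space overhead on top of old space.
NODE_OPTIONS=--max-old-space-size=400 node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1048576) + ' MB')"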

The spikes dropping to 0 RAM in your chart mean the process is crashing and restarting, not a memory leak in the traditional sense. It's an OOM kill.
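If you can reproduce the crash in Docker locally (Render's dashboard won't give you this view), the container's exit status distinguishes the two failure modes: 137 (SIGKILL) with OOMKilled=true is the kernel's OOM killer, while 134 (SIGABRT) matches Node's own "heap out of memory" abort like the log above. Assuming the container is named n8n:

# How did the container die? ExitCode 137 + oom_killed=true => kernel OOM kill;
# ExitCode 134 => Node aborted on its own heap limit.
docker inspect --format 'exit={{.State.ExitCode}} oom_killed={{.State.OOMKilled}}' n8n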

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.