Describe the problem/error/question
Hey folks,
Any time I use the Basic LLM Chain or AI Agent node, I hit a memory leak. I’ve updated from 2.13 to 2.14, with no luck.
I am using a very small Render instance, with 512 MB of RAM and 0.5 CPU: is that too small?
I don’t see any issue in the metrics, though; at worst I see about 80% RAM consumption (the spikes here are the service being down with zero RAM consumption, not RAM maxing out).
This seems highly specific to the LLM nodes, and it’s not 100% reproducible either.
I’ve set the “N8N_PROXY_HOPS” env variable to 1 in case it was related, but it doesn’t seem to be.
I can reproduce it with both the Basic LLM Chain and the AI Agent node; it doesn’t depend on the model or the complexity of the workflow.
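For reference, this is roughly the environment the container runs with, sketched as a plain `docker run` (on Render these values are set via the dashboard, so the exact invocation below is an assumption on my part). The `NODE_OPTIONS` line is a standard Node.js flag I’m considering to cap the V8 heap below the 512 MB instance limit, not something I’ve applied yet:

```shell
# Sketch of the n8n container environment (Render equivalents are set in the dashboard).
docker run -d --name n8n \
  -e N8N_PROXY_HOPS=1 \
  -e DB_TYPE=postgresdb \
  -e NODE_OPTIONS="--max-old-space-size=384" \  # candidate heap cap, not yet applied
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n
```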
What is the error message (if any)?
<--- Last few GCs --->
[7:0x74156049a000] 177156 ms: Mark-Compact (reduce) 252.7 (256.9) -> 252.3 (256.2) MB, pooled: 0 MB, 739.09 / 0.00 ms (+ 5.1 ms in 17 steps since start of marking, biggest step 5.0 ms, walltime since start of marking 833 ms) (average mu = 0.543, curren
[7:0x74156049a000] 178033 ms: Mark-Compact 253.3 (256.2) -> 253.0 (258.2) MB, pooled: 0 MB, 872.96 / 0.00 ms (average mu = 0.349, current mu = 0.004) allocation failure; scavenge might not succeed
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: 2.14.2
- Database (default: SQLite): PostgreSQL
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): Self-hosted on Render (Docker)
- Operating system: Ubuntu
