Memory leak

Hello.
I have a problem with constantly growing RAM usage until n8n crashes and the container reloads.
My VM: 2 vCPU, 4 GB RAM.

My workflow does not process heavy data, but it is frequently triggered by HTTP requests on a webhook.


What the workflow does:

  • get one row from the DB by the token from the HTTP request
  • check whether the token is valid or empty
  • filter fields for the HTTP response

I know that the Code node can consume memory, but it should not stay in memory forever after the workflow has finished, and I’m not sure that node is to blame. So I’m trying to find the root cause and a solution with your help, guys.

Here is the log from Docker:
Editor is now accessible via:
http://localhost:5678/
(node:7) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.

<--- Last few GCs --->

[7:0x7fb0ad1f8650] 18500972 ms: Scavenge 2983.5 (3105.3) → 2976.3 (3105.3) MB, 11.83 / 0.00 ms (average mu = 0.968, current mu = 0.969) task;
[7:0x7fb0ad1f8650] 18501020 ms: Scavenge 2983.6 (3105.3) → 2976.7 (3105.3) MB, 12.26 / 0.00 ms (average mu = 0.968, current mu = 0.969) task;
[7:0x7fb0ad1f8650] 18501047 ms: Scavenge 2983.2 (3105.3) → 2976.9 (3121.3) MB, 13.59 / 0.00 ms (average mu = 0.968, current mu = 0.969) task;

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----

User settings loaded from: /home/node/.n8n/config
Last session crashed
Initializing n8n process
n8n ready on 0.0.0.0, port 5678
Found unfinished executions: 195672, 195673, 195674, 195678, 195680, 195681, 195682, 195684, 195688, 195692, 195693, 195696, 195697, 195700, 195675, 195676, 195677, 195679, 195683, 195685, 195686, 195687, 195689, 195690, 195691, 195694, 195695, 195698, 195699
This could be due to a crash of an active workflow or a restart of n8n.
Currently active workflows:

  • GL | wh (getconfig) (ID: 2g2tF0yfv5ClNnV6)

[Recovery] Logs available, amended execution
(node:7) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 editorUiConnected listeners added to [Push]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit
(Use node --trace-warnings ... to show where the warning was created)
Marked executions as crashed
[Recovery] Logs available, amended execution
Marked executions as crashed

Information on n8n setup

  • n8n version: 1.63.4
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 18.04

Hello @Ivan_Sheev

Welcome to the community! With that pattern it feels like something builds up before it breaks, and it doesn’t seem to be an edge case (e.g. a spike event of some sort). It might be worth exploring queue mode: Configuring queue mode | n8n Docs
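Very roughly, queue mode means adding Redis and moving execution into separate worker containers, so the main instance (the one receiving the webhook calls) no longer runs the workflows itself. A minimal sketch only, assuming a Redis container reachable as "redis" and the official image; the doc linked above is the source of truth for your version (shared Postgres database, matching N8N_ENCRYPTION_KEY on all instances, etc.):

# main instance: serves the UI/webhooks and enqueues executions in Redis
docker run -d --name n8n-main \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  -e QUEUE_BULL_REDIS_PORT=6379 \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n

# one or more workers: pull executions off the queue and run them
docker run -d --name n8n-worker-1 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  -e QUEUE_BULL_REDIS_PORT=6379 \
  docker.n8n.io/n8nio/n8n \
  worker

The upside is that bursts of webhook calls queue up in Redis instead of accumulating inside a single 4 GB process, and the workers can be restarted or scaled independently of the main instance.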

Are the requests consistent or are you getting bursts from time to time?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.