Self-hosted n8n uses a lot of memory on my Vultr machine

I’ve self-hosted 2 separate n8n instances on 2 of my Vultr machines.

Config of both machines:

Machine 1: ssh -i vultr-ubuntu-instance

vCPU/s: 2 vCPUs

RAM: 16384.00 MB

Storage: 100 GB NVMe

Bandwidth: 1.96 GB

OS: Ubuntu 24.04 LTS x64

Machine 2: ssh -i yahoo-vultr-ubuntu-instance

vCPU/s: 2 vCPUs

RAM: 16384.00 MB

Storage: 100 GB NVMe

Bandwidth: 1.46 GB

OS: Ubuntu 24.04 LTS x64

The guide I followed for self-hosting is this one: https://www.youtube.com/watch?v=f6J-MM0GVtw&list=PLABkyn8HAQVEcwkqSIa2nsdfMfvKEQbHF&index=3

From the logs below, you can see that my n8n deployments are consuming a lot of memory (>400 MB). Is this normal? How can I optimize it? Currently, as the memory consumption goes up, my n8n app becomes very slow.

[screenshots of memory usage on both instances]

Honestly, that looks like a healthy amount of RAM usage; I get similar usage on a production instance. The slowdown could be due to a workflow being too big or too expensive on resources, depending on its complexity and the size of the data flowing through it. How many workflows do you have running on that instance?

Usage on my instance using queue mode

I have only 1 workflow running. The consumption might be due to the loop: I’m scraping data from the blockchain in batches of 400 every 2 hours.
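Roughly, the loop has this shape (a simplified stand-alone sketch, not my actual workflow — `fetchBatch` is a made-up stand-in for the blockchain call). Keeping every batch referenced at once is what makes the heap grow:

```javascript
// Hypothetical stand-in for the real blockchain fetch: returns 'size' items.
function fetchBatch(offset, size) {
  return Array.from({ length: size }, (_, i) => ({ id: offset + i }));
}

// Pattern 1: accumulate everything — heap grows with the number of batches,
// because every item from every batch stays referenced until the end.
function scrapeAll(batches, size) {
  const all = [];
  for (let b = 0; b < batches; b++) {
    all.push(...fetchBatch(b * size, size));
  }
  return all; // batches * size objects held in memory at once
}

// Pattern 2: process and drop each batch — heap stays roughly flat,
// since 'batch' goes out of scope each iteration and the GC can reclaim it.
function scrapeStreaming(batches, size, handle) {
  for (let b = 0; b < batches; b++) {
    const batch = fetchBatch(b * size, size);
    handle(batch); // e.g. write to a DB or file immediately
  }
}

let count = 0;
scrapeStreaming(5, 400, (batch) => { count += batch.length; });
console.log(scrapeAll(5, 400).length, count); // 2000 2000
```

Both patterns see the same 2000 items; only the first keeps them all alive at the same time.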

From my logs:

/home/linuxuser/.pm2/logs/n8n-error.log last 10000 lines:
10|n8n |
10|n8n | <— Last few GCs —>
10|n8n |
10|n8n | [2298301:0x23bd3000] 7133084 ms: Mark-Compact (reduce) 3739.7 (3861.8) → 3738.2 (3830.1) MB, pooled: 0 MB, 3061.39 / 0.00 ms (average mu = 0.272, current mu = 0.000) last resort; GC in old space requested
10|n8n | [2298301:0x23bd3000] 7136206 ms: Mark-Compact (reduce) 3738.2 (3830.1) → 3738.2 (3829.3) MB, pooled: 0 MB, 3121.56 / 0.00 ms (average mu = 0.152, current mu = 0.000) last resort; GC in old space requested
10|n8n |
10|n8n |
10|n8n | <— JS stacktrace —>
10|n8n |
10|n8n | FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
10|n8n | ----- Native stack trace -----
10|n8n |
10|n8n | 1: 0xe13fde node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 2: 0x11d5070 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 3: 0x11d5347 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 4: 0x13f1c5c v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 5: 0x13c9e6e v8::internal::factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 6: 0x13b84cc v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray(int, v8::internal::AllocationType) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 7: 0x13b8626 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::AllocationType) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 8: 0x16ebcd7 v8::internal::OrderedHashTable<v8::internal::OrderedHashMap, 2>::Allocate(v8::internal::Isolate*, int, v8::internal::AllocationType) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 9: 0x16ebd62 v8::internal::OrderedHashTable<v8::internal::OrderedHashMap, 2>::Rehash(v8::internal::Isolate*, v8::internal::Handle<v8::internal::OrderedHashMap>, int) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 10: 0x1821276 v8::internal::Runtime_MapGrow(int, unsigned long*, v8::internal::Isolate*) [node /home/linuxuser/.nvm/versions/node/v22.16.0/bin/n8n]
10|n8n | 11: 0x705f5de6c476
10|n8n | (node:2304575) [DEP0060] DeprecationWarning: The util._extend API is deprecated. Please use Object.assign() instead.
10|n8n | (Use node --trace-deprecation ... to show where the warning was created)

My server keeps crashing after this.
What’s the solution? I know there is a lot of data flowing through my workflow; maybe that’s why so much memory is being consumed. Should I increase the limit from 4 GB to 8 GB?

Where can I learn about execution data optimization in n8n?

That seems like 100% normal memory use. Here I have a heavy workflow pulling 17,000 records from a DB to put into an Excel file on Google Drive. It raises the memory usage to 2 GB before clearing back down to 200 MB a few minutes later. You can also limit the memory available to an instance; if you look at my n8n-dev instance, it’s limited to 1 GB (PS: I’m using Docker here to manage my hosting). However, since your host has the RAM available, I don’t see a need for you to tamper with that.

Worker 1 picked up the workflow execution and used some memory.

After a few minutes, once the task was done, worker 1 dropped back down to idle RAM:

[screenshot of worker memory usage]

As for your server crashing, I’m not too sure what could be causing that. You can try to increase Node’s heap size by setting this env var:

export NODE_OPTIONS="--max-old-space-size=8192"
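Since you’re running under PM2, note that an `export` in your shell won’t survive a reboot or `pm2 resurrect`. A sketch of a PM2 `ecosystem.config.js` that bakes the flag in, along with n8n’s execution-data pruning variables (`EXECUTIONS_DATA_SAVE_ON_SUCCESS`, `EXECUTIONS_DATA_PRUNE`, `EXECUTIONS_DATA_MAX_AGE` — check the n8n docs for your version; the app name and script path here are assumptions about your setup):

```javascript
// ecosystem.config.js — hypothetical example; adjust name/script to your install
module.exports = {
  apps: [
    {
      name: "n8n",
      script: "n8n",
      // raise the V8 old-space heap limit to 8 GB (you have 16 GB of RAM)
      node_args: "--max-old-space-size=8192",
      env: {
        // don't keep full execution data for successful runs
        EXECUTIONS_DATA_SAVE_ON_SUCCESS: "none",
        // prune stored executions, keeping at most 7 days (168 hours)
        EXECUTIONS_DATA_PRUNE: "true",
        EXECUTIONS_DATA_MAX_AGE: "168",
      },
    },
  ],
};
```

Then `pm2 start ecosystem.config.js`, or `pm2 restart n8n --update-env` after editing, so the new env actually takes effect.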

If you want, you can also share your workflow in a code block and we can take a look at whether you’re handling the data correctly.


“It raises the mem usage to 2GB before clearing down to 200MB again after a few minutes.”

Yes sir!

My workflows were consuming a hell of a lot of data. I optimized those workflows, and everything works fine now. Thanks for this!


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.