I have n8n installed on DigitalOcean using Docker Compose. I don't have any workflows active at the moment, but it seems to be consuming a lot of memory. Does anyone know why, and how to reduce it so that when I activate a workflow it doesn't overload the memory?
Thanks!
It’s a Node.js application, so you can expect it to consume a decent chunk of memory.
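If it’s the baseline footprint that worries you, the usual knob is V8’s heap ceiling, which n8n picks up through NODE_OPTIONS. A minimal compose sketch, assuming a stock n8nio/n8n image; the 768 MB figure is illustrative, not a recommendation:

```yaml
# docker-compose.yml (sketch): cap the Node.js heap so n8n can't grow
# past what the droplet can spare
services:
  n8n:
    image: n8nio/n8n
    environment:
      # limit V8's old-space heap to ~768 MB; tune to your droplet
      - NODE_OPTIONS=--max-old-space-size=768
```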
However, define “overload”: I have n8n running with 1 GB RAM on the main node and 512 MB on a single worker node, with 25+ workflows, some of which are huge and complex, and zero issues.
By overload I mean I can run my one workflow for 5–10 minutes before the CPU usage reaches 100% and kswapd0 takes over. At that point there is no recovery and I have to stop the Docker container and start it again. The above screenshot was the CPU usage without any workflows running.
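Side note: DO droplets ship without swap by default, so under memory pressure kswapd0 has nowhere to evict pages to and just spins. A small swap file may at least keep the box recoverable; a sketch for Ubuntu (the 1 GB size is arbitrary):

```bash
# create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# persist it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```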
IIRC a 2 GB DigitalOcean droplet has around 1 GB actually usable. That doesn’t leave much headroom for the OS and the app to operate together.
I also use DO, and I recommend a 4 GB / 2 vCPU droplet if you want a good experience with a dedicated droplet (the specs I mentioned in my first post are based on Kubernetes workload resource limits).
Also, I didn’t even ask about your database: I use a DO managed Postgres instance, which takes a lot of load off the droplet (or run a dedicated droplet for SQL if you don’t want to pay the $15/month for a 1 GB managed DB).
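For reference, switching n8n from the default SQLite to an external Postgres is a handful of environment variables. A sketch with placeholder host, port, and credentials (substitute the values from your managed instance):

```yaml
# docker-compose.yml (sketch): point n8n at a managed Postgres instance
services:
  n8n:
    image: n8nio/n8n
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=your-db-host.db.ondigitalocean.com  # placeholder
      - DB_POSTGRESDB_PORT=25060                               # placeholder
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=changeme                        # placeholder
```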
If it’s purely about price, you might want to look at Vultr or Linode, though I use DO and have nothing bad to say about them; their service is very good.
There is a big difference between CPU and memory. n8n keeps data in memory while workflows are running, so if you were loading a few thousand items or working with files, I would expect some memory usage.
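If the failure mode is the whole droplet locking up, one option is a hard cap on the container itself, so Docker OOM-kills and restarts n8n instead of the host grinding into swap. A sketch; the 1 GB limit is illustrative:

```yaml
# docker-compose.yml (sketch): hard memory cap on the n8n container
services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped   # come back automatically after an OOM kill
    mem_limit: 1g             # container is killed rather than swapping the host
```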
Do you have the output of top with n8n running, and maybe an idea of what your workflows are doing? I have run n8n on a lot less without issue. Is n8n also using that MySQL database you have installed there, or is it still on SQLite?
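A couple of one-liners to capture those numbers, using standard Docker and procps tools:

```bash
docker stats --no-stream          # per-container CPU and memory, Docker's view
top -b -n 1 -o %MEM | head -n 15  # one batch-mode snapshot, sorted by memory use
```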