n8n Performance on DigitalOcean

Describe the problem/error/question

So today I had to reboot my DigitalOcean droplet, a sudo reboot. Luckily it was easy enough and not a hard reboot. The issue is that the load was not even that heavy on calls, and I realised that if I had 300 users it would just crash.

I currently have the most basic Digital Ocean droplet.

This one: [screenshot of the basic droplet plan]


I use it to automate a lot, specifically calling image generation models, then adding watermarks, then uploading to another API, etc… so not too much, but it is handling ~2 MB files I guess.

Here’s my droplet graph when it crashed: [screenshot of the droplet CPU/memory monitoring graph]



What I did when it crashed:
Launched 30 parallel executions, so really not that much.

So the question is, what is the ideal CPU/memory/SSD spec I need?
I’m currently only looking at CPU and RAM, should I add more disk?

I’m very much a newbie at droplet management… I’d like something that will just perform and handle loads of traffic, but then again I don’t know what the right “normal” level is to start with, or when to upgrade.

Thank you for your help!

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Thanks for this. Here are the details:

  • n8n version: 1.0.5
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default (own?)
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Droplet setup in Digital Ocean
  • Operating system: Ubuntu

Hey @Vince,

When it crashed, did you check the OS logs to see if there was any indication of what the issue was? Typically 1 core and 1 GB will get you started, but how much you actually need depends on what you are doing, so the best option is to monitor and increase as needed.
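If you have not checked yet, something like the below should show whether the kernel's OOM killer stopped a process when the memory ran out (these are standard Ubuntu commands, the grep patterns are just examples):

# Check the kernel log for out-of-memory killer messages
sudo dmesg -T | grep -i "out of memory"
# Or search today's kernel messages in the systemd journal
sudo journalctl -k --since today | grep -i oom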

It may also be worth remembering that you have 1 vCPU and 1 GB of memory to run the OS, n8n and anything else that may be on the machine. n8n itself is normally memory heavy, but if you are using a lot of Code nodes they can also require more CPU power.
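As a rough sketch of monitoring, assuming Docker is installed as in your setup, something like this would show where the memory is actually going:

# One-off snapshot of per-container CPU and memory usage
docker stats --no-stream
# Overall host memory usage, including swap
free -h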

1 Like

Thanks @Jon, that is what I was hoping to do, but then when the memory is maxed out everything dies :face_holding_back_tears:
It’s a dedicated droplet for n8n only.

OK, so it’s heavy on memory. Does 3 GB of RAM sound decent? Or then again…

I’m using a bit of JS code, but not heavily. Probably 2 nodes in each workflow to clean things up.

Do images processed within n8n get automatically saved to disk? If so, is there a way to purge them?

Thanks for the help!

1 Like

Hey @Vince,

It is probably worth also mentioning that n8n is only memory heavy because we read things into memory, so if you are working with binary data those files will sit in memory. This can be tweaked by setting the N8N_DEFAULT_BINARY_DATA_MODE environment option to filesystem, which will help, and we have recently made some improvements to nodes that read files which have helped a lot.

With the binary data itself, if you are using the filesystem option the files will be deleted automatically along with your execution logs, so disk space should be OK. Normally the biggest file for n8n is the SQLite database, so if you have not already done it I would recommend setting something like the below to reduce the amount of disk space used. This will keep the execution data for 24 hours.

EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=24
DB_SQLITE_VACUUM_ON_STARTUP=true

It may also be worth upgrading n8n to make the most of some of the recent performance improvements. As for the memory, I think you will be OK with 2 GB, but 3 GB will give you room to grow as well.
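If you want to see how big the database currently is, something like this should do it (the container name n8n and the path are just examples, they may differ in your setup):

# Show the size of the SQLite database inside the n8n container
docker exec n8n ls -lh /home/node/.n8n/database.sqlite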

2 Likes

Oh wow, I didn’t know this and that’s probably why my VPS is 1TB in size :sweat_smile:.

So to confirm, we would just add

N8N_DEFAULT_BINARY_DATA_MODE=filesystem

to our environment variables? Sorry to hijack Vince lol but this is something I also struggle with.

Jon, my current environment variables look like this

export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true

export EXECUTIONS_DATA_SAVE_ON_ERROR=all

export EXECUTIONS_DATA_SAVE_ON_SUCCESS=all

export N8N_LOG_LEVEL=info

export EXECUTIONS_DATA_PRUNE=true

export EXECUTIONS_DATA_MAX_AGE=24

export EXECUTIONS_PROCESS=main

export NODE_FUNCTION_ALLOW_EXTERNAL=uuid

export NODE_FUNCTION_ALLOW_BUILTIN=request-promise-native

Anything else obvious you think I could do to help on this?

Hey @stuart,

Yeah, that would do it. Assuming you are on a recent version of n8n you should be all good, but before the filesystem option existed n8n would keep the files in memory unless you were writing them to disk. If your disk is at 1 TB I would recommend finding out what the largest folders are, and I would also set the execution data to only save on failure by default, that way you will have less log data to worry about.
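As a rough sketch of what that could look like (the paths and values here are just examples):

# Find the largest top-level directories on the disk
sudo du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -20
# Only keep execution data for failed runs
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
export EXECUTIONS_DATA_SAVE_ON_ERROR=all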

1 Like

Thanks so much. I am trying to dig into the data to clean it up, I expect I can probably get the VPS size down significantly :muscle:

1 Like

haha Stuart :rocket::surfing_woman:

@Jon thank you for the above, super super useful.

Just confirming, does it go in the docker-compose.yml file, under the environment section,

or

in the .env file, under all the n8n access configs?

Being Docker, I’d assume the docker-compose.yml file?

Thanks for confirming!

Hey @Vince,

In the compose file. You can also drop the basic auth lines as we don’t support that anymore :slight_smile:
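Once the compose file is updated you will need to recreate the container for the new variables to apply. Something like this should do it (the service name n8n is just an example, and older installs may use docker-compose instead of docker compose):

# Recreate the container so the new environment variables are picked up
docker compose up -d
# Confirm the variable is visible inside the running container
docker compose exec n8n printenv N8N_DEFAULT_BINARY_DATA_MODE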

Thanks @Jon as always! Now implemented :slight_smile:

I’ll keep it open until @stuart has implemented it as well, in case he has questions. The change didn’t noticeably bring down my overall graphs,
but I’ll ramp up to a 2 GB premium CPU droplet to see how that helps, and stress-test it.
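One rough way I’m planning to reproduce the parallel load, assuming a webhook-triggered workflow, is to fire a batch of requests at once from another machine. Something like this (the URL is just a placeholder for my actual webhook):

# Fire 30 webhook requests in parallel against a test workflow
seq 30 | xargs -P 30 -I{} curl -s -o /dev/null -X POST https://your-droplet.example.com/webhook/test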

2 Likes

Thanks Jon, just leaving a follow-up in case someone else finds themselves in a similar situation.

I implemented these changes and also changed EXECUTIONS_DATA_SAVE_ON_SUCCESS to none, and it has been a game changer for my setup. My Cloudron system info shows disk usage has gone from over 1 TB (massive!) to a consistent, no-longer-rising figure of less than 40 GB for the whole VPS, which has a bunch of things on it.

Thanks for your help, and if anyone else is doing a considerable image manipulation setup with n8n and struggling with disk size/requirements, these are some of the settings you would want to use.

1 Like

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.