I am currently a cloud user, but I am looking to start self-hosting as I am reaching its limits. Plus, I have another PC sitting around that is not being used (Intel i9-9900K, 16 GB of 3200 MHz RAM).
My question is: is this sufficient? I have 4 workflows averaging 10 steps each, running almost every hour throughout the day, and they move enough data that I have been running out of memory on my cloud account. If I were able to, I could triple the number of workflows within a few weeks.
Assuming each of these steps is not very computationally intensive, this setup is more than enough.
I know people who run n8n on a VPS with barely 1 shared CPU / 1 GB RAM with no problems, although in my experience such a setup doesn't leave enough headroom for peak loads.
Now, back to the out-of-memory issues. There are two tricks to overcome them. First, check out this section of the docs on environment variables.
You can set `N8N_DEFAULT_BINARY_DATA_MODE=filesystem` to store binary data on disk instead of in memory, which reduces RAM usage.
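On a Docker-based install, that could look something like this (a minimal sketch; the container name and volume path are just examples, so adjust them to your setup):

```bash
# Store binary data on disk instead of holding it in memory.
# The mounted volume keeps that data (and your n8n config) across restarts.
docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```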
As a more advanced trick, you can use the Execute Command node to run resource-heavy activities at the system level, as in the sketch below. If you are downloading huge files or moving them around, you can do it via system commands, so you won't even need to create n8n binary data unless you actually need it.
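For example, a single Execute Command node could stream a big download straight to disk and move it into place, so the file never passes through n8n itself (the URL and paths below are made up, purely for illustration):

```bash
# curl streams the response body straight to the output file,
# so the whole download never sits in memory.
curl -fsSL -o /data/tmp/export.zip "https://example.com/big-export.zip"

# A move within the same filesystem is just a rename, so it is cheap too.
mv /data/tmp/export.zip /data/archive/export.zip
```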
That will do the job. My first n8n install was on an old Pi, which ran for about six months before I moved it to a cheap Intel NUC, where it has been running happily for over a year now. I have 31 active workflows out of the 398 in my database (the rest accumulated from looking into various support issues and testing).