Is my memory usage of n8n unusual?


I just created a container with n8nio/n8n image, but it consumed 100MiB of my memory.
Is my memory usage of n8n unusual?

CONTAINER ID        NAME                  CPU %               MEM USAGE 
a3c0adaf28b3        task.server           0.00%               109.2MiB

Welcome to the community @kingcc!

No, that looks fine to me. For me it even uses slightly more. So all good.

Hi jan,

Thank you for your hard work and for replying to me!

Do you have any plans to reduce it? Because I think 100 MiB is still a bit much for a Node.js app on standby. (Or am I wrong…)

Many thanks

Yes, improving that is always planned, but to be honest it is not high on our agenda right now, and I am also not sure if huge improvements are currently even possible.

What is planned is making it easier to load external nodes in the future. Until that is in place, memory usage will probably grow further, as some new integrations need additional modules. Also, n8n will always need much more memory than a custom Node.js application written for the same task; there is sadly always an overhead to running a tool like n8n.


What is planned is making it easier to load external nodes in the future

Great! That will be really helpful, as it avoids having to upgrade the n8n instance when adding new nodes :slight_smile:

Yes, but in many cases, a restart would still be needed if they require additional npm modules.

1 Like

I think the regular startup/standby memory usage of n8n is fine.

However, memory usage grows a lot during processing of messages.

I have workflows that download large JSON documents (4.5 MB).
Each one goes through 6-8 nodes.
While processing, memory consumption goes up to 2.8 GB.

I think two effects come into play:

  1. JSON-parse memory usage increases non-linearly. A quick search for “node json memory” (or similar) turns up a few experiments suggesting that Node uses much more memory for a single large document than for the same document chopped into smaller parts.

  2. Every node in the workflow keeps its own copy loaded. This could perhaps be a quick fix: after a node has passed its message to the next node, it could clear its own data.
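As a rough illustration of point 1, here is a small plain-Node.js sketch (not n8n code; the data sizes are arbitrary illustration values) that compares parsing one big JSON document against parsing the same data in smaller chunks, using Node's built-in `process.memoryUsage()`:

```javascript
// Compare parsing one large JSON string versus the same data
// split into many smaller chunks. Illustration only, not n8n internals.

function heapUsedMiB() {
  return process.memoryUsage().heapUsed / (1024 * 1024);
}

// Build an array of 100k small objects and serialize it once.
const items = Array.from({ length: 100000 }, (_, i) => ({ id: i, payload: 'x'.repeat(20) }));
const oneBigDoc = JSON.stringify(items);

// Parse it as a single large document: the whole result stays alive at once.
const before = heapUsedMiB();
const parsedWhole = JSON.parse(oneBigDoc);
const afterWhole = heapUsedMiB();
console.log(`whole document: +${(afterWhole - before).toFixed(1)} MiB on the heap`);

// Parse the same data in chunks of 1000 items, keeping only the current
// chunk alive so earlier chunks can be garbage-collected.
const chunkSize = 1000;
let lastChunk = null;
for (let i = 0; i < items.length; i += chunkSize) {
  const chunkJson = JSON.stringify(items.slice(i, i + chunkSize));
  lastChunk = JSON.parse(chunkJson);
}
console.log(`chunked parsing kept only ${lastChunk.length} items alive at a time`);
```

The exact numbers vary by Node version and GC timing, but the chunked variant never needs the full parsed result in memory at once.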

PS: My workaround was to split the workflow into two workflows; Workflow B starts after Workflow A has finished. Before that, I kept running into dying n8n nodes.

That is not totally correct. By default, only references get passed around where possible, so the data does not get copied for every node. For that reason it is so important that nodes copy the data themselves if they change it; if they do not, it messes up the workflow.

Whether data gets copied or not depends on the node and the parameters that are set.
One node which always copies all the data is, for example, the Function-Node. The reason is that n8n does not know what happens in there, so to be safe it copies everything.
The Set-Node, on the other hand, copies the data depending on its settings: if “Keep Only Set” is enabled, it does not copy the data (as it creates a totally new item); if it is not, it copies it.
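To sketch why passing references is cheap but requires nodes to copy before mutating, here is a tiny plain-JavaScript example (the “node” functions are hypothetical, not actual n8n code; a deep copy via `JSON.parse(JSON.stringify(...))` stands in for whatever copy mechanism a node uses):

```javascript
// Illustration only: hypothetical "nodes" handing a workflow item around.

const item = { json: { name: 'original' } };

// A node that copies before changing data (as n8n nodes are supposed to):
// the shared item stays untouched.
function carefulNode(input) {
  const copy = JSON.parse(JSON.stringify(input)); // deep copy
  copy.json.name = 'changed';
  return copy;
}

// A node that mutates the item it received by reference:
// the change leaks back into the shared data and messes up the workflow.
function carelessNode(input) {
  input.json.name = 'changed';
  return input;
}

const safeResult = carefulNode(item);
console.log(item.json.name);       // still 'original'
console.log(safeResult.json.name); // 'changed'

const unsafeResult = carelessNode(item);
console.log(item.json.name);       // now 'changed': the shared item was mutated
```

This is also why a node like the Function-Node defensively copies everything: it cannot know whether the user's code behaves like `carefulNode` or `carelessNode`.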

1 Like