I am running the Docker version of n8n. I followed the instructions for fixing memory issues (503 errors, workflows not loading, …), but they are still not resolved.
I have a workflow that receives very large JSON data via an HTTP response (file sizes of 30 MB - 100 MB). The workflow then calls the Code node to transform the JSON into a binary file, and the binary file is sent to an API via another HTTP Request node.
To check whether the binary data is actually being written to disk, I ran the following inside the container:
docker exec -it <container_name_or_id> /bin/sh
du -sh /home/node/.n8n/binaryData
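While in the container, it is also worth checking which binary data mode n8n is actually running with; I am assuming N8N_DEFAULT_BINARY_DATA_MODE=filesystem is the setting that routes binary data to disk:

# inside the container shell from above
env | grep -i binary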
I only see about 12 KB there, even when I execute the workflow with one of the large JSON payloads converted to binary.
The binaryData directory contains a meta subdirectory
The memory of the VM (4 vCPUs & 16 GB RAM) is completely maxed out (100%, with peaks of 200%) when I run this execution with a loop of just 8 large JSON files.
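For reference, the container’s live resource usage can be watched from the host while the workflow runs, using the standard Docker command:

# run on the Docker host, not inside the container
docker stats <container_name_or_id>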
Since this is also an issue when downloading and processing just a single file, I don’t think a sub-workflow will really help. I suspect an issue with the binary data path.
What am I missing? What should I check?
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
n8n version: 1.5.1
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Database (default: SQLite): default
n8n EXECUTIONS_PROCESS setting (default: own, main): default (at least I cannot remember changing it, and I don’t see any .env entry I set for it)
It wasn’t clear from the documentation that this is an enterprise feature.
I reduced the load somewhat by adding Wait nodes and implementing a main/sub-workflow execution structure, as I have no control over the data I am receiving (very long JSON).
The documentation says:
" By default, n8n uses memory to store binary data. Enterprise users can choose to use an external service instead. Refer to External storage for more information on using external storage for binary data."
By “external” I understood a third-party system, since this part of the documentation mentions “s3 to AWS S3”.
If this is true, please update the documentation so it is clear from the beginning.
The Code node can use a lot of resources, so I suspect that is where the issue is. Could you not use one of the existing nodes to create your binary data, or maybe an Execute Command node to write it out to a file?
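Not tested, but a rough sketch of what such an Execute Command node could run, assuming the source URL is reachable from inside the container (URL and output path are placeholders, and busybox wget is used since curl may not be present in the image):

# hypothetical Execute Command node command: stream the large JSON straight to disk
# instead of building it up in workflow memory first
wget -q -O /home/node/large.json "https://example.com/large.json"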
You could check for files in the binary data folder if you are saving execution data, or you could try downloading a large file and see if the memory increases.
Filesystem mode works during execution as well, so in theory, if you download a lot of files and look at the memory before and after enabling it, you should see a difference.
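For completeness, filesystem mode is enabled with an environment variable when the container is started; a minimal sketch, assuming the standard image and default data path (adjust names, ports, and volumes to your setup):

docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  -v ~/.n8n:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n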
@nu03in, regarding your ‘long json string binary’: is that a file you are downloading from somewhere, or are you generating it?