Strange issue preventing binary data mode “filesystem” from taking effect, which leads to “memory issues”

Describe the problem/error/question

I am running the Docker version of n8n. I followed the instructions for fixing memory issues (503 errors, workflow not loading, …), but they are still not resolved.

I have a flow that receives very large JSON data via an HTTP response (file sizes 30 MB to 100 MB). The flow then calls a Code node to transform it into a binary file. The binary file is then sent to an API via another HTTP Request node.

I am struggling heavily with memory issues. So far I have:

  • Followed every step from here:
  • Confirmed that my env variables are set correctly (I even retrieved them via the Set node as env values to check that they are really applied):
    • NODE_OPTIONS=--max-old-space-size=8000
    • N8N_DEFAULT_BINARY_DATA_MODE=filesystem
    • N8N_PAYLOAD_SIZE_MAX=150
    • N8N_BINARY_DATA_STORAGE_PATH=/home/node/.n8n/binaryData
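
To rule out typos in these variables (the NODE_OPTIONS line in particular is easy to get wrong), a sanity check is to print them from inside the running container. A sketch, with the container name as a placeholder and the filter demonstrated on sample env lines:

```shell
# Inside the container (container name/id is a placeholder):
#   docker exec -it <container_name_or_id> printenv | grep -E '^(N8N_|NODE_OPTIONS)'
#
# The grep filter keeps only the relevant variables, e.g.:
printf '%s\n' \
  'NODE_OPTIONS=--max-old-space-size=8000' \
  'N8N_DEFAULT_BINARY_DATA_MODE=filesystem' \
  'HOME=/home/node' \
  | grep -E '^(N8N_|NODE_OPTIONS)'
```

Anything missing from that output was not actually set in the container's environment.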

Still when I check in the docker container:

docker exec -it <container_name_or_id> /bin/sh
du -sh /home/node/.n8n/binaryData

du only reports about 12 KB, even while I execute the workflow with a large base64 JSON payload.
The binaryData directory contains only a meta subdirectory.

The memory of the VM (4 vCPUs and 16 GB RAM) is completely maxed out (100%, with peaks of 200%) when I run this execution with a loop of just 8 large JSON files.

As this also happens when downloading and processing just one file, I don’t think a sub-workflow will really help. I suspect an issue with the binary data path.

What am I missing? What should I check?

Information on your n8n setup

  • n8n version: 1.5.1
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default (at least I cannot remember changing it, and I don’t see any env variable set for it)
  • Operating system: Linux / Ubuntu 22.04 LTS

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

That option may not work if you don’t have an Enterprise plan license, but it’s better to ask someone from the n8n team (@Jon or @bartv, FYI).

The code node is very heavy (it basically doubles the data it has to process). Have you checked the option to receive the output as a file directly?
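
That doubling is visible from the base64 overhead alone: the encoded string inside the JSON is 4/3 the size of the decoded bytes, and during conversion both copies sit in memory at once. A quick illustration in a scratch directory (sizes chosen arbitrarily):

```shell
# base64 encodes every 3 raw bytes as 4 characters, so the encoded string
# in the JSON plus the decoded binary roughly doubles the working set.
dir=$(mktemp -d)
head -c 300000 /dev/urandom > "$dir/raw.bin"        # stand-in for the decoded file
base64 "$dir/raw.bin" | tr -d '\n' > "$dir/raw.b64" # the string as it sits in the JSON
wc -c < "$dir/raw.bin"   # 300000 bytes decoded
wc -c < "$dir/raw.b64"   # 400000 bytes encoded (4/3 of the raw size)
```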

Thanks for the hint!

It wasn’t clear from the documentation that this is an Enterprise feature.

I reduced the load a bit by adding Wait nodes and implementing a main/sub-workflow execution structure, as I have no control over the data I am receiving (very long JSON).

The documentation says:
“By default, n8n uses memory to store binary data. Enterprise users can choose to use an external service instead. Refer to External storage for more information on using external storage for binary data.”

By “external” I understood a third-party system, since “s3 to AWS S3” is mentioned in that part of the documentation.

If this is true, please update the documentation so it is clear from the beginning.

Using filesystem is supported in the community edition, so that should be fine.

Thanks for clarifying Jon!

Any pointers how to debug the issue?

The Code node can use a lot of resources, so I suspect that is where the issue is. Could you use one of the existing nodes to create your binary data, or maybe an Execute Command node to write it out to a file?

Sure, I can test another approach via the Execute Command node.
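
A minimal sketch of what an Execute Command node could run, assuming the base64 payload has already been written out to a file first (all paths here are placeholders):

```shell
# Decode an already-extracted base64 payload straight to a binary file,
# without building the decoded copy as a Buffer inside a Code node.
printf 'aGVsbG8gd29ybGQ=' > /tmp/payload.b64   # tiny sample payload
base64 -d /tmp/payload.b64 > /tmp/out.bin      # -d = decode
cat /tmp/out.bin
```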

My concern is more about the binary data directory staying empty during execution with big files. Do you have any debugging ideas?

How can I confirm whether filesystem mode is really working?

Hey @nu03in,

You could check for files in the binary data folder if you are saving execution logs, or you could try downloading a large file and see if the memory usage increases.
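
For comparison, this is roughly what du should report once binary data actually lands on disk; a sketch using a scratch directory standing in for /home/node/.n8n/binaryData:

```shell
# Create a scratch directory, drop a 1 MiB file into it (one "binary data"
# item), and confirm du reflects the growth rather than a few KiB of metadata.
BIN_DIR=$(mktemp -d)                                 # placeholder path
head -c 1048576 /dev/urandom > "$BIN_DIR/sample.bin" # 1 MiB payload
du -sk "$BIN_DIR"   # reports on the order of 1024 KiB, not ~12 KiB
```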

Hi @Jon

Do you mean that filesystem mode only works with finished executions, and that during an execution all data is stored in memory?

@Jon as mentioned in my first message:

Re: binary data folder:

Log is set to info.

Re: memory:

Hey @barn4k,

Filesystem mode works during execution as well, so in theory if you download a lot of files and look at the memory usage before and after enabling it, you should see a difference.

@nu03in, with your “long JSON string binary”, is that a file you are downloading from somewhere, or are you generating it?

@Jon I download it via the HTTP Request node and convert it into binary with the Code node, as it is a base64-encoded string inside the JSON.

Hey @nu03in,

So in that case I am not sure it would actually be saved as binary data, as you are creating it in the Code node.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.