FATAL ERROR while reading Binary file

Describe the issue/error/question

Hi,

I’m unfortunately running into a problem while setting up a workflow that reads a binary file from our server. After reading the file I would like to transform the data, but the workflow already breaks while reading the file.

The file is 45 MB and I receive the following error message in the logs:

2022-03-09T12:15:33.542601035Z <--- Last few GCs --->
2022-03-09T12:15:33.542640877Z
2022-03-09T12:15:33.542665295Z [7:0x564764c81e20] 8186253 ms: Mark-sweep (reduce) 601.5 (605.9) -> 601.5 (606.9) MB, 603.8 / 0.1 ms (average mu = 0.013, current mu = 0.006) low memory notification GC in old space requested
2022-03-09T12:15:33.542672072Z [7:0x564764c81e20] 8186869 ms: Mark-sweep (reduce) 601.5 (605.9) -> 601.5 (606.9) MB, 616.3 / 0.1 ms (average mu = 0.006, current mu = 0.000) low memory notification GC in old space requested
2022-03-09T12:15:33.542688070Z
2022-03-09T12:15:33.542692547Z
2022-03-09T12:15:33.542696430Z <--- JS stacktrace --->
2022-03-09T12:15:33.542700592Z
2022-03-09T12:15:33.542704470Z FATAL ERROR: v8::ArrayBuffer::NewBackingStore Allocation failed - process out of memory

I would assume this tells me that the instance is generally running out of memory, which means I need to add more memory to it. Or do you believe the problem is something else?

Is there a way to calculate the needed resources?

Thank you very much for the help and have a great day

Information on your n8n setup

  • n8n version 0.165.1
  • Postgres DB
  • n8n runs in Docker

Hey @Benjamin_Exner, this error message would indeed suggest a memory problem.

It’s really hard to predict exactly how much memory is required to process such a file, so my approach would be to simply test this out.

Start a local n8n instance on a machine with plenty of memory using a command like docker run -it --rm --name n8n -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n. Head to localhost:5678 and set up a quick example workflow processing your file. Run docker stats in a separate terminal and keep an eye on the memory consumption during the workflow execution.

You can also add the --memory=2048m option to the docker run command in order to limit your local container’s memory (adjust 2048m as needed; this is described here in more detail).
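For reference, the two steps above combined might look roughly like this; it's just a sketch, assuming the ~/.n8n volume path and the 2048m limit are example values you adjust for your own setup:

```
# Start a throwaway n8n container capped at 2 GB of memory (example value)
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  --memory=2048m \
  n8nio/n8n

# In a second terminal, watch the container's memory usage while the workflow runs
docker stats n8n
```

If the container crashes with the same out-of-memory error at a given limit, raise the limit step by step; the point at which the workflow goes through should give you a rough estimate of how much memory processing the 45 MB file actually needs.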


Thank you @MutedJam! Will test and check.


The first test showed that it is not even the memory but the CPU that is hitting its limit. I will make some adjustments and increase the resources.

Thank you!