File size (15344223093) is greater than 2 GB

Hi, I’m opening this case because in my latest workflow, when trying to read a binary file larger than 2 GB and then upload it to a Google Drive share, I get the error “File size greater than 2GB”. I read that this is due to a limitation of Node.js buffers, but is there any update on this?

In another post I saw this env configuration:
N8N_DEFAULT_BINARY_DATA_MODE=filesystem

Could it help in this case? I need to upload files larger than 2GB to Google Drive.

Share the output returned by the last node

File size (15344223093) is greater than 2 GB

Information on your n8n setup

  • n8n version: latest (queue mode)
  • Database you’re using (default: SQLite): PostgreSQL
  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker k8s

Thanks!

Hi, I tried the env variable shown below, but the error still persists:

N8N_DEFAULT_BINARY_DATA_MODE=filesystem

Any help on this case?
Thanks!

Hi @German_Bravo, I am not aware of a way to work around this, I am afraid.

Perhaps you want to consider using an external tool that isn’t subject to this limitation? In the past I’ve suggested rclone for such tasks, which you could control through the SSH or Execute Command nodes.

Hi @MutedJam, yes, I’m already working around this by uploading files to Google Drive with a Python script that I run from the Execute Command node.
The only problem with this Python script is that I have to provide credentials some other way, such as a JSON credentials file or environment variables mounted from K8s secrets; it would be great if n8n credentials could be used in expressions.

Do you know if there is any work in progress to address this limitation?
In Python I use Google Drive’s resumable uploads with a chunk size of 1 MB and it works perfectly; maybe something similar could be achieved in n8n?
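
For reference, this is roughly what that approach looks like as a minimal sketch with google-api-python-client; the file path, folder ID, and service-account JSON path are just placeholders:

```python
# Sketch of a chunked, resumable upload to Google Drive from Python.
# Credentials come from a service-account JSON file (mounted e.g. via a
# K8s secret); all paths and IDs below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = service_account.Credentials.from_service_account_file(
    "/secrets/gdrive-service-account.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

media = MediaFileUpload(
    "/data/backup.zip",
    mimetype="application/zip",
    resumable=True,          # use Drive's resumable upload protocol
    chunksize=1024 * 1024,   # 1 MB per request, so memory usage stays flat
)
request = drive.files().create(
    body={"name": "backup.zip", "parents": ["<folder-id>"]},
    media_body=media,
    fields="id",
)

response = None
while response is None:
    status, response = request.next_chunk()  # uploads one chunk per call
    if status:
        print(f"uploaded {int(status.progress() * 100)}%")
print("file id:", response["id"])
```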

Cheers!

That is a good question. I know one of our engineering colleagues has been working a lot on moving to a streaming approach for binary data transfers, though I’m not sure whether this will also address limitations like this one. @netroy, can you perhaps share some additional insights on this?

Yeah, unfortunately the Google Drive node is still using Node.js buffers rather than streams.
I’m working on switching it to Node streams, and also switching to the resumable upload API. Those changes should remove any file-size limits imposed by n8n in this case.
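
To illustrate the difference conceptually (just a sketch in Python for readability, not the actual node code; `session_url` stands in for an already-created resumable-upload session URI):

```python
# Buffer vs. stream, sketched in Python. The n8n fix does the equivalent
# with Node.js streams instead of reading the whole file into a Buffer.
import requests

path = "/data/backup.zip"                  # placeholder
session_url = "<resumable-session-uri>"    # placeholder

# Buffered (current behaviour): the entire file has to fit into memory,
# which is what triggers the "greater than 2 GB" error.
# body = open(path, "rb").read()

# Streamed: hand the HTTP client a file object; it reads and sends it in
# small chunks, so memory use stays constant regardless of file size.
with open(path, "rb") as f:
    resp = requests.put(session_url, data=f)  # Content-Length is taken from the file size
resp.raise_for_status()
```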

Hi @netroy, great, I’m going to stay tuned for that improvement!

Thanks to both of you for the replies.
Regards!

@German_Bravo
I’ve created a working branch that I’ve tested with N8N_DEFAULT_BINARY_DATA_MODE=filesystem.

Here is the draft pull request.

And here is a custom Docker image (n8nio/n8n:google-drive-streams), if you want to help test it.

Hello @netroy, I just created a container with the custom image you provided and started a test n8n instance.

How do I use the streaming update for Google Drive? The “Read Binary File” node keeps throwing the 2 GiB file size error.

Thanks a lot.

@German_Bravo The fixes were only for the Google Drive node. The Read Binary File node is still reading the entire file into memory first.
I’ll create another PR to fix that node.
This PR is already in review, and hopefully by the middle of next week we can have both nodes fixed.
I’ll ping you back once that is done.

Great news @netroy

Moving on to the google-drive-streams image test, I just modified the workflow:

The only difference is the file sizes: I ran zipsplit so that I have 4 files of at most 2 GiB each:


But now the Google Drive node is throwing an “unknown” error.
I don’t see any error in the Docker logs.

Thanks!

@German_Bravo I just pushed an updated image, including the fix for the “Read Binary File” node.
Can you please pull n8nio/n8n:google-drive-performance from Docker Hub again?

Also, please make sure that N8N_DEFAULT_BINARY_DATA_MODE=filesystem is set.

Excellent @netroy, I tried using the split files (4 files of 2 GB max) and I’m getting the same error at the Google Drive node:

No error logs in the Docker logs.

However, the Read Binary File node improvement worked:

It is now capable of reading files bigger than 2GiB.

Great job!

Now, only the Google Drive node keeps failing.

I should add that I also tried uploading a single 7 GB file, and the Google Drive node failed again with the same error:

No errors in the Docker logs.

@German_Bravo I just pushed another update that tackles memory issues in error handling and also switches over to Google Drive’s resumable-uploads API.
To make the progress of large uploads visible, I’ve also added logging after every chunk is uploaded.

Can you please pull the latest image (once it’s ready, in about 10 minutes) and try again?

Excellent @netroy, I just tested it by uploading a file of almost 3 GB to Google Drive and it worked successfully:

Amazing work!

Note: yesterday I created a new question about a similar problem with the AWS S3 node (upload feature); maybe it’s the same issue? Help, please:

Note that in the Docker logs the upload status is shown correctly:

Hi @netroy, first of all, happy new year!
Just so we can plan ahead: when will these improvements be available in a new n8n release?

Thanks!

@German_Bravo The PR was merged, and will be included in the next release.
