Hi, I'm opening this case because in my latest workflow, while trying to read a binary file larger than 2 GB and then upload it to a Google Drive share, I get the error “File size greater than 2GB”. I read that this is due to a limitation of the Node.js Buffer, but is there any update on this?
In another post I saw this environment variable configuration:
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
Could it help in this case? I need to upload files larger than 2 GB to Google Drive.
Share the output returned by the last node
File size (15344223093) is greater than 2 GB
Information on your n8n setup
n8n version: latest (queue mode)
Database you’re using (default: SQLite): PostgreSQL
Running n8n with the execution process [own(default), main]:
Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker k8s
Hi @German_Bravo, I am not aware of a way to work around this, I'm afraid.
Perhaps you want to consider using an external tool that isn't subject to this limitation? In the past I've suggested rclone for such tasks, which you could control through the SSH or Execute Command nodes (rough sketch below).
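A minimal sketch of what that could look like, assuming rclone is installed in the n8n container and a remote named gdrive has already been configured via rclone config (the remote name and paths are placeholders); the same rclone command could also be run directly from the Execute Command node:

```python
import subprocess

# Assumption: rclone is available in the container and a remote called
# "gdrive" was set up beforehand with `rclone config`.
subprocess.run(
    ["rclone", "copy", "/data/big-file.bin", "gdrive:uploads", "--progress"],
    check=True,  # raise an error if rclone exits with a non-zero status
)
```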
Hi @MutedJam, yes, I'm already working around this with a Python script that uploads the files to Google Drive, which I run from the Execute Command node.
The only problem with this Python script is that I have to provide credentials in an alternative way, such as a JSON credentials file or environment variables mounted from Kubernetes secrets; it would be great if n8n credentials could be used in expressions instead.
Do you know if there is any work in progress to address this limitation?
In Python I use Google Drive's resumable uploads with a 1 MB chunk size and it works perfectly; maybe something similar could be achieved in n8n? A trimmed-down sketch of the approach is below.
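In case it's useful to anyone else, here is a rough sketch of that script, assuming google-api-python-client is installed and a service-account JSON key is mounted into the pod; the key path, file name, and folder ID are placeholders:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumption: the service-account key is mounted via a Kubernetes secret.
creds = service_account.Credentials.from_service_account_file(
    "/secrets/gdrive-sa.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

# Resumable upload with 1 MB chunks, so the whole file never sits in memory.
media = MediaFileUpload(
    "/data/big-file.bin",
    chunksize=1024 * 1024,
    resumable=True,
)
request = drive.files().create(
    body={"name": "big-file.bin", "parents": ["<folder-id>"]},
    media_body=media,
    fields="id",
)

response = None
while response is None:
    status, response = request.next_chunk()
    if status:
        print(f"Uploaded {int(status.progress() * 100)}%")
print("File ID:", response.get("id"))
```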
That is a good question. I know one of our engineering colleagues has been doing a lot of work on moving to a streaming approach for binary data transfers, though I'm not sure whether this will also remove such limitations. @netroy, can you perhaps share some additional insights on this?
Yeah, unfortunately the Google Drive node is still using Node.js buffers rather than streams.
I'm working on switching it to Node.js streams and also to the resumable upload API. Those changes should remove any file-size limits imposed by n8n in this case.
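For anyone following along, here is a conceptual sketch (in Python purely for illustration; the actual change is in n8n's TypeScript code) of the difference between buffering a whole file and streaming it in chunks:

```python
def read_whole_file(path):
    # Buffer approach: the entire file has to fit in memory at once, and a
    # single Node.js Buffer tops out around 2 GB, which is where the error
    # in this thread comes from.
    with open(path, "rb") as f:
        return f.read()


def stream_file(path, chunk_size=1024 * 1024):
    # Streaming approach: only one chunk is held in memory at a time, so the
    # file size is no longer constrained by memory or buffer limits.
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk
```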
@German_Bravo The fixes were only for the Google Drive node. The Read Binary File node still reads the entire file into memory first.
I'll create another PR to fix that node.
This PR is already in review, and hopefully by mid next week we can have both nodes fixed.
I’ll ping you back once that is done.
@German_Bravo I just pushed an updated image, including the fix for the “Read Binary File” node.
Can you please pull n8nio/n8n:google-drive-performance from Docker Hub again?
Also, please make sure that N8N_DEFAULT_BINARY_DATA_MODE=filesystem is set.
@German_Bravo I just pushed another update that tackles memory issues in error handling and also switches over to Google Drive's resumable uploads API.
To be able to see the progress on large files, I’ve also added logging after every chunk is uploaded.
Can you please pull the latest image (once it’s ready in 10 minutes) and try again?