Best Practices for Handling Large File Transfers Outside of n8n?

Hey everyone,

I’m looking for advice on best practices for moving files outside of n8n when working with large or numerous file operations.

Here’s my situation:

  • I’m using n8n to automate creative workflows (generating and processing ~20+ image files at a time).

  • After generation, I need to download these images and upload them into Google Drive.

  • However, I’ve run into problems with memory overloads, binary data size limits, and slow or fragile file handling inside n8n itself.

To work around this, I’ve started executing Node.js scripts separately via Execute Command or HTTP Request nodes that download and move files directly on the server, outside of n8n.
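
For context, here’s a stripped-down sketch of the kind of script I mean. It assumes Node 18+ (for the built-in fetch) and takes the source URL and destination path as placeholder CLI arguments:

```js
// download.js: invoked from n8n's Execute Command node as
//   node download.js <sourceUrl> <destPath>
// Streams the HTTP response straight to disk so the file never
// has to sit in memory all at once.
const fs = require('fs');
const { Readable } = require('stream');
const { pipeline } = require('stream/promises');

async function main() {
  const [sourceUrl, destPath] = process.argv.slice(2);

  const res = await fetch(sourceUrl);
  if (!res.ok) throw new Error(`Download failed: HTTP ${res.status}`);

  // Convert fetch's web stream to a Node stream and pipe it to disk;
  // pipeline() handles backpressure and cleans up on errors.
  await pipeline(Readable.fromWeb(res.body), fs.createWriteStream(destPath));

  console.log(destPath); // Execute Command captures stdout for the workflow
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```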

What I’m wondering is:

  • What’s the best practice for handling file downloads/uploads at scale outside of n8n but still connected to the automation?

  • Is it common to spin off lightweight external scripts (Node.js, bash, etc.)?

  • Are there better patterns or microservice designs to offload file operations cleanly?

  • How do people handle Google Drive uploads reliably without bogging down n8n memory?

Ideally, I want the flow to stay fully automated, resilient, and low-memory.

Appreciate any wisdom, architecture patterns, or lessons learned here!

Thanks so much. :pray:

Hi,

It’s not a pattern or best-practice advice, but if the end destination is Google Drive and no additional processing is needed in n8n, why not create a Google Cloud Run function that can be called with parameters to download the file from an HTTP source and put it directly into Google Drive?
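
Roughly like this; just a sketch, assuming the @google-cloud/functions-framework and googleapis packages and Application Default Credentials with a Drive scope. The endpoint name and request fields are made up:

```js
// Cloud Run function: receives { sourceUrl, fileName, folderId },
// streams the file from the URL into Google Drive, and replies with the file ID.
const functions = require('@google-cloud/functions-framework');
const { google } = require('googleapis');
const { Readable } = require('stream');

functions.http('transfer', async (req, res) => {
  const { sourceUrl, fileName, folderId } = req.body;

  // Application Default Credentials: on Cloud Run this is the service account.
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/drive.file'],
  });
  const drive = google.drive({ version: 'v3', auth });

  const source = await fetch(sourceUrl);
  if (!source.ok) {
    return res.status(502).json({ error: `Fetch failed: HTTP ${source.status}` });
  }

  // googleapis accepts a Node stream as the media body, so the file
  // is never buffered whole in memory.
  const file = await drive.files.create({
    requestBody: { name: fileName, parents: folderId ? [folderId] : [] },
    media: { body: Readable.fromWeb(source.body) },
    fields: 'id',
  });

  res.json({ fileId: file.data.id });
});
```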

An Execute Command node still routes everything through the n8n instance.

Reg,
J.

Hi,

When dealing with file transfers in n8n, it’s possible to use the Node.js Stream API.

You can try this within a Code node, but if you encounter challenges such as high memory consumption or execution timeouts, a more robust approach might be to handle the streaming logic outside of n8n:

  • Set up a small autonomous Node.js server that handles file transfers using streams.
  • From n8n, you can trigger this server using an HTTP Request node, passing the source URL and destination information as parameters.
  • The Node.js server can then efficiently stream the file from the source (e.g., a URL) directly to the destination (e.g., Google Drive) without loading the entire file into memory.

This method leverages n8n’s automation capabilities while offloading resource-intensive streaming tasks to an external service, promoting better performance and scalability.
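
As a starting point, here is a minimal sketch of such a server using only Node’s built-in http and stream modules (Node 18+ assumed for fetch; the port, route, and JSON field names are illustrative). The destination here is a local file, but the same pipeline call works with any writable destination:

```js
// Minimal transfer server: POST {"sourceUrl": "...", "destPath": "..."}
// and it streams the source straight to the destination. stream.pipeline
// handles backpressure, so memory use stays flat regardless of file size.
const http = require('http');
const fs = require('fs');
const { Readable } = require('stream');
const { pipeline } = require('stream/promises');

const server = http.createServer(async (req, res) => {
  try {
    let body = '';
    for await (const chunk of req) body += chunk;
    const { sourceUrl, destPath } = JSON.parse(body);

    const source = await fetch(sourceUrl);
    if (!source.ok) throw new Error(`Fetch failed: HTTP ${source.status}`);

    await pipeline(Readable.fromWeb(source.body), fs.createWriteStream(destPath));

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true, destPath }));
  } catch (err) {
    res.writeHead(500, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: String(err) }));
  }
});

// n8n's HTTP Request node would POST to http://<host>:3000
server.listen(3000);
```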

Thanks, J. — that’s actually a really smart suggestion.

I hadn’t thought of offloading the whole file handling step to Cloud Run, but that makes a lot of sense, especially since I’m running into memory constraints within n8n when processing large files. If I don’t need to touch the file again within the workflow, routing it through a lightweight API sounds cleaner and more scalable.

Another issue is that I need to do further processing on the file inside n8n, so I need a way to pass the file IDs back once I upload to Google Drive…

Appreciate the insight—might give that route a shot.

Thanks, Anthony—really appreciate the detailed suggestion.

Funny enough, we’re already experimenting with a lightweight Node.js service and using fs, path, and the Google Drive API directly to manage file transfers. Your point about leveraging streams is spot on—we’ve been leaning into the Node.js Stream API to avoid memory bottlenecks and it’s made a noticeable difference.

I still need a clean way to return the file IDs from Google Drive so I can do further management of the files in my n8n workflow; a sketch of what I have in mind is below.
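
For completeness, this is the shape I’m imagining for that missing piece: a minimal sketch assuming Express and the googleapis Node client with Application Default Credentials. The /upload route and field names are made up:

```js
// Endpoint on our Node.js service: upload a local file to Drive and
// hand the new file ID back to the calling n8n workflow.
const express = require('express');
const fs = require('fs');
const path = require('path');
const { google } = require('googleapis');

const app = express();
app.use(express.json());

app.post('/upload', async (req, res) => {
  try {
    const { localPath, folderId } = req.body;

    const auth = new google.auth.GoogleAuth({
      scopes: ['https://www.googleapis.com/auth/drive.file'],
    });
    const drive = google.drive({ version: 'v3', auth });

    // Stream from disk so the file never sits fully in memory.
    const file = await drive.files.create({
      requestBody: {
        name: path.basename(localPath),
        parents: folderId ? [folderId] : [],
      },
      media: { body: fs.createReadStream(localPath) },
      fields: 'id, webViewLink',
    });

    // The HTTP Request node parses this JSON, so downstream nodes can
    // reference the file as {{ $json.fileId }} and keep managing it in Drive.
    res.json({ fileId: file.data.id, link: file.data.webViewLink });
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000);
```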

n8n is great for orchestration, but offloading heavy I/O to a purpose-built service feels like the right architectural balance. Appreciate you validating that direction—it’s a huge help.

Best,

Shawn