How to Save Multiple Files in Bulk to External Storage in n8n

Hello everyone,

Does anyone know how I can save a list of files to external storage in n8n in bulk? In my workflow, I have 1500 files to save, and currently, I’m saving them one by one, which is taking over 5 hours. :sweat_smile:

Has anyone worked with FTP in n8n or found a solution to take a list of image URLs and save them all at once? I’m using Bunny.net for storage.

Thanks in advance for any help!

Information on your n8n setup

  • n8n version: 1.37.3
  • Database (default: SQLite): cloud
  • n8n EXECUTIONS_PROCESS setting (default: own, main): cloud
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: macOS 14.2.1

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hello @Eduardo_Moscatelli,

According to the Bunny API, you can’t upload multiple files in a single request.

However, you can upload them in parallel, though you will eventually get a 429 error (throttling) because of their API rate limits. You have to implement retry behavior for that error.
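In n8n itself, the HTTP Request node’s retry-on-fail setting may cover simpler cases. Outside of n8n, the retry logic would look roughly like this (an untested sketch; the URL, key, and function names are just placeholders, and Node 18+ `fetch` is assumed):

```ts
// Sketch: retry an upload when Bunny answers 429 (Too Many Requests).
// uploadUrl and accessKey are placeholders.
async function putWithRetry(uploadUrl: string, body: Buffer, accessKey: string, maxRetries = 5): Promise<void> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(uploadUrl, {
      method: "PUT",
      headers: { AccessKey: accessKey, "Content-Type": "application/octet-stream" },
      body,
    });
    if (res.ok) return;                               // uploaded successfully
    if (res.status !== 429) {
      throw new Error(`Upload failed with HTTP ${res.status}`);
    }
    // Throttled: wait with exponential backoff before trying again.
    const delayMs = 1000 * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Still throttled after all retries");
}
```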

The general logic would be to use a sub-workflow to upload the files, and in the parent workflow disable the Execute Workflow node’s “Wait for sub-workflow completion” option.


@barn4k Thanks a lot for the feedback. Do you think there is a way for n8n to FTP everything to Bunny at once? How does n8n handle FTP, and is there a risk of a memory crash with a large number of files?

There is no way to upload everything at once, as Bunny doesn’t support that; it’s not a limitation of n8n.

About memory: use a sub-workflow. After each execution of the sub-workflow completes, its memory is released.

However, you shouldn’t run all 1500 files in parallel; memory will be exhausted very quickly. Instead, pass items in batches with the Loop Over Items node and add a Wait node to pause between loop iterations.
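In plain code, the batching-plus-wait pattern that Loop Over Items and Wait implement looks roughly like this (a sketch; the batch size, wait time, and function names are arbitrary placeholders):

```ts
// Sketch: process URLs in batches with a pause between batches,
// mirroring what Loop Over Items + Wait do inside n8n.
const BATCH_SIZE = 10;
const WAIT_MS = 30_000;

async function processInBatches(urls: string[], handle: (url: string) => Promise<void>): Promise<void> {
  for (let i = 0; i < urls.length; i += BATCH_SIZE) {
    const batch = urls.slice(i, i + BATCH_SIZE);
    await Promise.all(batch.map(handle));             // upload one batch in parallel
    if (i + BATCH_SIZE < urls.length) {
      await new Promise((resolve) => setTimeout(resolve, WAIT_MS)); // breathe between batches
    }
  }
}
```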

Your main workflow should contain only the Loop node and the nodes needed to get the URLs (not the URL content itself), so it can pass each URL to the sub-workflow.

The sub-workflow should download the URL content, pass it to the Bunny API, and then exit.
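Per URL, the sub-workflow effectively does something like this (a sketch of the two HTTP calls; the storage zone, key, and endpoint shape are placeholders and should be double-checked against Bunny’s Storage API docs):

```ts
// Sketch of the per-URL work the sub-workflow performs:
// download the image, then PUT it to Bunny Storage.
const STORAGE_ZONE = "my-zone";                        // placeholder
const STORAGE_KEY = process.env.BUNNY_STORAGE_KEY ?? ""; // placeholder

async function mirrorToBunny(imageUrl: string, fileName: string): Promise<void> {
  const download = await fetch(imageUrl);
  if (!download.ok) throw new Error(`Download failed: HTTP ${download.status}`);
  const bytes = Buffer.from(await download.arrayBuffer());

  const upload = await fetch(`https://storage.bunnycdn.com/${STORAGE_ZONE}/${fileName}`, {
    method: "PUT",
    headers: { AccessKey: STORAGE_KEY, "Content-Type": "application/octet-stream" },
    body: bytes,
  });
  if (!upload.ok) throw new Error(`Upload failed: HTTP ${upload.status}`);
}
```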

So the main workflow will look like this:

and the sub-workflow will look like this:

That’s not optimal, however, as the next iteration will be executed regardless of throttling.

The best approach is to use a message broker (like RabbitMQ) to store the URLs. You can then configure the RabbitMQ Trigger to process no more than X items at once, so at any given time there will be no more than X executions.
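Outside of n8n, the broker idea looks roughly like this with the amqplib package (a sketch; the queue name, connection URL, and in-flight limit are placeholders). The prefetch limit is what caps how many uploads run at once:

```ts
// Sketch: publish the 1500 URLs to a queue, then consume them with a
// prefetch limit so only a few uploads are in flight at any time.
import amqp from "amqplib";

const QUEUE = "bunny-uploads";
const MAX_IN_FLIGHT = 5;

async function publishUrls(urls: string[]): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  for (const url of urls) ch.sendToQueue(QUEUE, Buffer.from(url), { persistent: true });
  await ch.close();
  await conn.close();
}

async function consumeUrls(handle: (url: string) => Promise<void>): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.prefetch(MAX_IN_FLIGHT);                          // at most MAX_IN_FLIGHT unacked messages
  await ch.consume(QUEUE, async (msg) => {
    if (!msg) return;
    try {
      await handle(msg.content.toString());
      ch.ack(msg);
    } catch {
      ch.nack(msg, false, true);                       // requeue on failure (e.g. a 429)
    }
  });
}
```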

@barn4k
Thank you for the explanations. In your suggested workflow, wouldn’t it end up being the same? It would still send files to Bunny one by one, and considering I have 1500 files, it would take around 5 hours to save everything.

My idea was to take these 1500 files, split them into batches of about 50 files, and create “branches” that would run “replicated sub-workflows” dedicated to saving each batch file by file. This way, I would have 30 sub-flows, each saving around 50 files (even if they save one at a time).

The BIG DIFFERENCE here is that I would have 30 sub-workflows running in parallel! Do you see any downside to this approach, aside from managing Bunny’s API rate limits? In terms of n8n, would this approach create significant performance issues?

I need a solution to save as many files as possible to Bunny as quickly as possible. Bunny has an FTP upload method using FileZilla (https://support.bunny.net/hc/en-us/articles/115003780169-How-to-upload-files-to-your-Bunny-Storage-zone), but I’m not sure how I could implement this via n8n.

Thanks in advance for your time!

No, it won’t be the same. With the settings shown, the workflow will send 10 URLs every 30 seconds to be uploaded to the Bunny API.
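As a rough estimate, 1500 URLs at 10 per iteration is 150 iterations; with a 30-second wait between them that is about 150 × 30 s ≈ 75 minutes, assuming each batch’s uploads finish within the wait.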

As for FTP, you can try the FTP node.
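For reference, the equivalent upload from a script would look roughly like this with the basic-ftp package (a sketch; the host, zone name, password, and paths are placeholders based on the Bunny support article linked above, so verify them there):

```ts
// Sketch: upload a local file to Bunny Storage over FTP.
import { Client } from "basic-ftp";

async function uploadViaFtp(localPath: string, remotePath: string): Promise<void> {
  const client = new Client();
  try {
    await client.access({
      host: "storage.bunnycdn.com",                    // region-specific hosts may differ
      user: "my-zone",                                 // placeholder: storage zone name
      password: process.env.BUNNY_STORAGE_KEY ?? "",   // placeholder: storage zone password
      secure: false,
    });
    await client.uploadFrom(localPath, remotePath);
  } finally {
    client.close();
  }
}
```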
