OneDrive "Copy File" node limiting to 99 items when processing large batches (541 files)

Hello everyone,

I’m facing a strange issue with the Microsoft OneDrive Node (Copy operation).

The Scenario: I have 541 files scattered across multiple folders in my OneDrive. My workflow needs to gather all of them and copy them into a single, newly created folder.

The Workflow:

  1. I use a Postgres node to get the IDs and names of these 541 files.

  2. I use a Loop Over Items node to process them.

  3. Inside the loop, I have a Copy File node followed by a Wait node (set to 10 seconds) to avoid Rate Limiting.

  4. I’ve tried processing in small batches (20 items at a time) and also one by one.

The Problem: Even though the n8n execution log shows that all 541 items were processed successfully (no errors reported), when I check the destination folder, there are exactly 99 files. It seems like the process “caps” at 100 items (99 files + the folder itself).

What I’ve already checked:

  • No duplicate filenames (all files have unique names).

  • No errors in the execution log (all nodes return green).

  • I added a long Wait node (10s) to respect Microsoft’s async copy time.

Has anyone experienced this “99/100 items” limit before? Is this a pagination issue within the node or a specific OneDrive API limitation for async copies?

How can I ensure all 541 files are copied?

My workflow:

Output returned by the last node

The node returns 202 Accepted with an operation-monitor URL in the Location header, so the copy process is asynchronous. When I run all 541 items, only 99 finish. This suggests either throttling or that the background operations are being queued/dropped by the Microsoft Graph API due to the high volume of concurrent requests.

Information on your n8n setup

  • n8n version: 1.123.4
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 22.04

Hi @veggi

This is not an n8n issue.

The OneDrive Copy File node uses Microsoft Graph’s asynchronous copy API.
When it returns 202 Accepted, the copy is only queued, not completed.
Microsoft Graph limits how many async copy operations can run at the same time, and excess operations are silently dropped. That’s why only ~99 files finish, even though n8n shows all items as successful.

Best solution: copy files sequentially and wait for each copy to finish before starting the next one.
After each Copy File call, poll the operation’s Location URL until the status is completed.
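A minimal sketch of that polling loop as a standalone script (the `status` field values are Graph's documented ones; `fetch_status` is a stand-in for the HTTP GET against the Location URL, injected so the logic can be shown without a live tenant):

```python
import time

def wait_for_copy(monitor_url, fetch_status, timeout=120.0, interval=2.0):
    """Poll a Graph async-operation monitor URL until the copy finishes.

    fetch_status(url) should GET the Location URL and return the parsed
    JSON body, e.g. {"status": "inProgress", "percentageComplete": 40}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = fetch_status(monitor_url)
        status = body.get("status")
        if status == "completed":
            return body
        if status == "failed":
            raise RuntimeError(f"copy failed: {body}")
        time.sleep(interval)
    raise TimeoutError(f"copy did not finish within {timeout}s")
```

Inside n8n, the equivalent is an HTTP Request node that GETs the `location` value returned by the Copy File node, feeding an If node that loops back through a short Wait until the status is `completed`.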

Using a simple Wait node alone is not sufficient, because it does not wait for the async copy to complete.

Hi everyone, thanks for the explanation.

Actually, I’ve already tried the sequential strategy (copying one by one) and even implemented logic to poll the Location URL to check the status, but the process is extremely slow and still fails intermittently with large batches (around 433 files). I’ve also tried the $batch API to reduce the number of requests, but Microsoft’s asynchronous limit seems to be a hard wall at this volume.

Honestly, I’m about to give up on OneDrive for this specific use case. My goal is to periodically search for a list of files via item_id (scattered across different folders), gather them into a new folder created on the fly, and email the link.

Does anyone have suggestions for a low-cost alternative to OneDrive that handles this type of bulk file ‘gathering’ better? Maybe Google Drive, Dropbox, or an S3-compatible storage that plays nicer with n8n and large-scale copy operations?

Hi @veggi

The most reliable low-cost option is S3-compatible storage (Backblaze B2, Wasabi, MinIO, etc.). These services support server-side copy operations that are synchronous and don’t have that hidden async queue limit. In practice this means you can loop over hundreds of files and copy them one by one without any of them silently disappearing.
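As a sketch of what that looks like against an S3-compatible API (bucket and prefix names are made up; `client` would be a `boto3` S3 client pointed at the provider’s endpoint — `copy_object` is a synchronous server-side copy, so once the call returns without raising, the destination object exists):

```python
def gather_files(client, bucket, source_keys, dest_prefix):
    """Server-side copy each source key under a new 'folder' prefix.

    copy_object is synchronous: when it returns, the destination
    object exists. There is no async queue to poll.
    """
    copied = []
    for key in source_keys:
        # keep only the filename, place it under the new prefix
        dest_key = f"{dest_prefix.rstrip('/')}/{key.rsplit('/', 1)[-1]}"
        client.copy_object(
            Bucket=bucket,
            Key=dest_key,
            CopySource={"Bucket": bucket, "Key": key},
        )
        copied.append(dest_key)
    return copied
```

With boto3 you would build the client as `boto3.client("s3", endpoint_url=...)` using the endpoint your provider documents, then call `gather_files` with the list of keys from your database query.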

Google Drive and Dropbox can work better than OneDrive for a few hundred files, but they still have API quotas and rate limits. They are fine if your volume is moderate and you add some delay between requests, but for large periodic batches they will eventually hit limits too.
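For the moderate-volume case, the usual pattern is a fixed delay between calls plus exponential backoff when the API answers 429. A hedged sketch (`do_copy` and the `RateLimited` exception are placeholders for whichever client you use; real clients surface 429 and the Retry-After header in their own way):

```python
import time

class RateLimited(Exception):
    """Placeholder for a client's 429 error; carries Retry-After seconds."""
    def __init__(self, retry_after=1.0):
        self.retry_after = retry_after

def copy_with_backoff(items, do_copy, delay=0.5, max_retries=5, sleep=time.sleep):
    """Copy items one by one, backing off exponentially on rate limits."""
    for item in items:
        for attempt in range(max_retries):
            try:
                do_copy(item)
                break
            except RateLimited as err:
                # honour Retry-After, doubling the wait on each attempt
                sleep(max(err.retry_after, 1.0) * (2 ** attempt))
        else:
            raise RuntimeError(f"gave up on {item} after {max_retries} retries")
        sleep(delay)  # pacing between successful calls
```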


Thanks for the tip! I decided to go with Backblaze and partially refactored my project’s architecture. Everything is flowing much better now, and I’m confident this setup will work perfectly.

Thanks again!