Upload all files from FTP to Nextcloud

Hello there,

I am new to n8n and looking forward to some guidance. I am probably just not getting it.

I am trying to retrieve a list of files from an FTP server and then upload them to a folder on Nextcloud. I have currently set up a simple workflow where I just list all the files from the FTP folder, which works fine. I now want all those files to be uploaded to Nextcloud.

In the Nextcloud upload settings I have to set a specific file name - but I do not want to preselect a specific file. I just want it to upload all the files found on the FTP server. I also cannot find an expression that represents all the file paths from the previous FTP list. I thought I would find that under the Previous Node’s output, but that is not the case.

I am using the latest docker version of n8n running on unraid with default DB. Thanks!


Hi @bvelte, n8n keeps all data in memory during the execution of a workflow (for now - we actually rolled out some changes to this with 0.156.0 which aren’t enabled by default yet). This means that if you’re working with a large amount of data, you will eventually max out the available memory, causing your instance to crash.

That said, you technically can upload all files on an FTP path to Nextcloud. You’d just need to download them first from your FTP server rather than just listing them:

Example Workflow

Hi there, thanks a lot for your quick reply and explanation. I copied your example and added my FTP server to the first stage. In the Copy node it has filled the path with /path/. - executing it gives me a “Not a regular file” error.

Ah, that’s most likely because your FTP server returns more than just files when listing a directory, whereas my test server only returned actual files when listing the example directory.

You’d need to filter these additional items out to avoid trying to download a directory, for example using the IF node.
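If the IF node feels fiddly, the same filtering could also be done in a Function node. Here is a minimal sketch of the logic in plain JavaScript, assuming each listed item carries a `name` field (the `dropDotEntries` helper and the example file names are made up for illustration, not part of n8n itself):

```javascript
// Hypothetical helper mirroring the IF node's job here:
// drop the pseudo-entries "." and ".." before the FTP Download step.
function dropDotEntries(items) {
  return items.filter((item) => !['.', '..'].includes(item.json.name));
}

// Rough shape of the items as the FTP node's list operation emits them
// (only the field relevant here; names are made up)
const listed = [
  { json: { name: '.' } },
  { json: { name: '..' } },
  { json: { name: 'video.mp4' } },
];

const files = dropDotEntries(listed);
// files now contains only the video.mp4 item
```

Inside an actual Function node the body would simply be `return items.filter((item) => !['.', '..'].includes(item.json.name));`.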

@MutedJam yeah, the file list shows me a file named . and one named .. for whatever reason. I managed to filter those out with the IF node, resulting in 2 valid files in the list. But what exactly do I have to put into the path for the FTP download to now download the 2 remaining files?


An expression like {{$json["path"]}} should do the trick, as in the example workflow I shared. This would reference the path of each item the FTP node receives:
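To illustrate why a single expression covers every file: n8n evaluates an expression once per incoming item, so with two items the Download step effectively runs twice. A rough sketch of that per-item resolution in plain JavaScript (not n8n’s actual expression engine; the paths are made up):

```javascript
// Two items as the (filtered) FTP list might emit them; paths are made up
const incomingItems = [
  { json: { path: '/path/video1.mp4' } },
  { json: { path: '/path/video2.mp4' } },
];

// The expression {{$json["path"]}} resolves against each item's json
// in turn, so the Download node effectively runs once per item:
const resolvedPaths = incomingItems.map((item) => item.json.path);
```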


@MutedJam Thanks, that worked like a charm. I was confused because the expression preview only showed a single filename in the overview, so I thought it would only handle one file. Still much to learn.

Anyways, I really appreciate your help!

@MutedJam As I am thinking about it I would have a more complex requirement as well, maybe you can add something here. I would like to only upload files from FTP > Nextcloud that are not already uploaded there. Can I somehow check/compare, which files would be new?

Or the other way around: Can I set up a trigger, that checks for new files on FTP and then only give them to Downloads/Nextcloud?

A trigger node for new FTP files doesn’t exist yet in n8n, I’m afraid. You could, however, use the Cron or Interval nodes to regularly fetch the file list from both Nextcloud and your FTP server and then use the Merge node to filter out the items that don’t exist in Nextcloud yet, for example like so:

(Again, you would need to filter out any directories like . or .. as before, in case your FTP server returns them.)

Example Workflow
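The filtering the Merge node performs in its “Remove Key Matches” mode can be sketched in plain JavaScript like this (an illustration of the idea, not the node’s actual implementation; it assumes both lists expose a comparable `name` field, and all file names are made up):

```javascript
// Input 1: files currently listed on the FTP server (names made up)
const ftpFiles = [
  { json: { name: 'a.mp4' } },
  { json: { name: 'b.mp4' } },
];

// Input 2: files already present in Nextcloud
const nextcloudFiles = [
  { json: { name: 'a.mp4' } },
];

// "Remove Key Matches" on the key "name": keep only the FTP items
// whose name does not appear in the Nextcloud list
const existing = new Set(nextcloudFiles.map((item) => item.json.name));
const toUpload = ftpFiles.filter((item) => !existing.has(item.json.name));
// toUpload now holds only the files missing from Nextcloud
```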

Thanks a lot, I will give it a try!

Where does the FTP download save the files exactly? Is there a way to delete them from the server the n8n Docker container is hosted on?

Hey @bvelte,

The files sit in memory until the workflow has ended. They won’t be written to the local disk unless you use the Write Binary File node.

Got it, thanks!


I implemented that workflow and now I am stuck with the following error in the last Nextcloud upload node:

ERR_FR_MAX_BODY_LENGTH_EXCEEDED

I already increased the RAM for Node.js (even though I do not know why it consumes this much). In the current workflow it finds an 18 MB video file and then fails at the Nextcloud upload with the MAX_BODY_LENGTH error above.

@MutedJam I am experiencing some strange behaviour. Every time the workflow runs via Cron it copies duplicates to Nextcloud. When I execute the workflow manually, it always stops at the Merge node. I do not get why, and it pretty much defeats the whole point of the workflow.

My workflow looks exactly like your example, except that the Start node is a Cron node in my case.

Hi @bvelte, as mentioned before n8n isn’t the ideal tool for syncing large amounts of binary data between two sources. A stuck workflow could suggest a memory problem, especially if there are additional error messages.

So you might want to consider a different tool for this job. In the past I’ve suggested rclone for this and I think it might also handle your case quite well. It can connect to both FTP and Nextcloud (via WebDAV) and could be controlled by n8n using the Execute Command (if rclone is available on the n8n server/in the n8n docker container) or SSH nodes.

There’s also the experimental variable N8N_DEFAULT_BINARY_DATA_MODE=filesystem which you can set on the latest n8n versions to prevent n8n from loading binary data into memory and use the filesystem instead. This would, however, not change the behaviour of the Merge node. Could you share the data returned by both nodes connecting to your Merge node’s inputs when your workflow stops, and confirm which (if any) errors you are seeing?


Hi @MutedJam, thanks for your quick reply. I understand that it might not be the best solution, but I am really not handling that much data - maybe 20 MB in one workflow run.

The strange thing is that it generally causes no errors, but it keeps adding files to Nextcloud that are already there, which could add up to quite some data over time. When I try to reproduce it by executing the workflow manually, it always stops at the RemoveKeyMatches node, which is totally correct. In the next Cron/Interval run I see duplicates again - as if it was not fetching the latest file list from Nextcloud every time. So no errors, but I will try to get the log output the next time it happens.

Would N8N_DEFAULT_BINARY_DATA_MODE=filesystem just work with your last workflow example (that is what I’m using) or do I have to change anything in the nodes?

Thanks in advance!