S3 node: get all files recursively

Hello n8n Community,

I'm very happy to be using n8n these days, great job done here. It helps me automate most of my tasks.

I’m trying to build an S3 object storage scraper in order to find new files.
As a constraint, I have no control over how the files are pushed (I mean the folder hierarchy), but I still need to retrieve these files.

So in simple words: "I need to get all files, no matter the parent folder key."

For example, let’s imagine my bucket has 2 objects:

my_bucket : 
        - folder1/folderA/file1.csv
        - file2.pdf

If i do "S3>file>get all" specifying the bucket, i only get file2.pdf
if i do "S3>file>get all" specifying the bucket + folder key option as "folder1/folder2/" , i only get 
file1.csv

I’d like to do “S3>file>get all” specifying only the bucket and actually get all files in the bucket.

I was thinking of looping with “s3>folder>get all”, but that isn’t really helping as it doesn’t work either ( Not all folders in S3 listed in AWS S3 node folder get all operation? )

Hopefully you guys have a tip or trick for getting all files recursively in a bucket?

Hi @William_Guerzeder, welcome to the community!

Unfortunately, I don’t have any great advice for you here. I’ve converted your question into a feature request so you (and other users) can vote for having a “recursive” option added to the node.

In the meantime, you could consider using the HTTP Request node instead of the AWS S3 node in order to manually specify the request you’d like to send to S3 and then parse the response accordingly.
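To illustrate the HTTP Request approach: S3's ListObjectsV2 API already returns every key in the bucket, at any folder depth, when called without a `prefix` or `delimiter` parameter; the only extra work is following the continuation tokens across pages. Below is a minimal Python sketch of that pagination logic. Note that `fetch_page` is a hypothetical stand-in for the actual signed HTTP request (a real call needs SigV4 authentication, which the HTTP Request node's AWS credentials can handle for you).

```python
# Sketch of the pagination loop behind S3's ListObjectsV2 API.
# fetch_page is a hypothetical stand-in returning parsed responses
# in the same shape as the real API.

def list_all_keys(fetch_page):
    """Collect every object key in a bucket, regardless of folder depth.

    Listing with no Prefix and no Delimiter returns ALL objects
    recursively; we just follow ContinuationToken from page to page.
    """
    keys = []
    token = None
    while True:
        page = fetch_page(continuation_token=token)
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        token = page["NextContinuationToken"]


# Fake two-page response mimicking the example bucket above.
_pages = [
    {"Contents": [{"Key": "folder1/folderA/file1.csv"}],
     "IsTruncated": True, "NextContinuationToken": "t1"},
    {"Contents": [{"Key": "file2.pdf"}], "IsTruncated": False},
]

def fake_fetch(continuation_token=None):
    return _pages[0] if continuation_token is None else _pages[1]

print(list_all_keys(fake_fetch))
# → ['folder1/folderA/file1.csv', 'file2.pdf']
```

In an n8n workflow the same loop could be built with an HTTP Request node (GET on the bucket endpoint with `list-type=2`) feeding back its `NextContinuationToken` until `IsTruncated` is false.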

Thanks for your answer @MutedJam. Yeah, it sounded like a new feature, given there's no easy way to achieve this. I thought some might have found workarounds. I’ll have a look at the options using the HTTP Request node.