Hi everybody! I am a new n8n user and so far I am having a really promising experience. I have a question regarding one task I need to automate.
I have a Python scraper and an S3 bucket where I need to store the data. For each record the scraper fetches (via the requests library) I need to check whether that specific id already exists in S3. If it does, I just skip the record; otherwise I create a folder for it and save the record there. How can I solve this task?
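For the S3 side, here is a minimal sketch of that check-then-save logic using boto3. The bucket name, the API URL, and the `id` field are placeholders for your own values, and note that S3 has no real folders, so a key prefix like `"<id>/"` plays the role of the folder:

```python
import json

import boto3
import requests

BUCKET = "my-scraper-bucket"  # placeholder bucket name

s3 = boto3.client("s3")

def record_exists(record_id: str) -> bool:
    """Return True if any object already exists under this record's prefix."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{record_id}/", MaxKeys=1)
    return resp.get("KeyCount", 0) > 0

def save_record(record_id: str, record: dict) -> None:
    """Write the record as JSON under a key prefix named after its id."""
    s3.put_object(
        Bucket=BUCKET,
        Key=f"{record_id}/record.json",
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )

# Example flow: fetch records, skip ids already stored in S3.
records = requests.get("https://example.com/api/records").json()  # placeholder URL
for record in records:
    rid = str(record["id"])
    if record_exists(rid):
        continue  # id already in S3, skip it
    save_record(rid, record)
```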
This should be easy enough: before your Python script, hook up a Code node that generates an array of date strings, then use a SplitInBatches (Loop Over Items) node to process each date one by one. You can then pass the current date to your Python scraper as an argument or query parameter; see the sketch below.
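A rough illustration of that idea in Python: the Code node would emit one item per date string, SplitInBatches would hand them to the next node one at a time, and each date would be forwarded to the scraper. The 7-day range, the `scraper.py` name, and the `--date` flag are assumptions for the example:

```python
from datetime import date, timedelta

# Build the array of date strings the loop will iterate over
# (here: the last 7 days; adjust the range to your needs).
dates = [(date.today() - timedelta(days=i)).isoformat() for i in range(7)]

# In n8n, SplitInBatches feeds these items downstream one at a time.
# Outside n8n, the equivalent loop would look like this:
for d in dates:
    # e.g. subprocess.run(["python", "scraper.py", "--date", d])
    # or pass d as a query parameter in the scraper's request
    print(f"would scrape date {d}")
```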