With all respect, I don’t see how you can expect anyone to give a decent answer to this “my car makes a noise, shall I add salt to the soup?” type of question.
Please provide some context and the workflow: how many files, what processing delays, which actual errors, how often, etc.
I tried to share the workflow in my original post, but the n8n page froze and prompted me to refresh the page.
This time, I can post the workflow, which processes up to ~400 files in batches of 25 to 40.
Hi, in my view the bottleneck is all these API calls, including the waits. It depends largely on the number of files and directories.
I have checked the API, and there might be another approach via Git trees (you can ask it to return a recursive listing).
The benefit would be that you can run your loop over just the filenames (in theory), wherever they sit in the repo (if you request the listing recursively).
Based on that, you can compile a list of the files you want/need, and then do the processing step afterwards based on that list (which doesn’t require so many API calls).
Anyway, I don’t know if the GitHub node supports trees directly; otherwise you’d need an HTTP Request node (rough sketch below).
Other steps should be able to speed up as well (again, with trees).
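For reference, this is roughly what that trees call looks like; a single request returns every path in the repo. The owner, repo, and branch names below are placeholders, and the same URL and headers would go into an HTTP Request node:

```typescript
// Sketch: fetch a recursive file listing in one GitHub API call.
// OWNER, REPO, and BRANCH are placeholders; adjust for your repo.
const OWNER = "your-org";
const REPO = "your-repo";
const BRANCH = "main";

async function listRepoFiles(token: string): Promise<string[]> {
  // One call returns the whole tree instead of one call per directory.
  const url = `https://api.github.com/repos/${OWNER}/${REPO}/git/trees/${BRANCH}?recursive=1`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const data = await res.json();
  // Keep only "blob" entries (files); "tree" entries are directories.
  return data.tree
    .filter((entry: { type: string }) => entry.type === "blob")
    .map((entry: { path: string }) => entry.path);
}
```

Note that for very large repos GitHub truncates the response (the payload includes a `truncated: true` flag), so check for that if you’re near the limits.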
Interesting alternative, thanks.
I’m a novice though, and not ready to recreate almost the entire workflow, which took me days to set up. I like learning as I build, so this Git trees alternative will be part of a future iteration, once I’m more confident.
For now, I’m wondering about the impact of changing the batch size, and which number to set.
The Wait node is already added; easy peasy.
Hi, I don’t think it will make a big difference, but you can try. If it’s memory related, it will help. Can you see how many times the loop is performed (how many API calls are made)?
To close this chapter:
I gave up on hitting the GitHub API in a loop; that’s clearly bad practice.
Instead, I git clone the repo, do my file processing, then delete the local copy. Fast, and easier to maintain.
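For anyone who lands here later, a minimal sketch of that clone-process-delete approach (the repo URL and the per-file processing are placeholders; a shallow clone keeps it fast):

```typescript
// Sketch of the clone -> process -> delete approach.
// REPO_URL and the per-file processing are placeholders.
import { execSync } from "node:child_process";
import { mkdtempSync, readdirSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const REPO_URL = "https://github.com/your-org/your-repo.git";

const workDir = mkdtempSync(join(tmpdir(), "repo-"));
try {
  // Shallow clone: latest commit only, no history, one network round trip.
  execSync(`git clone --depth 1 ${REPO_URL} ${workDir}`, { stdio: "inherit" });

  // Walk every file locally -- no API calls, no rate limits, no waits.
  for (const file of readdirSync(workDir, { recursive: true })) {
    // ... your per-file processing here ...
  }
} finally {
  // Always remove the local copy, even if processing throws.
  rmSync(workDir, { recursive: true, force: true });
}
```

The `--depth 1` flag skips the commit history, so the download stays small even for old repos.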