n8n workflow overwhelmed - recommendations on these 3 optimization options?

Hey, I’m dealing with:

  • A large number of files,
  • Recursion into subdirectories,
  • Heavy HTTP requests.
  1. My workflow uses a Loop Over Items node: should I change the batch size from 1 to 5?
  2. Should I add a Wait node after each batch?
  3. Should I set the HTTP Request node to Retry on Fail = true with maxTries: 3, delay: 1000? (See the sketch after this list for what I understand that setting to mean.)
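For option 3, this is roughly what I understand "Retry on Fail" with maxTries: 3 and delay: 1000 to amount to, as a rough TypeScript sketch. `fetchWithRetry` is a hypothetical helper for illustration only, not n8n's actual implementation:

```typescript
// Conceptual sketch of "Retry on Fail" with maxTries: 3, delay: 1000.
// Not n8n code; just illustrates the retry-with-delay behaviour.
async function fetchWithRetry(url: string, maxTries = 3, delayMs = 1000): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res;
    } catch (err) {
      lastError = err;
      if (attempt < maxTries) {
        // Wait before the next attempt (the "delay" setting).
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```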

Information on your n8n setup

  • n8n version: 1.83.2
  • Database (default: SQLite): none
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Intel Core i7, 16GB RAM.

Hi,

With all respect, I don’t see how you can expect anyone to give a decent answer to this “my car makes a noise, shall I add salt to the soup” type of question.

Please provide the workflow and some context: how many files, processing delays, actual errors, how often, etc.

reg,
J.


I tried to share the workflow in my original post, but the n8n page froze and prompted me to refresh the page.
This time I can post the workflow, which processes up to ~400 files, in batches of 25 to 40.

Hi, in my view the problem is all these API calls, including the waits. It depends largely on the number of files and directories.

I have checked the API and there might be another approach via trees (you can ask it to return a recursive listing).

The benefit would be that you can run your loop over just the filenames (in theory) wherever they are (if you request the tree recursively), compile a list of the files you want/need from that, and then do the processing step based on the list, which doesn’t require so many API calls.

Anyway, I don’t know whether the GitHub node supports trees directly; otherwise you’d need an HTTP Request node (see the sketch below).
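For illustration, here is a minimal TypeScript sketch of what that HTTP request could look like, using the GitHub Git Trees API with `recursive=1` to get the whole file list in one call. OWNER, REPO and BRANCH are placeholders, and a private repo would also need an Authorization header:

```typescript
// List every file in a repo with a single call to the Git Trees API,
// instead of one API call per directory.
const OWNER = "my-org";    // placeholder
const REPO = "my-repo";    // placeholder
const BRANCH = "main";     // placeholder; the API formally expects a tree SHA,
                           // but a branch name is resolved in practice

async function listAllFiles(): Promise<string[]> {
  const url = `https://api.github.com/repos/${OWNER}/${REPO}/git/trees/${BRANCH}?recursive=1`;
  const res = await fetch(url, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const data = await res.json();
  // Keep only blobs (files); "tree" entries are directories.
  return data.tree
    .filter((entry: { type: string; path: string }) => entry.type === "blob")
    .map((entry: { path: string }) => entry.path);
}
```

You could then loop only over the paths you actually need and fetch just those files, instead of walking every subdirectory.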

Other steps like this should be able to speed up as well (again thanks to trees).

Reg,
J.


Interesting alternative, thanks.
I’m a novice though, and not ready to rebuild almost the entire workflow, which took me days to set up. I like learning as I build, so this Git trees alternative will be part of a future iteration, once I’m more confident.

For now, I’m wondering about the impact of changing the batch size, and which number to set.
The Wait node is already added - easy peasy.

Hi, I don’t think it will make a big difference, but you can try. If it’s memory related it will help. Can you see how many times the loop runs (how many API calls are made)?

The workflow fetches around 40 subfolders, so I guess 40 calls.

I’ve set the batch size to 2 and the wait time to 0.5 s.

I get this error in the Terminal console every single time:

The session “r3dhtfxh0c” is not registered

Plus: [screenshot of the error]

Current workflow state when stuck: [screenshot]

To close this chapter:
I gave up on hitting the GitHub API in a loop; that’s clearly bad practice.
Instead, I git clone the repo, do my file processing, then delete the local copy. Fast and easier to maintain (roughly the pattern sketched below).
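For reference, a minimal TypeScript sketch of that clone → process → clean up pattern as a standalone Node script (run alongside the workflow, e.g. via an Execute Command node). REPO_URL and WORK_DIR are placeholders, and the shallow clone is my own assumption to keep it fast:

```typescript
// Clone once, process locally, delete the checkout afterwards.
import { execSync } from "node:child_process";
import { readdirSync, rmSync } from "node:fs";

const REPO_URL = "https://github.com/my-org/my-repo.git"; // placeholder
const WORK_DIR = "/tmp/repo-checkout";                     // placeholder

// Shallow clone: only the latest commit, which keeps it fast for large histories.
execSync(`git clone --depth 1 ${REPO_URL} ${WORK_DIR}`, { stdio: "inherit" });

// Process the files locally instead of making one API call per file.
// (recursive readdir needs a recent Node version, roughly 18.17+.)
for (const entry of readdirSync(WORK_DIR, { recursive: true, withFileTypes: true })) {
  if (entry.isFile()) {
    // ...do the per-file processing here...
  }
}

// Delete the checkout once done.
rmSync(WORK_DIR, { recursive: true, force: true });
```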

Keep It Simple.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.