Should we convert this thread into an issue?
Hi @mriioos, welcome!
I'm listening!
I just saw you mention this topic in another category, so I'll reply here.
Yes, I'm able to reproduce this,
but IMO this is an incorrect design, especially when working with loop nodes:
You've already closed the first loop, so you should link the next node to the done branch, like this:
IMO when working with loops, you need to be fully aware of how they behave, and design your workflow accordingly.
First of all, thank you very much for the response, but I still have a couple of questions:
About nesting loops:
Doesn't this bloat the memory if the first loop produces large files? How would I process large files if I also require a loop to process them and I can't use it inside the first loop?
I had to use this subworkflow-nesting trick in other workflows when I ran into this nested-loop issue (I found the trick suggested online as a workaround).
About subworkflow behaviour:
Shouldn't the behaviour of a subworkflow execution be exactly the same as not using the subworkflow at all? At least in terms of item processing, similar to how a function would behave in a code flow. Isn't that the point of subworkflows?
About my use case:
I get your proposed solution, but I don't see how to apply it to my use case, where I use a default loop (with no Loop Over Items node to produce the batches):
In this case, I couldn't use a Loop Over Items node because I needed to feed the result of "Split out directories" back into the loop, to get a recursive reading of the directories. If I am not mistaken, this is not possible with the Loop Over Items node.
As a note: my workflow is intended to list all directories recursively and process all files inside them; that is why "Split out directories" must feed its results back into the loop (the stop criterion is that no more directories are left).
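The recursive listing described above can be sketched as a queue-based crawl. This is only an illustration of the logic the workflow models, not n8n code; `listDir` is a hypothetical stand-in for whatever node or API actually lists a remote directory:

```javascript
// Sketch of the recursive directory crawl, assuming a `listDir(dir)` function
// that returns entries with `path` and `isDirectory` fields (both names are
// hypothetical stand-ins for the real node/API).
function crawl(root, listDir) {
  const queue = [root];      // directories still to visit
  const files = [];          // collected file paths
  while (queue.length > 0) { // stop criterion: no more directories left
    const dir = queue.shift();
    for (const entry of listDir(dir)) {
      if (entry.isDirectory) {
        queue.push(entry.path); // feed the subdirectory back into the loop
      } else {
        files.push(entry.path);
      }
    }
  }
  return files;
}
```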
I tried to explain myself as best as I can, but I might fail to do so in some things, as English is not my mother language. If anything is not clear, feel free to ask.
Again, thank you very much.
I have made a second example that illustrates the isolated issue with Loop Over Items in my use case:
In this example, you can see that Loop Over Items can't be used because it doesn't accept a feedback loop.
Not really, my goal is to extract all "leaf items" (items that have no subitems) and loop over each of them, one by one, afterwards. Which in this case means I must process the true branch of the if statement.
We can think of this the other way around: how would I extract all leaf items from the pinned data? (With the condition that the workflow must use recursion; as in the original use case, an async recursion chain is the only way to crawl all subdirectories.)
Anyway, I still don't understand why logic encapsulated inside a sub-workflow doesn't behave the same way as not encapsulating it at all (at least in the logical aspect of it).
Hmm, I think at this point you shouldn't use loop nodes.
It's not that they can't do it (I still believe they can, it just requires some over-engineering), but the logic is getting a bit complex with all the conditions, feedbacks, nesting, and aggregation.
I think I'd go for JS or Python code; it's much easier and more appropriate here.
I rarely use Code nodes, I prefer visual nodes I can see right in front of me, but I think I'd use one here for sure to manipulate and extract everything I need from the JSON.
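As a minimal sketch of that Code-node approach, leaf items could be collected recursively like this (the `children` field name is an assumption about the data shape; adapt it to the actual pinned data):

```javascript
// Hypothetical sketch: collect every "leaf item" (an item with no sub-items)
// from nested JSON. The `children` field name is an assumption.
function collectLeaves(item, leaves = []) {
  const children = item.children ?? [];
  if (children.length === 0) {
    leaves.push(item); // no sub-items: this is a leaf
  } else {
    for (const child of children) {
      collectLeaves(child, leaves); // recurse into sub-items
    }
  }
  return leaves;
}
```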
Yeah, the only problem is the naturally async behaviour of recursively accessing a remote directory. I am not sure how this could be done. I feel like Loop Over Items nodes should not work like this, but I guess that is just an opinion. I surrender, I may just use my workaround and pray haha.
Anyway, thanks for the help, both to you and @Wouter_Nigrini.



