Loop Nodes failing (routing to 'Done' branch) when receiving multiple execution waves

n8n Version: 1.121.3 (Cloud)

Goal: Retrieve ~2,600 products from Shopify (via 11 paginated HTTP requests), process them in batches, and upload them to Scoro.

Problem Description: I have a pagination loop using an IF node and an Extract Cursor node. This loop correctly triggers the HTTP Request node 11 times. The data then passes through a Split Out node.

The issue arises because the Split Out node releases 11 separate “execution waves” (one for each page of 250 products) instead of waiting for all pages to be collected.

  • The first batch of 250 products enters the downstream Loop Over Meta Data and Loop Over Scoro nodes and processes correctly.

  • However, by the time the second batch of 250 products arrives from the Shopify pagination loop, the downstream Loop nodes have already finished their first cycle and are in a “completed” state.

  • Consequently, every subsequent batch (the remaining ~2,400 products) is immediately routed to the “Done” branch of the downstream Loop nodes without being processed.
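To make the intent concrete: what I want is for the collection phase to finish completely before anything downstream runs. In plain JavaScript terms (a standalone sketch of the cursor-pagination pattern, not actual n8n node code; `fetchPage` and its return shape are hypothetical), that would look like:

```javascript
// Sketch: collect every page first, then hand downstream ONE complete array.
// `fetchPage(cursor)` is a hypothetical fetcher returning { items, nextCursor };
// Shopify's real API signals the next page via a page_info cursor.
async function fetchAllPages(fetchPage) {
  const products = [];
  let cursor = null;
  do {
    const { items, nextCursor } = await fetchPage(cursor);
    products.push(...items); // accumulate this page's products
    cursor = nextCursor;     // null/undefined cursor means no more pages
  } while (cursor);
  return products;           // downstream only ever sees the full set
}

// Mock fetcher simulating 3 pages of 2 products each.
const mockPages = {
  start: { items: ["a", "b"], nextCursor: "c2" },
  c2:    { items: ["c", "d"], nextCursor: "c3" },
  c3:    { items: ["e", "f"], nextCursor: null },
};
async function mockFetch(cursor) { return mockPages[cursor ?? "start"]; }

fetchAllPages(mockFetch).then((all) => console.log(all.length)); // 6
```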

What I have tried:

  1. Merge Node (Append): This still passes 11 separate items one by one, triggering the downstream nodes 11 times.

  2. Python Node (Native): Attempted to use _items to aggregate the batches into a single array, but it continues to output 11 separate items (each with an empty or partial list) because it is being triggered 11 times by the incoming waves.
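For reference, the aggregation I was trying to get out of the Code node boils down to this flattening logic (a standalone JavaScript sketch, not n8n Code-node syntax; the `pages` item shape is illustrative). The catch is that this only works if the node actually receives all pages in one run, which is exactly what the execution waves prevent:

```javascript
// Collapse N page-items into a single flat array of products.
// Each page-item is assumed to look like { json: { products: [...] } }.
function flattenPages(pages) {
  return pages.flatMap((page) => page.json.products);
}

// Example: three pages of two products each collapse into one array of six.
const pages = [
  { json: { products: [{ id: 1 }, { id: 2 }] } },
  { json: { products: [{ id: 3 }, { id: 4 }] } },
  { json: { products: [{ id: 5 }, { id: 6 }] } },
];
console.log(flattenPages(pages).length); // 6
```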

Questions:

  1. How can I “choke” or “gate” the workflow so that the downstream nodes (Filter and Scoro Loops) only start once after all 11 pages of products have been collected?

  2. Is there a specific “Wait for all” logic or node configuration that can consolidate 11 execution waves into a single item containing the full array of 2,635 products?

Hey @PBI_RS!

Have you tried the “Reset” option in the 2nd Loop (with an expression relative to the last node run before the 2nd Loop)? E.g.:

{{ $prevNode.name === 'SetJson' }}

You could also try a sub-workflow for that part (this is what n8n recommends, I believe), avoiding inline loops that need downstream processing, since resource consumption may be high.
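However you wire it up, the "wait for all" gating you're after conceptually reduces to this (a plain-JavaScript sketch, not an n8n node; `makeGate` and the page count are illustrative): hold every incoming wave, emit nothing until the expected number of pages has arrived, then release one consolidated batch.

```javascript
// Gate that buffers incoming pages and only releases a single consolidated
// array once the expected number of pages has arrived.
function makeGate(expectedPages) {
  const pages = [];
  return function receive(page) {
    pages.push(page);
    if (pages.length < expectedPages) return null; // still waiting
    return pages.flat(); // all pages in: emit one flat batch downstream
  };
}

// Example: three waves of two products each.
const gate = makeGate(3);
console.log(gate([1, 2])); // null (waiting)
console.log(gate([3, 4])); // null (waiting)
console.log(gate([5, 6])); // [1, 2, 3, 4, 5, 6]
```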

Cheers!

P.S. I found this relevant topic from a respected community staff member: