Data loss in aggregation

Hello everyone,
I’m facing an issue in my workflow logic: data processing constrained by an external API limit. I would appreciate guidance on the standard, robust way to handle this scenario in n8n.
**The Setup**

- **Initial Collection:** I successfully collect a total of 250 raw data item IDs.
- **Batch Constraint:** The next critical step calls an external API to fetch detailed metadata, but this API has a strict limit of 50 item IDs per request.
- **Splitting:** I use a Function node to split the single item containing the 250 IDs into 5 separate items/batches (each containing 50 IDs).
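The splitting step above can be sketched as plain JavaScript. This is a hypothetical illustration, not the poster's actual code: in an n8n Code node the IDs would come from `$input` and the function's return value would be the node's output items; the field name `ids` is an assumption.

```javascript
// Hypothetical sketch of the splitting step: chunk an array of IDs into
// batches of up to `batchSize`, emitting one { json } object per batch,
// which is the item shape an n8n Code node is expected to return.
function splitIntoBatches(ids, batchSize) {
  const batches = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    // Each batch becomes one workflow item carrying up to 50 IDs.
    batches.push({ json: { ids: ids.slice(i, i + batchSize) } });
  }
  return batches;
}

// Example: 250 IDs -> 5 items of 50 IDs each.
const allIds = Array.from({ length: 250 }, (_, i) => i + 1);
const items = splitIntoBatches(allIds, 50);
console.log(items.length);             // 5
console.log(items[0].json.ids.length); // 50
```

If the split itself produces 5 items, every node downstream should normally run once per item, which is why the single-execution symptom described below usually points at node settings (e.g. "Execute Once") rather than the chunking logic.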
**The Problem**

- **Execution Failure:** The subsequent HTTP Request node, which should execute 5 times (once per batch item), sometimes executes only once, so only 50 detailed items are retrieved instead of 250.
- **Aggregation Failure:** Even when all 250 items are successfully retrieved, I apply filtering (an If node) and then use a final Code node (`$input.all()`) to combine the filtered data and generate a single file (CSV). This final step consistently produces a file with only one row of data, indicating that the 250 items were never aggregated into a single set after the filtering.
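For context, the aggregation step can be sketched as follows. This is an assumed illustration (the field names `id` and `name` and the function name `buildCsv` are hypothetical): in the n8n Code node, `$input.all()` returns every item that reached the node, so a one-row CSV typically means the code built its output from a single item instead of mapping over the whole array.

```javascript
// Hypothetical sketch of the final CSV-building step, written as a
// plain function over an array of n8n-style { json } items.
function buildCsv(items) {
  const header = 'id,name';
  // One CSV row per incoming item; building rows from items[0] only
  // would reproduce the single-row symptom described above.
  const rows = items.map((item) => `${item.json.id},${item.json.name}`);
  return [header, ...rows].join('\n');
}

const filteredItems = [
  { json: { id: 1, name: 'alpha' } },
  { json: { id: 2, name: 'beta' } },
];
const csv = buildCsv(filteredItems);
// csv contains a header line plus one row per item.
```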
**The Core Questions**

1. **Guaranteed Batch Execution:** In n8n, what is the most reliable node (a standard node, not custom code) to use after generating the 5 batches to guarantee that the following HTTP Request node executes exactly 5 separate times? (My current Function node seems unreliable here.)
2. **Robust Aggregation:** After the HTTP Request node processes the 5 batches, the data flows to a filter (If node). What is the standard configuration or node type required immediately after the filter branches (the ‘True’ or ‘False’ output) to collect and merge all filtered items back into a single item list before sending them to a final Code node for CSV generation?
3. **Workflow Structure:** How would you structure this part of the workflow to ensure all 250 items are processed and correctly aggregated for the final output file?

Hello @Super_Friend,

The most standard way is to use the HTTP Request node with its pagination option enabled.

Please, share the workflow.

You can select all nodes with Ctrl+A and copy them with Ctrl+C. Then press the `</>` button and paste the content with Ctrl+V.