"search_string": "(\"CEO\" OR \"Chief Executive Officer\") AND (\"strategic management\" OR \"digital transformation\" OR \"AI management\")",
The Loop receives 7 items as its input, so it runs 7 times (assuming a batch size of 1), and the Aggregate node gets called 7 times, producing an output each time.
Most likely you will need to do the aggregation after loop completion, on its Done branch.
Hard to tell without having a chance to look into the workflow’s nodes.
Could you post it using the </> button?
With your current setup the Merge node receives the Aggregate's output twice: as a single batch of 7 items from the Done branch, and as 7 batches of 1 item each from the Aggregate directly. Essentially it is the same data, just organized in different ways. Still, only the first item (or rather the first batch, with a single item in it) from the Aggregate will be used; the remaining 6 will be ignored, since there are no further batches from the Done branch for them to be paired with.
If I put the Aggregate on the Done branch, it only receives the last item, not the full output. The question is how to temporarily store the data I get from the AI, so that after all items are processed I can send them all combined as a single JSON to the API…
I had a look, but it did not help. I have been using n8n for only 2 days now…
It seems I do not understand the tricky n8n logic.
In that particular example the loop is constructed with an If node and all works fine.
But in my case the loop is different. At the beginning I do not know the number of items, and there is no “terminating” item…
I am trying to infer business logic from the Code nodes but failing.
What is the rule for AI node outputs aggregation?
Why do you need to aggregate the data?
1. Input: We receive a large JSON payload from an API, which contains multiple resumes.
2. Batching: We split the JSON into batches of 1 (i.e., we process resumes one at a time).
3. Processing: For each individual resume we extract specific pieces of information. [Problem]: We need to store the extracted data temporarily for each resume.
4. Final Step: Once all resumes have been processed, we reassemble the extracted data into a bulk JSON structure, similar to the original format, and send it to another API via an HTTP node.
Problem Description:
The issue arises in Step 3: we need a way to collect the extracted information from each resume and then send the complete, processed dataset as a single bulk JSON (in the same shape as the original input). The destination API does not accept individual resume data; it only accepts the full processed package.
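The intended data flow can be sketched in plain JavaScript, independent of n8n. The payload shape, the `resumes` property name, and the `extractFields` helper are illustrative assumptions, not the actual workflow (in the real workflow the extraction step is the AI node):

```javascript
// Step 1 (Input): a bulk payload with multiple resumes (shape assumed).
const payload = {
  resumes: [
    { id: 1, text: "Alice, 5 years in sales" },
    { id: 2, text: "Bob, 3 years in engineering" },
  ],
};

// Hypothetical per-resume extraction; stands in for the AI step.
function extractFields(resume) {
  return { id: resume.id, summary: resume.text.slice(0, 10) };
}

// Steps 2-3 (Batching + Processing): handle one resume at a time,
// collecting each result — this is what the Loop's Done branch gives you.
const processed = payload.resumes.map(extractFields);

// Step 4 (Final Step): reassemble one bulk JSON body for the HTTP node.
const body = { resumes: processed };
```

The point of the sketch is that the "temporary storage" is just the accumulated array: nothing needs to be stashed manually if the whole collection is only assembled after the loop finishes.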
@Olek Thank you! It works. However, I do not understand the architecture: HOW did the output data from node “Code 3” appear in node “Aggregate”? There is no link (at least visually). Or is this something like a global variable? I cannot find the answer in the documentation.
All data from each Loop iteration gets accumulated in the Loop node, and once all iterations are complete the entire set gets sent further down the pipeline over the Done branch.
Think of the Loop as the .map Array method: you end up with an array of the same length as the input, where each input item gets transformed by the callback function (the nodes within the loop).
Once you have this transformed array ready you can do anything you need with it. So the Aggregate node performs a simple action: { userProvidedPropName: loopOutputArray }
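A rough JavaScript sketch of that analogy (the variable names and the `myField` property are illustrative, not n8n APIs):

```javascript
// Input items entering the Loop node.
const input = [1, 2, 3];

// Loop: each item passes through the inner nodes (the "callback").
const transformed = input.map((n) => n * 2);

// Aggregate on the Done branch: wrap the whole accumulated array
// under a single user-provided property name.
const aggregated = { myField: transformed };
```

The callback here is trivial, but in the workflow it stands for everything between the Loop's loop output and its input, e.g. the Code and AI nodes.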