How do I use a generated list with over 1000 items in every loop without regenerating it every time?
For release notes purposes I want to get the PRs of a release, find the PRs that mention a Linear ticket, and then enrich the PR data with the data from that ticket. To do this I need to filter the Linear tickets by the ticket number I got from the PR.
Problem: I do not want to pull over 1000 tickets in every loop iteration.
How do I do that?
A Merge node within the loop does not work, because both inputs need to be active in each iteration.
What you’re running into happens because any node placed inside the loop runs on every iteration. The Loop Over Items node processes items one at a time, so if you fetch the full Linear ticket list inside the loop, every PR triggers a fresh fetch. The fix is to move that fetch outside the loop, run it once, and reuse its output.
Simplest pattern to avoid refetching 1000 tickets each time:
1. Fetch the full ticket list once, before the loop.
2. Pass it through a Set or Code node if you need to reshape it; its output stays accessible for the rest of the execution without refetching.
3. Start the Loop Over Items node on your PRs.
4. Inside the loop, use an expression to look up the matching ticket from the cached list instead of calling the Linear API again.
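A minimal sketch of the lookup step, assuming the full ticket list was fetched by a node named `Get All Tickets` and that each Linear item carries an `identifier` field (both names are hypothetical, not from the original post). Inside a Code node you could index the cached items once and then resolve each PR with an O(1) lookup:

```javascript
// Sketch for an n8n Code node. Assumptions (not from the original post):
// - a prior node named 'Get All Tickets' fetched the full Linear list once;
//   in n8n its items would be read with: const tickets = $('Get All Tickets').all();
// - each ticket item looks like { json: { identifier: 'ENG-42', title: '...' } }.

// Build a Map keyed by ticket identifier so each loop iteration
// is an in-memory lookup instead of a fresh API call.
function buildTicketIndex(tickets) {
  return new Map(tickets.map((t) => [t.json.identifier, t.json]));
}

// Resolve one PR's ticket from the index:
function lookupTicket(index, ticketId) {
  return index.get(ticketId) ?? null; // null when the PR has no matching ticket
}
```

In an expression field the same idea can be written inline, e.g. `{{ $('Get All Tickets').all().find(t => t.json.identifier === $json.ticketId) }}` — again, the node and field names are placeholders for whatever your workflow actually uses.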
So instead of:
Loop PRs → Fetch all tickets → filter per PR
Do:
Fetch all tickets once → cache them → Loop PRs → lookup ticket from cached list
Because a node’s output stays available for the rest of the execution (reachable via `$('Node Name')` expressions), this lets you avoid pulling the 1000+ tickets again and again.
@Jonas_Wagner You can match tickets efficiently either with a `$jmespath` expression in a Set node that filters the cached list, or with a small Code node that searches the pre-fetched tickets using `Array.find()`. Both approaches reuse the initial fetch, avoiding redundant API calls and improving performance.
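Here is a sketch of the Code-node variant. The node name `Get All Tickets` and the fields `ticketId`/`identifier` are assumptions for illustration, not names from the thread:

```javascript
// Hypothetical n8n Code node ("Run Once for All Items" mode).
// In n8n the two inputs would be read like this:
//   const prs = $input.all();                    // PR items from the loop branch
//   const tickets = $('Get All Tickets').all();  // cached Linear tickets
// The matching logic itself is a plain function:
function enrichPRs(prs, tickets) {
  return prs.map((pr) => {
    // find the cached ticket whose identifier matches this PR's ticket number
    const ticket = tickets.find(
      (t) => t.json.identifier === pr.json.ticketId // field names are assumptions
    );
    // attach the full ticket (or null) to the PR item
    return { json: { ...pr.json, ticket: ticket ? ticket.json : null } };
  });
}
// The Code node would end with:
// return enrichPRs($input.all(), $('Get All Tickets').all());
```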
IMO, this seems to be a workflow architecture and data mapping issue.
I don’t believe you need loops here, as iteration is handled natively “by design”.
The key is controlling the data flow using Aggregate or Merge nodes.
Unfortunately, I don’t have sample data to build a sample workflow, but looking at the screenshot, here is how I would approach it:
You likely need two branches after getting the GitHub PRs:
A branch to keep the original GitHub data.
A branch to extract all Linear Ticket IDs from the PRs in one go (no loop needed).
Then use a Merge node to join the results by matching fields.
The main idea is to map/move/combine/merge the data correctly throughout the workflow. I recommend avoiding loops as much as possible here, as they introduce unnecessary complexity.
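The “extract all Linear ticket IDs in one go” branch could be sketched as a single Code-node pass, assuming Linear ticket references follow the usual `ABC-123` pattern and that the IDs appear in the PR title or body (the field names `title`/`body` are assumptions):

```javascript
// One pass over all PRs, no loop node needed.
// Assumes Linear IDs look like 'ENG-42' (team key, dash, number).
function extractTicketIds(prs) {
  const pattern = /\b[A-Z][A-Z0-9]+-\d+\b/g;
  const ids = new Set(); // Set deduplicates IDs mentioned by several PRs
  for (const pr of prs) {
    const text = `${pr.title ?? ''} ${pr.body ?? ''}`;
    for (const match of text.match(pattern) ?? []) ids.add(match);
  }
  return [...ids];
}
```

The resulting ID list can then feed the Linear fetch once, and the Merge node joins tickets back onto the original PR branch by the matching ID field.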