Reducing the number of Runs into ONE

Describe the problem/error/question

What is the error message (if any)?

No error message.

Please share your workflow

Share the output returned by the last node

The issue is that this scenario gets 1 run with x items as INPUT but, after this part of the flow, produces SEVERAL runs with fewer items each as OUTPUT. However, it is important to end up with 1 run, because I need to output the data in ONE CSV file.

Information on your n8n setup

  • Running n8n via: CLOUD
  • Operating system: Win11

Hello and welcome to the community!

Please share your workflow using the </> button in this forum.


Somehow it shows as simple JSON, but I will try.
In the meantime, here are some screenshots:

  • As you can see, there are 49 items entering that loop but 264 items in 6 batches exiting.
  • I want 1 batch with 49 items (of course) after the retry function is finished.
  • I know this loop makes no sense; it was just an attempt.

A better solution is this one:


However, this one still produces throttled requests, but I can adjust the batch size and the wait time, so it would be okay.
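As a side note, the batch-size-plus-wait idea boils down to something like the following plain JavaScript (purely illustrative, outside of n8n; `processInBatches`, `callApi`, `batchSize`, and `waitMs` are made-up names):

```javascript
// Generic sketch of "process in batches with a pause in between" to stay
// under a rate limit; in n8n, Loop Over Items + Wait approximate this idea.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processInBatches(items, callApi, batchSize = 10, waitMs = 2000) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(callApi)))); // one batch at a time
    if (i + batchSize < items.length) await sleep(waitMs);    // pause before the next batch
  }
  return results; // everything ends up in one flat list
}
```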

The best one was this one:


This version just hits the HTTP request with the data, filters out the items that have been throttled, and keeps retrying them until no item is throttled anymore. This would be the fastest, as it uses the maximum rate before throttling. Additionally, this solution is universally usable because it simply runs into the throttle and retries.
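To make that retry-and-filter step concrete, here is a minimal sketch of what it could look like in a Code node set to "Run Once for All Items". The node placement and the assumption that throttled responses show up on the item as `json.statusCode === 429` are mine; adjust to however your HTTP Request node actually reports throttling:

```javascript
// Hypothetical Code node ("Run Once for All Items") placed after the HTTP Request.
// Assumption: throttled items carry json.statusCode === 429; adapt as needed.
const throttled = [];
const succeeded = [];

for (const item of $input.all()) {
  if (item.json.statusCode === 429) {
    throttled.push(item);   // these would be routed back for another attempt
  } else {
    succeeded.push(item);   // these are final results
  }
}

// In the actual workflow an IF node (or a second branch) would send the
// throttled items back to the HTTP Request; here we just pass on the rest.
return succeeded;
```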

The issue with this, however, is that it also outputs several RUNS with a subset of items in each run, but I really need 1 run with all items in it.
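One possible way to get back to a single run is sketched below, assuming a Code node ("Run Once for All Items") is placed after the retry loop and that "HTTP Request" stands in for whatever node produced the per-run output. It uses n8n's `$("<node>").all(branchIndex, runIndex)` helper and relies on the assumption that requesting a run index that does not exist throws, which is what stops the loop:

```javascript
// Hypothetical "Merge Runs" Code node ("Run Once for All Items").
// "HTTP Request" is a placeholder for the node whose runs should be collected.
const merged = [];

for (let runIndex = 0; ; runIndex++) {
  try {
    // branch 0, run runIndex of the upstream node
    merged.push(...$("HTTP Request").all(0, runIndex));
  } catch (error) {
    break; // no more runs to read
  }
}

return merged; // a single run containing every item, ready for one CSV export
```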

Maybe this helps clarify things?!
BR
Seb

Here is how you post it on the forum properly:


Okay, here is the sub-flow:

I like this one because it keeps going until no more items are throttled.
But as mentioned, it outputs too many runs, and I need ONE run.

Here is the other solution, but that one keeps producing throttled items.

There can be multiple strategies to solve this.
It depends on the data structure, the content (e.g. error messages vs. useful payload), what transformations are applied (e.g. where the extra items come from), and what exactly you want to have as an output (data structure, requirements for valid items, item compounding/grouping strategy).

From what I understand, you may want to strip items that actually reflect errors. But that’s only a hypothesis. If you could share some sample data to illustrate the case and to play around with, it would be nice. I cannot even imagine how to emulate your data.

One approach to providing the sample data is to pin the outputs of the HTTP Request and/or Google Sheets (data read) nodes in the workflow editor, and then paste the WF with that pinned data.

On posting a workflow with some data pinned, see this short guide:

You can also edit pinned data to strip sensitive information.

Also, it is always good to have some broader context around the WF fragment in question (i.e. a couple of critical nodes before the fragment and one or two nodes after).

The underlying data is critical when the workflow works technically (no failures) and the solution depends on logical decisions and the data itself.

For example, this request was resolved successfully thanks to the data, and even the data store, being made available: Append or Update Row does not work

And this one got stuck in endless hypothesizing: Google sheet: data is replaced in the existing row instead of adding new row
I failed on the latter one, being unable to guess what could possibly have gone wrong.
