Iterate on each item

I am trying to set up an iteration over each source so that, for each one, I can:

  1. Increment the counter
  2. Do an HTTP call for the source
  3. Check if it’s been completed
  4. If the counter is over 10, send an email
  5. Else repeat
  6. Go to the next source

However, it seems that each node acts on every item. Am I thinking this through correctly?

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

n8n version: 1.2.0
Running n8n via n8n cloud

@tuedec16 , do I understand it right that you want to send the email with every 10th HTTP call?

If so, here’s what you can do. The flow is very generic and simply meant to give you an idea, as your requirements are not quite clear to me. Instead of the value 10 for the counter I used 2, so that you can observe how the flow works (there are 5 sources). The Loop node breaks your sources into batches of 2 (switch to 10 in your case), and the email is sent only once the loop has processed a whole batch (i.e. 2 calls, 10 in your case). If the last batch has fewer than 2 items (10 in your case), the email will not be sent.

Note how, out of 5 sources, the “Send email” node is executed only 2 times (once every 2 HTTP calls). The IF node ensures that the email is sent only when the batch has no fewer than 2 sources, which is what triggers the “Send email” node.
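
To put it in plain code terms, the flow behaves roughly like the sketch below (only an illustration of the node setup, not something n8n runs; do_http_call() and send_an_email() are placeholders standing in for the HTTP Request and “Send email” nodes):

$sources = array( 'a', 'b', 'c', 'd', 'e' );
$batch_size = 2; // switch to 10 in your case

foreach ( array_chunk( $sources, $batch_size ) as $batch ) {
	foreach ( $batch as $source ) {
		do_http_call( $source ); // one HTTP Request execution per item
	}

	// the IF node: only a full batch triggers the email,
	// so a shorter last batch is skipped
	if ( count( $batch ) >= $batch_size ) {
		send_an_email();
	}
}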


Thanks for the reply. The purpose of the counter is to set an upper bound on the number of attempts for each source and, if it’s reached, to send an email. I’ve tried to illustrate this a bit better here:

@tuedec16 , to be honest, I cannot see how your workflow can work. There are many issues.

  1. How do you expect the counter to go up beyond 1? I think you will never reach 10.
  2. The “Send E-mail Count Exceeded” node does not connect back to the Loop node, which breaks the loop and won’t allow processing all the data in “List Sources”. Though, as per item 1, you will never end up sending emails anyway.
  3. You seem to expect the very same item to be processed many times by the Loop. That is not how the Loop node operates. The Loop node allows you to break many items into smaller batches, but not to iterate the very same item many times.
  4. The “Flag true?” IF node compares a numeric value with a Boolean.

The biggest question for me is: what condition increases the count beyond 1? I just do not see it happening. Also, as soon as the count increments to 1, the item is also marked as “completed”.

Thanks for the reply. I’m coming from Zapier and still learning the structures in n8n.

  1. The counter increments for each attempt made at crawling a source
  2. I missed that
  3. Is there a different node that provides iteration over the very same items?
  4. Will fix

The condition that increases the count is each attempt at crawling the source that’s in the loop.

@tuedec16 , I think the “loop” needs to be rebuilt in the fashion below then. I replaced the crawling operation with a “black box”, as that needs to be reworked too and it is still not clear to me how the count gets incremented and what condition indicates it has completed.

You can run the workflow below to get the gist of what I propose.

The counter increments for each attempt made at crawling a source

But what determines that enough is enough and no more attempts are needed? Does the API you query provide some indication of that? Perhaps what you need is pagination of the API output rather than “iteration on each item” (from the title of this topic)?

I still do not follow your crawling operation.

But what determines that enough is enough and no more attempts are needed?

The logic is in the workflow – it does a call to see if “success” is returned.

Is there a way to break out of an item’s iteration and go to the next item?

Here’s a revised workflow based on the feedback:

Essentially what I’m trying to build is:

- Source 1
  - Has source been crawled? Is count less than 10?
    - False & True: Try to crawl source
      - Increment counter
  - Has source been crawled? Is count less than 10?
    - False & False: Send e-mail source crawl failed
      - Go to next source
  - Has source been crawled?
    - True: Go to next source
- Source 2
  ...

@tuedec16 , thanks for the extra clarity (though I’m still hesitant). I had to redesign the workflow once more, combining both of my previous suggestions.

Also note that all your Set nodes were completely misconfigured, so I amended them as well. Hopefully it becomes clearer what the workflow does. Please take a look at all those Set nodes, as each is configured differently to cater for the condition at that point. You still need to make sure the crawling logic you want to build is correct.

I have concerns nonetheless that the loop could become endless (crawling never reaches the completed state) and believe you need to account for such a possibility and implement an exit point for that scenario to switch to the next source. It’s hard for me to tell what those conditions are, as I’ve no idea what outcome to expect from your HTTP Request nodes.

Note that switching to the next source implies a connection back to the “Loop” node, which takes only one item at a time. Reiterating the same source implies a connection back to the IF node labeled “Source crawled?”.

I introduced the “Source” node to make the workflow easier and clearer to comprehend and to allow iterating over the very same source.
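
If it helps, in code terms the wiring amounts to roughly this (only an analogy of the connections; do_the_crawl() is a placeholder for your crawling branch):

$sources = array( 'abc', 'def', 'ghi' );

foreach ( $sources as $source_id ) {    // going back to “Loop” = next pass of this foreach
	$complete = false;

	while ( ! $complete ) {             // going back to “Source crawled?” = another pass for the same source
		$complete = do_the_crawl( $source_id );
	}
}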

Really appreciate the help. Any idea why I’m getting a ‘no data, execute "Source" node first’ error?

The counter mechanism is what I’m using as the exit point.

@tuedec16 , oh, yes, because it references itself. Please update the expression in that node from {{ $('Source').first().json.count + 1 }} to {{ $json.count + 1 }}.

I don’t believe the workflow is working correctly. It doesn’t seem to actually be updating the counter. I tried to add a Code node to output the count, but it just runs indefinitely and crashes.

@tuedec16 , indeed, the workflow works (your main topic “Iterate on each item” seems to be fulfilled), which is a good sign. The problem is rather with your logic for processing the crawling. Nowhere in that logic do I see a condition that increments the count during item iteration. It increments only once, during initialization, and then the item is set to “complete”.

You need to rethink your approach if an item needs to be iterated more than once, which implies there is a condition under which “complete” should not be set yet. That is where the risk of an endless loop lurks. You need to make sure that when an item is iterated more than once, it still gets marked as “complete” at some stage. As I mentioned a few times, your logic has never been clear to me, as I’ve no idea what the output of your crawling request is, what determines how many times an item has to be iterated, or when the iteration should stop.

By the way, your final Code node does not return the correct output and hence provides no help in fixing your logic. Without that node the workflow works as per the logic you provided inside the loop. That is where you need to rethink when to mark an item as complete. Again, currently each item is processed only once, as you marked it as complete straight away after it was processed once.
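
In code terms, what the current configuration effectively does per item is something like this (a sketch of my reading of your workflow, not the workflow itself):

$sources = array( 'abc', 'def', 'ghi' );

foreach ( $sources as $source_id ) {
	$count = 1;       // the count goes up exactly once, during initialization
	$complete = true; // and complete is set straight away,
	                  // so the item never goes around the loop again
}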

I am trying to emulate a simple for($i = 0; $i <= 10; $i++) loop for each source. I was under the impression the node updating the counter was updating the parameter set initially in List Sources.

Are you saying that each item has to be iterated 10 times and only then set to complete?

I’m saying that each item will be iterated up to 10 times, but the iteration can exit earlier if a complete flag is true. If it gets through 10 attempts without complete = true, then it means the source has failed. In code, here’s what I’m trying to accomplish:

$sources = array( 'abc', 'def', 'ghi', 'jkl', 'mno' );

foreach( $sources as $source_id ) {
	$complete = false;

	for( $i = 0; $i <= 10; $i++ ) {
		if( $complete == false ) {
			$result_from_crawl = do_the_crawl();

			if( $result_from_crawl == true ) {
				$complete = true;
			}
		}

		if( $i == 10 && $complete == false ) {
			send_an_email();
		}
	}
}

OK, that is where we are stuck. The item gets marked as complete based on the crawling output (the HTTP response). I cannot emulate that in the demo, but you should be able to fix it by modifying the final Set node accordingly. Again, I cannot give you a definite answer as I do not know what the output of the HTTP Request node looks like.

To fix the iteration, the references to the “count” and “complete” properties have to be updated to reference the HTTP Request node (the crawler), not the “Source” node.

Do you know what the output of the crawler looks like when the crawling was successful and when it was not? Is the output consistent for each of the items? If yes to both, then you need to update the “Set source as crawled & go to next list item” node accordingly in the places shown in the screenshot.
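
For illustration only, and assuming the crawler node is named “HTTP Request” and returns the success flag you mentioned earlier, the “complete” field in that Set node would end up as an expression along the lines of {{ $('HTTP Request').item.json.success }} rather than a reference to the “Source” node.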

Again, I cannot help with specifics unless I know what the output of the crawler node looks like.