Add number of consecutive workflow fails in error data

The current situation:

When an error workflow is triggered, we have no context about whether this is the first failure or whether the flow has failed X times in a row.

My use case:

I have a workflow scheduled every 5 minutes to retrieve a value from Google Finance.
Sometimes, Google’s API responds with a 503 error, which typically only lasts 10 or 15 minutes.
Besides this, I have an error workflow that sends push notifications to my phone with execution details.


I would consider a few consecutive errors (2 or 3) acceptable, since they resolve themselves with time, but I don’t want to spam myself every time Google’s API is failing.


I would be very grateful if the error data could be enriched with a consecutiveFails field, like below:

[
	{
		"execution": {
			"id": "231",
			"url": "https://n8n.example.com/execution/231",
			"retryOf": "34",
			"error": {
				"message": "Example Error Message",
				"stack": "Stacktrace"
			},
			"lastNodeExecuted": "Node With Error",
			"mode": "manual",
			"consecutiveFails": "4"
		},
		"workflow": {
			"id": "1",
			"name": "Example Workflow"
		}
	}
]

I think it would be beneficial to add this because:

  • The retry-on-fail setting on a node doesn’t wait long enough between attempts (5 seconds)
  • I could implement what I describe by creating persistent storage and upgrading my error workflow, but that’s kinda over-engineered for such a simple use case
  • I’m pretty sure this could be useful to a lot of people !

Don't forget to upvote this request. The more votes this Feature Request gets, the higher the priority.

That’s how the Error workflow works :slight_smile:
It triggers every time the initial workflow fails.

You can configure the error workflow to handle that.

We use the scheme below for the error workflow (it actually consists of 4 workflows: one for each severity, plus a sub-workflow with all the logic):

  • an error happens > save it in storage (S3, with a separate folder for each severity, like low or medium) + save the new error count in the workflow static data > check every 5 minutes whether the static data contains more than 0 errors
  • if the error count is more than 0 > run a separate workflow that prepares a table with workflow / last error node / how many times it failed and sends an email > if the sub-workflow succeeded, reset the error count to 0
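The counting-and-reset part of the steps above could be sketched roughly like this. In an n8n Code node you would obtain the storage object via `$getWorkflowStaticData('global')`; here it is passed in as a plain object so the sketch runs standalone, and field names like `errorCount` and `lastError` are illustrative assumptions, not fixed n8n fields.

```javascript
// Sketch of the static-data error counter described above.
// Inside n8n: const staticData = $getWorkflowStaticData('global');

function recordFailure(staticData, execution) {
  // Increment the stored consecutive-error counter for this workflow.
  staticData.errorCount = (staticData.errorCount || 0) + 1;
  // Keep the last error around for the summary table/email.
  staticData.lastError = {
    node: execution.lastNodeExecuted,
    message: execution.error.message,
  };
  return staticData.errorCount;
}

function resetAfterReport(staticData) {
  // Called once the summary (e.g. the email) was sent successfully.
  staticData.errorCount = 0;
}

// Example: three consecutive failures, then a successful report.
const staticData = {};
const failure = { lastNodeExecuted: 'HTTP Request', error: { message: '503' } };
recordFailure(staticData, failure);
recordFailure(staticData, failure);
const count = recordFailure(staticData, failure);
// The 5-minute checker would now see errorCount > 0, send the report,
// and reset the counter.
resetAfterReport(staticData);
```

The same counter is effectively the `consecutiveFails` value the feature request asks for, just maintained by hand in the error workflow.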

But if a flow has common errors (like a 503) and you don’t want to see them as failures, you can handle the errors in the workflow itself. Each node has an option to return the error as a regular payload instead of aborting the execution. Then you will need to build some logic to handle the errors and suppress some of them.
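With that option enabled, the failing node emits items that carry the error instead of stopping the workflow, and a follow-up Code node can filter out the tolerable cases. A minimal sketch, assuming an error payload with an `httpCode` field; the actual shape varies by node and n8n version, so inspect your node’s real output first:

```javascript
// Sketch: suppress transient upstream errors, surface everything else.
// The item/error shape below is an assumption for illustration.

const TOLERATED_STATUS_CODES = [502, 503, 504]; // transient upstream errors

function shouldSuppress(item) {
  const status = item.error && item.error.httpCode
    ? Number(item.error.httpCode)
    : null;
  return status !== null && TOLERATED_STATUS_CODES.includes(status);
}

// Example items as they might arrive from the previous node.
const items = [
  { error: { httpCode: '503', message: 'Service Unavailable' } },
  { error: { httpCode: '401', message: 'Unauthorized' } },
  { value: 123 }, // a successful item has no error field
];

// Keep normal items and "real" errors; drop the tolerated ones.
const surfaced = items.filter(item => !shouldSuppress(item));
// surfaced now holds the 401 error and the successful item.
```

Downstream of this filter you can route the remaining errors to your notification logic, so a lone 503 never reaches your phone.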