Batch Slack Error Messages

Hi,

I recently stumbled over a problem and I’m not quite sure how to solve it, so here is what we do.

Each workflow that runs into an error triggers an Error Workflow. This workflow is currently pretty simple, as it only contains the Error Trigger and a Slack post node.

Now the problem is: if a workflow gets executed 900 times in 5 minutes because the API endpoint is down (which happened today), I get 900 messages in the Slack channel and could hit Slack’s rate limit pretty fast.

My question now is: do you have any smart ideas for batching this?

What I came up with is implementing/setting global variables, adding an IF node, and checking whether the workflow ID from run 1 is the same as the ID from run 2. If yes, don’t post to Slack; if no, post to Slack.

This, of course, solves the issue of spamming Slack, but it also means that I lose track of how often the error occurred. It’s also not a perfectly thought-out solution, as it has some flaws.
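Roughly, the idea looks like this (just a standalone TypeScript illustration; the names are made up, and in n8n the state would of course have to live somewhere that survives across executions):

```typescript
// Illustration of the "compare workflow IDs between runs" idea.
// lastNotifiedWorkflowId stands in for the global variable mentioned above.
let lastNotifiedWorkflowId: string | null = null;

function shouldPostToSlack(currentWorkflowId: string): boolean {
  if (currentWorkflowId === lastNotifiedWorkflowId) {
    // Same workflow already reported: skip, but the error count is lost.
    return false;
  }
  lastNotifiedWorkflowId = currentWorkflowId;
  return true;
}

console.log(shouldPostToSlack("wf-123")); // true  -> post to Slack
console.log(shouldPostToSlack("wf-123")); // false -> repeat suppressed
```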

I also thought about a Wait node and found that it is currently in development/close to release, but I’m not sure whether it would solve the issue. So I’m looking forward to all of your answers, and thank you very much for taking the time to read this :-).

BR and have a great evening,
Benji

Welcome to the community @Benjamin_Exner

Now the problem is: if a workflow gets executed 900 times in 5 minutes because the API endpoint is down (which happened today), I get 900 messages in the Slack channel and could hit Slack’s rate limit pretty fast.

If you did not hit the Slack rate limit, would that solve your problem?

Hi @RicardoE105,

thank you :slight_smile:.

Yeah, not hitting the rate limit is of course a goal, but I’m also kind of annoyed by the fact that my workflow was sending the same message 900 times.

I even thought about adding the data to a sheet and grouping the errors that way, but I’m not sure whether that is possible. If there is a solution along those lines, I would just create two separate Error Trigger workflows and split things into single-run workflows and these mass-messaging ones.

To avoid the rate limit, you can use the Split In Batches node and a small wait using the Function node. Check the links below.
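Conceptually it looks like this (just a standalone sketch; the webhook URL, batch size, and one-second pause are placeholders, not tested values):

```typescript
// Sketch: send messages in small batches with a short pause in between,
// mirroring what a Split In Batches node plus a small wait would do.
async function sendInBatches(messages: string[], batchSize = 10, pauseMs = 1000): Promise<void> {
  for (let i = 0; i < messages.length; i += batchSize) {
    const batch = messages.slice(i, i + batchSize);
    await Promise.all(
      batch.map((text) =>
        // Placeholder webhook URL; use your own Slack incoming webhook here.
        fetch("https://hooks.slack.com/services/XXX", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text }),
        })
      )
    );
    // Small wait between batches to stay under the rate limit.
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}
```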

Another option is to use the HTTP Request node with Batch Interval and Batch Size.

About avoiding all the repeated messages: right off the top of my head, and without knowing all the details, I would say you have to keep the state, as you mentioned, in something like Airtable. Before saving a new error, check whether the same error was already delivered (in the last x minutes/seconds); if it was not, send it.
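Something like this, as a rough sketch (an in-memory map stands in for Airtable, and the key and window are made-up examples):

```typescript
// Sketch: only deliver an error if the same error has not been sent
// within the last windowMs. The in-memory map stands in for Airtable
// (or any store that survives across executions).
const lastSentAt = new Map<string, number>();

function shouldDeliver(errorKey: string, windowMs = 5 * 60 * 1000): boolean {
  const now = Date.now();
  const previous = lastSentAt.get(errorKey);
  if (previous !== undefined && now - previous < windowMs) {
    return false; // already delivered recently, skip
  }
  lastSentAt.set(errorKey, now);
  return true; // not seen recently, send it
}
```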


@RicardoE105 isn’t the issue batching data across different executions of the error workflow? Is that possible?

An alternative approach could be to store how many times the error workflow has executed in the last, say, 24 hours and stop sending Slack notifications once a threshold has been reached.
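As a sketch (the 24-hour window and the threshold of 20 are arbitrary numbers, not recommendations):

```typescript
// Sketch: record each execution of the error workflow and stop
// notifying once a threshold is reached within the last 24 hours.
let executionTimes: number[] = [];

function shouldNotify(threshold = 20, windowMs = 24 * 60 * 60 * 1000): boolean {
  const now = Date.now();
  executionTimes.push(now);
  // Keep only the executions that fall inside the window.
  executionTimes = executionTimes.filter((t) => now - t < windowMs);
  return executionTimes.length <= threshold;
}
```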


I think what you need is some kind of cache system in the workflow. Maybe store the workflow ID and failed attempts somewhere, trigger the alert on the first failure and then, say, every 10 or 20 failures, and include the count of errors.
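Maybe something along these lines (just a sketch; the "every 10" interval and the message wording are examples):

```typescript
// Sketch: alert on the first failure for a workflow, then on every
// tenth failure after that, including the running count in the message.
const failureCounts = new Map<string, number>();

function buildAlert(workflowId: string, every = 10): string | null {
  const count = (failureCounts.get(workflowId) ?? 0) + 1;
  failureCounts.set(workflowId, count);
  if (count === 1 || count % every === 0) {
    return `Workflow ${workflowId} has failed ${count} time(s) so far`;
  }
  return null; // suppress the in-between failures
}
```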

You could also update an internal status page or a database with the error, and in your workflow do a bit of checking at the start of the process (where possible) to see whether everything is online before running.
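The up-front check could be as small as this (the health URL is just a placeholder):

```typescript
// Sketch: ping a health/status endpoint first and only run the rest
// of the workflow if it answers with a 2xx status.
async function endpointIsUp(url = "https://example.com/health"): Promise<boolean> {
  try {
    const response = await fetch(url, { method: "GET" });
    return response.ok;
  } catch {
    return false;
  }
}
```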

Or something crazy like that anyway. An error workflow is next on my list of things to have a proper think about.


Thank you all for the different approaches and ideas. I guess as a first step I will go with either Airtable or storing the executions of the workflow and posting a summary to Slack instead of each error.

As a later topic, I will also be thinking about caching and proper error flows, as mentioned by @Jon.

Have a great evening :slight_smile:
