Alert dampening problem
Steven North
swn at ocsystems.com
Wed Mar 28 20:00:19 UTC 2012
Charles,
Well, I was really just testing dampening with our RTI Alerts. In high-traffic situations we had alerts firing very quickly for Average Response Time > threshold. I wanted dampening to give me at most 1 alert every 10 minutes so I wouldn't be overwhelmed, and that's what I thought the Time Period option did. After Jay's pointers I tried 10 occurrences in 10 minutes (with a 1 minute metric collection interval). In my test case I set the threshold so it would always be exceeded, so that worked for me, but it's only a test setup, not an operational solution.

Operationally, I can see that something like 3 occurrences in 10 minutes should fire an alert, but if I get 9 occurrences in those same 10 minutes I don't want 3 alerts. What I was looking for was a way to fire an alert on a condition, but not fire more than one of those alerts in a given time period. Maybe there is a tricky way to do that with multiple conditions, or by disabling the definition and re-enabling it with a recovery alert, but that was more effort than I was willing to expend in testing. I do think we will want to give our users some guidance on how to dampen alerts properly so they aren't flooded.
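Just to make that concrete, here's a rough sketch of the kind of time-based suppression I have in mind. This is only an illustration, not RHQ code; the class and method names are made up:

    // Illustration only -- not RHQ code; all names here are made up.
    // The idea: fire on a matching condition, but suppress further alerts
    // until a minimum interval has passed since the last one fired.
    public class TimeBasedAlertSuppressor {

        private final long minIntervalMillis;
        private Long lastFiredMillis; // null until the first alert fires

        public TimeBasedAlertSuppressor(long minIntervalMillis) {
            this.minIntervalMillis = minIntervalMillis;
        }

        /** Returns true if an alert should actually be sent for this condition match. */
        public synchronized boolean shouldFire(long nowMillis) {
            if (lastFiredMillis == null || nowMillis - lastFiredMillis >= minIntervalMillis) {
                lastFiredMillis = nowMillis;
                return true;
            }
            return false; // condition matched, but we already alerted recently
        }

        public static void main(String[] args) {
            // Simulate a 1-minute collection interval where the condition is
            // always exceeded: 10 matches in 10 minutes, but only 1 alert fires.
            TimeBasedAlertSuppressor suppressor =
                new TimeBasedAlertSuppressor(10 * 60 * 1000L);
            int alertsSent = 0;
            for (int minute = 0; minute < 10; minute++) {
                if (suppressor.shouldFire(minute * 60 * 1000L)) {
                    alertsSent++;
                }
            }
            System.out.println("Alerts sent: " + alertsSent); // prints 1
        }
    }

With the current occurrences-within-time-period dampening, getting that behavior seems to require tying the occurrence count to the collection interval (10 occurrences with 1-minute collection), which breaks as soon as the interval changes.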
Steve
On Mar 28, 2012, at 11:42 AM, Charles Crouch wrote:
>
> ----- Original Message -----
>> Jay,
>>
>> Thanks, I think this explains the problem I was having: user error.
>> ;-)
>>
>> I reread the dampening section and it all looks obvious now. I tried
>> this out and it does what I want when I set it properly.
>
> Great. So what did you want, and what did you set it to?
>
> Just interested, because as Jay mentioned we don't seem to support your original use case of "1 *every* 10 minutes".
>
>>
>> Thanks for the help.
>>
>> Steve
>>
>> On Mar 28, 2012, at 6:25 AM, Jay Shaughnessy wrote:
>>
>>>
>>>> I tried creating an alert on a platform resource, the free memory
>>>> metric, absolute threshold > 0, with a dampening of 1 every 10
>>>> minutes (see attached images). In the image you can see that it
>>>> fired 3 times in 3 minutes (collection interval of 1 minute).
>>>> This is similar to the situation I have with a different
>>>> resource and metric.
>>>
>>> Unfortunately the images didn't come through for me, but you may be
>>> seeing the expected behavior. It's not actually "1 *every* 10
>>> minutes", it's "once every X=1 times the condition set is true
>>> within a time period of Y=10 minutes".
>>>
>>> That means you get an alert if the condition set matches 1 time
>>> within a 10 minute period. If you report every minute you get a
>>> chance of up to 10 evaluations. So X can validly be set in a
>>> range of [1..10]. Since you set X=1, only 1 evaluation has to
>>> match before you get an alert, so with a condition that is always
>>> true you get 10 alerts in those 10 minutes. If you
>>> want to ensure that you only get 1 alert in 10 minutes then you'd
>>> need to set X=10, and the condition set would have to match 10
>>> times before an alert would be sent. But this is equivalent to
>>> the simpler "consecutive times" dampening.
>>>
>>> At least this is how I understand it. I could be wrong.
>>>
>>> As far as I know there is no dampening mechanism for limiting the
>>> number of alerts sent based solely on time. We do have recovery
>>> alert defs, which means you could fire an alert, then have that
>>> alert def disabled until its recovery alert def fires. But again,
>>> I don't think there exists today a way for a recovery alert def to
>>> fire based solely on the disabled-time of its to-be-recovered
>>> alert def.
>>>
> _______________________________________________
> rhq-devel mailing list
> rhq-devel at lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/rhq-devel