discussion on measurement baseline calculation & problem metrics

Greg Hinkle ghinkle at redhat.com
Mon Aug 2 22:17:54 UTC 2010


Sure, the best example I can think of is a metric that is, on average, growing each day but growing more at peak times. In this example it is a dynamic metric, not trendsup (cumulative), and therefore is not expected to grow continuously. Our system would show a saw-tooth pattern in the baseline, and worse, the OOBs would disappear entirely each time the baseline was recalculated and might not show up again until late in the recalculation cycle. Giving it a delay period equal to the recalculation cycle would likely cause new incoming data to still be "out of bounds" rather than just barely greater than the previous max values.
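
To make the window arithmetic concrete, here's a rough sketch of what I'm proposing (computeBaseline, scheduleId, and the variable names are illustrative, not our actual code):

    long now    = System.currentTimeMillis();
    long delay  = 24L * 60 * 60 * 1000;      // 24h gap before "now"
    long window = 7L * 24 * 60 * 60 * 1000;  // 7 days of data

    long end   = now - delay;    // the baseline window ends 24 hours ago
    long begin = end - window;   // ...and starts 8 days ago

    // min/avg/max are computed over [begin, end); data arriving in the
    // last 24 hours is still compared against this older baseline, so a
    // genuine drift keeps showing up as OOB instead of being absorbed
    // the moment the baseline is recalculated
    MeasurementBaseline baseline = computeBaseline(scheduleId, begin, end);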

Since this system is designed to find expected steady-state metrics that are not behaving properly, the delay gives you a better chance of seeing them at any given time. Otherwise, you'd have to look at the OOB system at the right time of day relative to a given metric's baseline recalculation period.

Another way to see this issue is to do a basic install and inventory a whole server in one shot (and nothing else). You'll see the list of OOBs (a.k.a. Problem Metrics) grow over the baseline recalc period and then get zeroed before the list starts growing again. This happens because the recalculation period for any metric starts when it is scheduled... and in this example all schedules would have the same initial time.

Like I said, this doesn't give us the TOD/DOW cycle comparison stuff that would be really nice, but I think it does increase the chance of there being useful data in the OOB system at any given time. And that system does, I must say, surface surprisingly useful data at times: I found interesting memory-impact data when I turned on some event tracking and later saw the GC collections and timings shoot up quite a bit.

I would make it three configurable values (adding the delay to the two existing settings) and then set the defaults as mentioned previously: 7 days of data, recalculated every 24 hours, with a 24-hour delay.
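
In config terms it might look something like this (the property names are placeholders, not actual settings keys):

    # hypothetical settings, names purely illustrative
    baseline.dataset   = 7d     # amount of data used per calculation
    baseline.frequency = 24h    # how often baselines are recalculated
    baseline.delay     = 24h    # gap between the end of the window and "now"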

-Greg 


On Aug 2, 2010, at 5:36 PM, Joseph Marques wrote:

> Can you, for the rest of the people on the list, outline a couple of use-cases that show why the delay creates more useful data?  Also, are you suggesting that we take away the configurability of the amount of data used and/or the frequency of recalculation... or are you saying that we should publicize 7/24/delay as a best practice?  If we allow it to remain configurable, and if users want to calc baselines only every 3 days, would it then use X amount of data ending 3 days ago... or are you saying that the 24-hour delay is static regardless of the other parameters?  I think a couple of use-cases would go a long way here.
> 
> On 08/02/2010 04:04 PM, Greg Hinkle wrote:
>> 
>> I agree that data is almost always cyclical, most often patterned on time of day and day of week. I just meant that, given our current functionality and level of granularity, a basic best practice would be a 7-day cycle calculated every 24 hours with a 24-hour delay. I think that's the best chance to keep useful data in the OOB system in most cases, short of building something that can analyze for TOD, DOW, and other cycles. A fancy system would be nice, but we can cheaply improve what we have for the common cases.
>> 
>> 
>> -Greg
>> 
>> 
>> On Aug 2, 2010, at 3:20 PM, Joseph Marques wrote:
>> 
>>> From #rhq earlier today...
>>> 
>>> (13:09:28) ghinkle: should our baseline system leave a gap on the running baselines? e.g. calculate a 7 day baseline based on 8 days ago to 1 day ago instead of 7 days to now
>>> (13:10:14) ghinkle: right now, all the oobs will essentially disappear after a baseline calc right?
>>> (13:35:08) mazz: we don't calc ALL baselines in one shot anymore
>>> (13:35:42) mazz: only those resources due to get their baselines to be recalced will get them done - and those OOBs would, I guess, go away for those resources
>>> (13:39:50) ghinkle: right... but if we did a delay we could still see things in the past 24 hours that are different from the historical baseline
>>> (13:40:05) ghinkle: i think the "delay gap" should be equal to recalculation period
>>> (13:40:38) ghinkle: and default it to recalcing every 24 hours for a 7 day period that ends 24 hours ago
>>> 
>>> -----
>>> 
>>> So if users want to calc baselines only every 3 days, would it then use X amount of data ending 3 days ago?
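>>> 
>>> (If we generalize Greg's rule that the delay gap equals the recalculation period, the window math would presumably be, in sketch form, with frequency, dataset, and now as placeholders:)
>>> 
>>>     long gap   = frequency;       // 3 days, if recalc runs every 3 days
>>>     long end   = now - gap;       // the window ends one recalc period ago
>>>     long begin = end - dataset;   // "X amount of data" before that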
>>> 
>>> Keep in mind, there's already a natural delay gap here, because baselines are calculated from compressed data in the _1hr table, which is only populated at the top of the hour when the metric compression/purge job runs.
>>> 
>>> The major issues with baselines as I see them are as follows:
>>> 
>>> 1) baselines are a trailing average of data, but the fastest we can recalculate them is once a day.  this is far too coarse; it would be nice if we could compute the trailing average over the last hour for every metric in the system, and then use that to decide whether incoming metrics are out-of-bounds / problems
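>>> 
>>> A rough sketch of what (1) might look like per schedule, kept in memory (SAMPLES_PER_HOUR, BAND, and markProblemMetric are made-up names, and the tolerance check is just one possibility):
>>> 
>>>     import java.util.ArrayDeque;
>>>     import java.util.Deque;
>>> 
>>>     class TrailingHourAverage {
>>>         static final int SAMPLES_PER_HOUR = 120;  // e.g. 30s collection interval
>>>         static final double BAND = 0.10;          // 10% tolerance, illustrative
>>> 
>>>         private final Deque<Double> window = new ArrayDeque<Double>();
>>>         private double sum;
>>> 
>>>         void add(double value) {
>>>             window.addLast(value);
>>>             sum += value;
>>>             while (window.size() > SAMPLES_PER_HOUR) {
>>>                 sum -= window.removeFirst();  // drop samples older than an hour
>>>             }
>>>             double trailingAvg = sum / window.size();
>>>             // flag the schedule when the newest value strays from its recent past
>>>             if (Math.abs(value - trailingAvg) > BAND * Math.abs(trailingAvg)) {
>>>                 markProblemMetric(value, trailingAvg);
>>>             }
>>>         }
>>> 
>>>         void markProblemMetric(double value, double avg) {
>>>             // hypothetical hook: record an OOB / problem-metric entry
>>>         }
>>>     }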
>>> 
>>> 2) we don't distinguish between weekdays and weekends, nor do we distinguish by time of day.  for some services it might make more sense to only use data from their known "active periods" to calculate the trailing average, which might mean only business hours, or excluding weekends.  so, on top of the small trailing windows from (1), it would also be nice to have separate blackout periods computed for individual resources.
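>>> 
>>> For (2), the filter itself would be cheap once an active period is configured per resource; a sketch, with a hard-coded 9-to-5 weekday period standing in for that per-resource config:
>>> 
>>>     import java.util.Calendar;
>>> 
>>>     boolean isActivePeriod(long timestampMillis) {
>>>         Calendar c = Calendar.getInstance();
>>>         c.setTimeInMillis(timestampMillis);
>>>         int dow  = c.get(Calendar.DAY_OF_WEEK);
>>>         int hour = c.get(Calendar.HOUR_OF_DAY);
>>>         boolean weekday = (dow != Calendar.SATURDAY && dow != Calendar.SUNDAY);
>>>         return weekday && hour >= 9 && hour < 17;   // 9-5 only, as an example
>>>     }
>>> 
>>> only samples for which isActivePeriod() returns true would feed the trailing average; everything else falls into the blackout period.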
>>> 
>>> [I know that Heiko has had some thoughts about using a more sophisticated algorithm for this sliding window calculation.  I'm sure he'll chime in on this thread.]
>>> 
>>> 3) trailing averages don't make sense for all resources.  granted, we only calculate baselines for the DYNAMIC types (metrics that strictly trend up or strictly trend down are not candidates for baseline calculation), but it would be nice to give the user even finer control over this.  it should be possible to craft an interface that allows users to turn off baselines for resources altogether; if not on a resource-by-resource level, perhaps allowing it to be toggled per resource type instead.
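>>> 
>>> For (3), the calculation job would just need a guard along these lines (isBaselineEnabled is a hypothetical flag, wherever it ends up living):
>>> 
>>>     // skip baseline calculation entirely when disabled for this type
>>>     if (!resourceType.isBaselineEnabled()) {
>>>         return;
>>>     }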
>>> 
>>> What do people think of these ideas?  Please respond with comments, as well as any ideas of your own for measurement baselines and/or problem metrics.
>>> 
>>> -joseph
>>> 
>>> For further reading, I've included what I felt were the most relevant links on the original topic.  Keep in mind that baselines are a distinct concept (and data structure) from problem metrics (a.k.a. out-of-bounds metrics).  However, as you'll see, measurement baseline data is one of the inputs to the algorithm that calculates problem metrics, so they are indeed related.
>>> 
>>> How baseline calculations work today...
>>> 
>>> http://www.rhq-project.org/display/JOPR2/FAQ#FAQ-WhendoBaselinesautocalculate%3F
>>> 
>>> What were some of the problems with the previous OOB / problem metrics subsystem...
>>> 
>>> http://www.rhq-project.org/display/RHQ/Design-BaselineOOBProblem
>>> 
>>> How does the current OOB / problem metrics subsystem function...
>>> 
>>> http://www.rhq-project.org/display/RHQ/Design-New+OOB
