discussion on measurement baseline calculation & problem metrics

Joseph Marques jmarques at redhat.com
Mon Aug 2 21:36:20 UTC 2010


Can you, for the rest of the people on the list, outline a couple of 
use-cases that show why the delay creates more useful data?  Also, are 
you suggesting that we take away the configurability of the amount of 
data used and/or the frequency of recalculation...or are you saying that 
we should publicize 7/24/delay as a best practice?  If it remains 
configurable, and users want to calculate baselines only every 3 days, 
would it then use the configured window of data ending 3 days ago...or 
are you saying that the 24-hour delay is a static delay regardless of 
the other parameters?  I think a couple of use-cases would go a long 
way here.
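
To make the parameters concrete, here is a minimal sketch of the window 
arithmetic under discussion (plain Java; the class and variable names are 
invented for illustration and are not RHQ API).  It assumes Greg's 
suggestion that the delay gap tracks the recalculation period:

    import java.time.Duration;
    import java.time.Instant;

    /** Hypothetical illustration of a baseline window with a trailing delay gap. */
    public class BaselineWindowSketch {
        public static void main(String[] args) {
            Instant now = Instant.now();

            Duration windowLength = Duration.ofDays(7);   // how much data feeds the baseline
            Duration recalcPeriod = Duration.ofHours(24); // how often baselines are recomputed
            Duration delay = recalcPeriod;                // suggestion: delay gap == recalc period

            Instant windowEnd = now.minus(delay);         // data newer than this is excluded...
            Instant windowStart = windowEnd.minus(windowLength);

            // ...so OOBs raised during the last `delay` are still judged against the
            // previous baseline instead of being absorbed into the new one.
            System.out.println("baseline window: " + windowStart + " .. " + windowEnd);

            // The "3 day" question: if recalculation runs every 3 days and the delay
            // tracks it, the window would end 3 days ago rather than 24 hours ago.
            Duration threeDayRecalc = Duration.ofDays(3);
            System.out.println("3-day variant ends at: " + now.minus(threeDayRecalc));
        }
    }

Under those assumptions, a 3-day recalculation frequency would push the 
window end back to 3 days ago; a fixed 24-hour delay would instead keep 
windowEnd pinned at now minus 24 hours regardless of the other parameters.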

On 08/02/2010 04:04 PM, Greg Hinkle wrote:
> I agree that data is almost always cyclical and most often patterned 
> on time of day and day of week. I just meant that given our current 
> functionality and level of granularity, a basic best practice would be 
> a 7 day cycle calculated every 24 hours with a 24 hour delay. I think 
> that's the best chance to keep useful data in the oob system in most 
> cases... short of building something that can analyze for TOD, DOW and 
> other cycles. A fancy system would be nice, but we can cheaply improve 
> what we have for common cases.
>
>
> -Greg
>
>
> On Aug 2, 2010, at 3:20 PM, Joseph Marques wrote:
>
>> From #rhq earlier today...
>>
>> (13:09:28) ghinkle: should our baseline system leave a gap on the 
>> running baselines? e.g. calculate a 7 day baseline based on 8 days 
>> ago to 1 day ago instead of 7 days to now
>> (13:10:14) ghinkle: right now, all the oobs will essentially 
>> disappear after a baseline calc right?
>> (13:35:08) mazz: we don't calc ALL baselines in one shot anymore
>> (13:35:42) mazz: only those resources due to get their baselines to 
>> be recalced will get them done - and those OOBs would, I guess, go 
>> away for those resources
>> (13:39:50) ghinkle: right... but if we did a delay we could still see 
>> things in the past 24 hours that are different from the historical 
>> baseline
>> (13:40:05) ghinkle: i think the "delay gap" should be equal to 
>> recalculation period
>> (13:40:38) ghinkle: and default it to recalcing every 24 hours for a 
>> 7 day period that ends 24 hours ago
>>
>> -----
>>
>> So if users want to calculate baselines only every 3 days, would it 
>> then use the configured window of data ending 3 days ago?
>>
>> Keep in mind, there's already a natural delay gap here, because 
>> baselines are calculated from compressed data in the _1hr table, 
>> which is only populated at the top of the hour when the metric 
>> compression/purge job runs.
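
A small illustration of that natural gap, with invented names (not RHQ 
code): whatever delay is configured, the usable end of the window is 
effectively the last hour the compression job has already rolled up.

    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;
    import java.time.temporal.ChronoUnit;

    /** Hypothetical sketch: the effective end of a baseline window is bounded
     *  by the last top-of-the-hour compression run, because baselines read the
     *  hourly-compressed data rather than raw metrics. */
    public class HourBoundarySketch {
        public static void main(String[] args) {
            ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);

            // Only hours that the compression/purge job has already rolled up are
            // usable, so even a "no delay" configuration trails by up to an hour.
            ZonedDateTime lastUsableBucketEnd = now.truncatedTo(ChronoUnit.HOURS);
            System.out.println("latest usable 1hr bucket ends at: " + lastUsableBucketEnd);
        }
    }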
>>
>> The major issues with baselines as I see them are as follows:
>>
>> 1) baselines are a trailing average of data, but the fastest we can 
>> recalculate them is once a day.  this is far too coarse; it would be 
>> nice if we could calculate the trailing average over the last hour 
>> for every metric in the system, and then use that to decide whether 
>> incoming metrics are out-of-bounds / problem metrics.
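
A minimal, self-contained sketch of what (1) could look like (invented 
class and method names, not RHQ code): keep a one-hour sliding window per 
metric and flag incoming values that stray too far from its average.  The 
tolerance knob below is just a stand-in for whatever out-of-bounds rule 
the real problem-metrics algorithm would apply.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Hypothetical sketch of issue (1): a one-hour trailing average per metric,
     *  updated as data arrives and used to flag out-of-bounds values. */
    public class TrailingHourBaseline {
        private static final long WINDOW_MILLIS = 60L * 60L * 1000L;

        private static final class Point {
            final long timestamp;
            final double value;
            Point(long timestamp, double value) { this.timestamp = timestamp; this.value = value; }
        }

        private final Deque<Point> window = new ArrayDeque<>(); // last hour, oldest first
        private double sum;

        /** Record a new data point and report whether it looks out-of-bounds. */
        public boolean addAndCheck(long timestamp, double value, double tolerance) {
            // Evict points older than one hour.
            while (!window.isEmpty() && window.peekFirst().timestamp < timestamp - WINDOW_MILLIS) {
                sum -= window.pollFirst().value;
            }
            boolean outOfBounds = false;
            if (!window.isEmpty()) {
                double trailingAvg = sum / window.size();
                // Crude check: more than `tolerance` (e.g. 0.5 = 50%) away from the average.
                outOfBounds = Math.abs(value - trailingAvg) > tolerance * Math.abs(trailingAvg);
            }
            window.addLast(new Point(timestamp, value));
            sum += value;
            return outOfBounds;
        }
    }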
>>
>> 2) we don't distinguish between weekdays and weekends, nor do we 
>> distinguish by time of day.  for some services, it might make more 
>> sense to only use data from their known "active periods" to calculate 
>> the trailing average, which might mean only business hours or 
>> excluding weekends.  so, on top of the small trailing windows from 
>> (1), it would also be nice to have separate active/blackout periods 
>> computed for individual resources.
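
A sketch of the "active periods" idea in (2), assuming a hypothetical 
per-resource schedule of weekday business hours in a configured time zone 
(none of these settings exist today): data points that fall outside the 
active period would simply be excluded from the trailing average.

    import java.time.DayOfWeek;
    import java.time.Instant;
    import java.time.LocalTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    /** Hypothetical sketch of issue (2): feed a resource's baseline only from
     *  its active period (here, weekday business hours). */
    public class ActivePeriodFilter {
        // Assumed per-resource settings; not existing RHQ configuration.
        private final LocalTime start = LocalTime.of(9, 0);
        private final LocalTime end = LocalTime.of(17, 0);
        private final ZoneId zone = ZoneId.of("America/New_York");

        /** True if this data point falls inside the resource's active period. */
        public boolean isActive(Instant timestamp) {
            ZonedDateTime local = timestamp.atZone(zone);
            DayOfWeek day = local.getDayOfWeek();
            boolean weekday = day != DayOfWeek.SATURDAY && day != DayOfWeek.SUNDAY;
            LocalTime time = local.toLocalTime();
            return weekday && !time.isBefore(start) && time.isBefore(end);
        }
    }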
>>
>> [I know that Heiko has had some thoughts about using a more 
>> sophisticated algorithm for this sliding window calculation.  I'm 
>> sure he'll chime in on this thread.]
>>
>> 3) trailing averages don't make sense for all resources.  granted, we 
>> only calculate baselines for the DYNAMIC types (metrics that strictly 
>> trend up or trend down are not candidates for baseline calculation), 
>> but it would be nice to give the user even finer control over this.  
>> it should be possible to craft an interface that allows users to turn 
>> off baselines for a resource altogether.  if not at the 
>> resource-by-resource level, perhaps allow it to be toggled per 
>> resource type instead.
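
A sketch of the toggle described in (3), again with invented names: a 
resource-level override that falls back to a per-resource-type default, 
which the baseline job could consult before scheduling a calculation.

    import java.util.HashMap;
    import java.util.Map;

    /** Hypothetical sketch of issue (3): baselines switched off per resource,
     *  falling back to a per-resource-type default. */
    public class BaselineToggle {
        private final Map<Integer, Boolean> perResource = new HashMap<>();     // resource id -> override
        private final Map<Integer, Boolean> perResourceType = new HashMap<>(); // type id -> default

        public void setForResource(int resourceId, boolean enabled) {
            perResource.put(resourceId, enabled);
        }

        public void setForType(int resourceTypeId, boolean enabled) {
            perResourceType.put(resourceTypeId, enabled);
        }

        /** Resource-level override wins; otherwise the type default; otherwise on. */
        public boolean baselinesEnabled(int resourceId, int resourceTypeId) {
            Boolean byResource = perResource.get(resourceId);
            if (byResource != null) {
                return byResource;
            }
            return perResourceType.getOrDefault(resourceTypeId, Boolean.TRUE);
        }
    }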
>>
>> What do people think of these ideas?  Please respond with comments, 
>> as well as any ideas of your own for measurement baselines and/or 
>> problem metrics.
>>
>> -joseph
>>
>> For further reading, I've included what I felt were the most relevant 
>> links on the original topic.  Keep in mind that baselines are a 
>> distinct concept (and data structure) from problem metrics (a.k.a. 
>> out-of-bounds metrics).  However, as you'll see, measurement baseline 
>> data is one of the inputs to the algorithm that calculates problem 
>> metrics, so the two are closely related.
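
As a rough illustration of that relationship (one plausible shape only, 
not necessarily the formula the current OOB subsystem uses; see the 
Design-New+OOB link below): a baseline band [min, max] can be turned into 
a problem-metric score by measuring how far an hour's extremes fall 
outside the band, relative to the band's width.

    /** Hypothetical sketch: score how far an hour's extremes fall outside the
     *  baseline band [baselineMin, baselineMax], relative to the band width. */
    public class OobFactorSketch {
        public static double oobFactor(double baselineMin, double baselineMax,
                                       double hourMin, double hourMax) {
            double band = baselineMax - baselineMin;
            if (band <= 0) {
                return 0.0; // flat baseline: skip rather than divide by zero
            }
            double overshoot = Math.max(hourMax - baselineMax, 0.0);
            double undershoot = Math.max(baselineMin - hourMin, 0.0);
            // Larger violation, expressed as a fraction of the baseline band.
            return Math.max(overshoot, undershoot) / band;
        }

        public static void main(String[] args) {
            // The hour peaked half a band-width above the baseline maximum.
            System.out.println(oobFactor(10.0, 20.0, 12.0, 25.0)); // prints 0.5
        }
    }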
>>
>> How baseline calculations work today...
>>
>> http://www.rhq-project.org/display/JOPR2/FAQ#FAQ-WhendoBaselinesautocalculate%3F
>>
>> What were some of the problems with the previous OOB / problem 
>> metrics subsystem...
>>
>> http://www.rhq-project.org/display/RHQ/Design-BaselineOOBProblem
>>
>> How does the current OOB / problem metrics subsystem function...
>>
>> http://www.rhq-project.org/display/RHQ/Design-New+OOB
