The purge job does the following, in this order:

  1. run aggregation
  2. purge data from a number of tables in the relational database
  3. calculate baselines
  4. calculate OOBs

We could consider running aggregation in parallel alongside the purging of relational tables, roughly as sketched below.
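Something along these lines, where the method names are hypothetical stand-ins rather than the actual DataPurgeJob internals, and which assumes steps 1 and 2 touch independent data:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPurgeSketch {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            // Run steps 1 and 2 concurrently (assumes they operate on
            // independent data: Cassandra metrics vs. relational tables).
            Future<?> aggregation = executor.submit(ParallelPurgeSketch::runAggregation);
            Future<?> relationalPurge = executor.submit(ParallelPurgeSketch::purgeRelationalTables);

            // Steps 3 and 4 depend on the aggregates, so wait for both
            // to finish before starting them.
            aggregation.get();
            relationalPurge.get();

            calculateBaselines();
            calculateOOBs();
        } finally {
            executor.shutdown();
        }
    }

    private static void runAggregation() { /* step 1 */ }
    private static void purgeRelationalTables() { /* step 2 */ }
    private static void calculateBaselines() { /* step 3 */ }
    private static void calculateOOBs() { /* step 4 */ }
}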


On Feb 26, 2014, at 2:34 PM, Elias Ross <genman@noderunner.net> wrote:

On Wed, Feb 26, 2014 at 9:55 AM, John Sanda <jsanda@redhat.com> wrote:
Metrics aggregation is kicked off from the DataPurgeJob that runs at the start of every hour. It computes and stores aggregate metrics for the previously completed time slice(s). For instance, if aggregation runs at 10:02, then raw data stored between 09:00 and 10:00 will get rolled up into 1 hour metrics. I will describe scenarios in which missed aggregations can occur, followed by possible solutions. Any feedback is welcome and appreciated.
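To make the time-slice arithmetic concrete, here is a small sketch (not the actual RHQ code) of how a run at 10:02 maps onto the 09:00-10:00 slice:

import java.util.concurrent.TimeUnit;

public class TimeSliceSketch {

    static final long HOUR = TimeUnit.HOURS.toMillis(1);

    public static void main(String[] args) {
        long now = System.currentTimeMillis();    // e.g. 10:02
        long sliceEnd = (now / HOUR) * HOUR;      // truncate to the hour: 10:00
        long sliceStart = sliceEnd - HOUR;        // 09:00

        // Raw data with sliceStart <= timestamp < sliceEnd gets rolled
        // up into a 1 hour metric.
        System.out.printf("aggregate raw data in [%d, %d)%n", sliceStart, sliceEnd);
    }
}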

Missed aggregation scenarios:
...

My current issue is that the purge job does not always complete within
an hour. One problem is that the Oracle side can take 15 minutes just
to run its data purge of traits and availability. So I end up seeing
holes in graphs for some metrics. I'm not sure what happens in this
case, but over time it seems like aggregation just gets skipped.

Anyway, just sharing another possible scenario.

Solutions:
* Ignore missed aggregations
We already handle the case of server outages. If we choose to ignore the other scenarios, then we only need to make sure that rows in the metrics_index table get purged. We can accomplish this easily by setting TTLs.
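To illustrate the TTL idea, here is a minimal sketch using the DataStax Java driver (2.x API). The keyspace and metrics_index columns are illustrative assumptions rather than the real schema, and the 25-hour TTL is an arbitrary example value:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.util.Date;

public class MetricsIndexTtlSketch {

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect("rhq");
            // USING TTL makes Cassandra expire the row on its own, so
            // index entries left behind by missed aggregations cannot
            // pile up. 90000 s = 25 h, long enough for the hourly job
            // to read the row before it expires (an assumed value).
            session.execute(
                "INSERT INTO metrics_index (bucket, time, schedule_id) " +
                "VALUES ('one_hour', ?, ?) USING TTL 90000",
                new Date(), 10001);
        } finally {
            cluster.close();
        }
    }
}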

I like this. I would rather data get removed over time than not. In my
case, I'm really not sure how much old data is hanging around due to
the problem above, or how to get rid of it.
_______________________________________________
rhq-devel mailing list
rhq-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/rhq-devel