missed metrics aggregations

Larry O'Leary loleary at redhat.com
Thu Feb 27 05:37:55 UTC 2014


On Wed, 2014-02-26 at 12:55 -0500, John Sanda wrote:
> Solutions:
> * Ignore missed aggregations
> We already handle the case of server outages. If we choose to ignore
> the other scenarios, then we only need to make sure that rows in the
> metrics_index table get purged. We can accomplish this easily by
> setting TTLs.
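
For concreteness, I assume the TTL idea amounts to something like the
sketch below (using the DataStax Java driver; the column names and the
8-day TTL value are my guesses, not the actual RHQ schema):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class MetricsIndexTtl {
        public static void main(String[] args) {
            // Contact point and keyspace are placeholders.
            Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("rhq");

            // Write the index row with a TTL somewhat longer than the raw
            // retention window so stale rows expire on their own instead of
            // needing an explicit purge pass. Column names and the 8-day
            // TTL are guesses, not the real schema.
            session.execute(
                "INSERT INTO metrics_index (bucket, time, schedule_id) " +
                "VALUES ('one_hour', '2014-02-26 12:00:00', 10001) " +
                "USING TTL 691200");

            cluster.close();
        }
    }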

I am not sure I like the idea of missing/incomplete data, especially if
it can result in incorrect aggregates.

> * Retry missed/failed aggregations
> There are a couple different ways we could go about doing this. I will
> save the details for a separate discussion as it can get rather involved.
> Suffice it to say, we can implement functionality to handle the
> scenarios of late measurement reports and failed runs. This would
> obviously be more complex than ignoring missed/failed aggregations but
> arguably more robust.

This seems like the ideal solution. I am not sure why aggregation has to
be limited to a single one-hour chunk. Ideally we expect it to run every
hour, but in the event it runs late or the server was down, why can't we
just figure out what data still needs to be aggregated and start there?
Aggregation could happen in one-hour chunks, starting with the oldest
hour.
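
Roughly what I have in mind, as a sketch with made-up names for the
state tracking and the aggregator itself (the real RHQ classes will
differ):

    import java.util.concurrent.TimeUnit;

    public class CatchUpAggregation {
        static final long HOUR = TimeUnit.HOURS.toMillis(1);

        // Hypothetical collaborators -- names are mine, not RHQ's.
        interface AggregationState {
            long getLastAggregatedHour();           // start of last finished hour
            void setLastAggregatedHour(long hour);  // persisted after each chunk
        }
        interface Aggregator {
            void aggregateHour(long startTime, long endTime);
        }

        static void catchUp(AggregationState state, Aggregator aggregator) {
            long currentHour = (System.currentTimeMillis() / HOUR) * HOUR;
            long next = state.getLastAggregatedHour() + HOUR;

            // Work forward from the oldest missed hour in one-hour chunks
            // until we reach the hour that is still being collected.
            while (next < currentHour) {
                aggregator.aggregateHour(next, next + HOUR);
                state.setLastAggregatedHour(next);
                next += HOUR;
            }
        }
    }

The key point is that the server persists the last hour it finished, so
after a late run or an outage the loop simply resumes from there.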


-- 
Larry O'Leary
https://plus.google.com/+LarryOLeary



