changes for OOB calculations

John Sanda jsanda at redhat.com
Fri Jan 25 22:32:17 UTC 2013


Given that the data purge job only runs hourly, I think we have a lot of wiggle room performance-wise. We do need to consider the overall execution times for doing all the calculations (aggregations, baselines, OOBs) during a given run of the data purge job. And as Stefan pointed out earlier, with Cassandra you should be doing more writes than reads in your application code. Given that, I think we will be well served in general to avoid extra reads where possible.

- John

On Jan 25, 2013, at 5:09 PM, Charles Crouch <ccrouch at redhat.com> wrote:

> Another thing to keep in mind is the impact OOB calculations may have on other parts of the system. No one cares if they run slow or fast (within reason) as long as they complete and don't impact other processing.
> 
> The OOB implementation is detrimental if it has any negative impact on other areas, e.g. the server gets stuck at 100% CPU so it can't satisfy UI requests, or the back-end data store gets crushed and so can't service requests. I'm not saying any of these things would happen with any of the implementation options listed; I'm just trying to point out that in the grand scheme of things it's much better to have a very suboptimal OOB calculation which consumes few resources than a highly optimized algorithm that grinds the system to a halt for 30 seconds every time it runs.
> 
> ----- Original Message -----
>> The current implementation for calculating OOBs involves a complex
>> query that reads the 1hr metric data table to get data from the last
>> hour. Since the metric data tables, including the 1hr table, are
>> being ported to Cassandra, the implementation for calculating OOBs
>> necessarily has to change. Even with CQL, querying in Cassandra is
>> significantly different from SQL. First and foremost, there are no
>> joins. Stefan and I have been reviewing some different design
>> options and wanted to solicit feedback.
>> 
>> * Option 1 - Leverage existing indexes we already put in place to get
>> 1hr data
>> We already have some indexes in place in the Cassandra design that we
>> could leverage to get the 1hr data.
>> 
>> pros:
>> Does not require any additional schema changes and avoids the
>> overhead of updating and maintaining an additional index. Minimizes
>> changes to the code base for calculating OOBs as well as for
>> calculating baselines and aggregates.
>> 
>> cons:
>> Fetching the 1hr data will involve multiple queries to Cassandra. In
>> terms of performance this is suboptimal and could become a real
>> issue as the number of schedules that have 1hr data for the
>> previous hour increases.
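>> 
>> To make the read pattern concrete, here is a rough sketch of what the
>> per-schedule fetch loop might look like with option 1. The DAO and
>> metric class names below are illustrative, not actual RHQ code; the
>> point is simply that every schedule costs us a separate read.
>> 
>>     import java.util.ArrayList;
>>     import java.util.List;
>> 
>>     public class OneHourDataFetchSketch {
>> 
>>         // illustrative stand-ins for the real DAO and metric types
>>         interface MetricsDAO {
>>             AggregateNumericMetric findOneHourMetric(int scheduleId, long startTime);
>>         }
>> 
>>         static class AggregateNumericMetric {
>>             int scheduleId;
>>             double min, max, avg;
>>         }
>> 
>>         List<AggregateNumericMetric> loadPreviousHour(MetricsDAO dao,
>>             List<Integer> scheduleIds, long startOfPreviousHour) {
>>             List<AggregateNumericMetric> oneHourData =
>>                 new ArrayList<AggregateNumericMetric>();
>>             for (int scheduleId : scheduleIds) {
>>                 // one query (round trip to Cassandra) per schedule
>>                 AggregateNumericMetric metric =
>>                     dao.findOneHourMetric(scheduleId, startOfPreviousHour);
>>                 if (metric != null) {
>>                     oneHourData.add(metric);
>>                 }
>>             }
>>             return oneHourData;
>>         }
>>     }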
>> 
>> 
>> * Option 2 - Put a new index in place
>> We could implement a new index that optimizes querying for 1hr data
>> from the previous hour.
>> 
>> pros:
>> The index will allow us to load all of the data much more
>> efficiently, with a single query. Additional queries would only be
>> necessary for paging the data, but with row caching enabled, reads
>> after the initial one will come directly from memory, making them
>> very fast. Fewer code changes are required to support this than for
>> the latter options.
>> 
>> cons:
>> The index would be implemented as a custom index, which means another
>> column family/table to maintain. This means that when we insert new
>> data into the 1hr table, we also have to update the index. The index
>> will take up additional disk space and will divert CPU cycles away
>> from other work Cassandra could be doing. The querying will be
>> substantially faster than in option 1, but loses out to options 3 and 4.
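>> 
>> For illustration, a rough sketch of what the extra index write and the
>> single-partition read might look like, assuming the DataStax Java
>> driver; the table and column names are made up, not the actual schema.
>> 
>>     import com.datastax.driver.core.ResultSet;
>>     import com.datastax.driver.core.Session;
>> 
>>     public class OneHourIndexSketch {
>> 
>>         // CREATE TABLE one_hour_metrics_index (
>>         //     time timestamp,
>>         //     schedule_id int,
>>         //     min double, max double, avg double,
>>         //     PRIMARY KEY (time, schedule_id)
>>         // );
>> 
>>         void insertOneHourMetric(Session session, int scheduleId, long time,
>>             double min, double max, double avg) {
>>             // the normal 1hr insert happens elsewhere; this is the extra
>>             // index write that option 2 pays for on every insert
>>             session.execute("INSERT INTO one_hour_metrics_index " +
>>                 "(time, schedule_id, min, max, avg) VALUES (" + time + ", " +
>>                 scheduleId + ", " + min + ", " + max + ", " + avg + ")");
>>         }
>> 
>>         ResultSet loadPreviousHour(Session session, long startOfPreviousHour) {
>>             // one partition read returns the 1hr data for every schedule
>>             return session.execute("SELECT schedule_id, min, max, avg " +
>>                 "FROM one_hour_metrics_index WHERE time = " + startOfPreviousHour);
>>         }
>>     }
>> 
>> (In the real code we would of course use prepared statements; the
>> sketch is only meant to show the shape of the extra write and the
>> single read.)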
>> 
>> * Option 3 - Altogether avoid querying for 1hr data
>> OOBs are calculated when the data purge job runs. Prior to OOBs,
>> aggregates and baselines are calculated. As Stefan astutely pointed
>> out, we already have the 1hr data in memory that is needed for
>> calculating the OOBs.
>> 
>> pros:
>> Avoids the query/index overhead of options 1 and 2.
>> 
>> cons:
>> Will require a good deal of implementation change. The code that
>> currently generates aggregates basically does it in one big batch
>> operation. The same holds with the Cassandra implementation. We need
>> to have that code return the generated 1hr aggregates so that they
>> can be made available to MeasurementOOBManagerBean. That is simple
>> enough; however, the simple approach is not a scalable one. As
>> the number of schedules increases, so does the number of 1hr
>> aggregates that we are holding onto in memory. A safer solution is
>> to do it in chunks. First, we generate aggregates for the first N
>> schedules and pass those results on to MeasurementOOBManagerBean,
>> then repeat for the next N schedules, and so on. This will involve a
>> fair amount of change, and like options 1 and 2, all of the work
>> (aggregation, baselines, OOBs) is still done serially in a single
>> thread, unlike option 4.
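>> 
>> A rough sketch of the chunked approach, with illustrative interfaces
>> standing in for the aggregation code and for MeasurementOOBManagerBean;
>> the chunk size N is just a placeholder to be tuned.
>> 
>>     import java.util.List;
>> 
>>     public class ChunkedAggregationSketch {
>> 
>>         static final int CHUNK_SIZE = 500; // N, to be tuned
>> 
>>         // illustrative stand-ins for the aggregation code and the OOB bean
>>         interface Aggregator {
>>             List<AggregateNumericMetric> generateOneHourAggregates(List<Integer> scheduleIds);
>>         }
>> 
>>         interface OobCalculator {
>>             void calculateOOBs(List<AggregateNumericMetric> oneHourData);
>>         }
>> 
>>         static class AggregateNumericMetric {
>>             int scheduleId;
>>             double min, max, avg;
>>         }
>> 
>>         void run(Aggregator aggregator, OobCalculator oobCalculator,
>>             List<Integer> allScheduleIds) {
>>             for (int i = 0; i < allScheduleIds.size(); i += CHUNK_SIZE) {
>>                 int end = Math.min(i + CHUNK_SIZE, allScheduleIds.size());
>>                 // generate 1hr aggregates for the next N schedules and hand
>>                 // them straight to the OOB calculation while still in memory
>>                 List<AggregateNumericMetric> oneHourData =
>>                     aggregator.generateOneHourAggregates(allScheduleIds.subList(i, end));
>>                 oobCalculator.calculateOOBs(oneHourData);
>>             }
>>         }
>>     }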
>> 
>> * Option 4 - Altogether avoid querying for 1hr data and do
>> calculations concurrently
>> The primary difference between this and option 3 is that this one
>> would be implemented with message passing. Since we currently cannot
>> use CDI due to portal-war, that means JMS. Once portal-war is gone,
>> then it would be worth considering CDI events with EJB async methods
>> for a more lightweight approach.
>> 
>> pros:
>> Avoids the query/index overhead of options 1 and 2. Components will
>> be more granular and very loosely coupled, making it easier to write
>> unit tests. It is difficult to write tests for some of the existing
>> metrics code, in part because automated tests were not written
>> alongside that code. This approach provides much better throughput
>> than the other options as well as the existing implementation. To
>> illustrate, suppose the container maintains a pool of 10 threads to
>> run MDBs. If there is enough work to do during a given run of the
>> data purge job, we can easily pipeline it to utilize those threads,
>> resulting in a higher level of throughput that will help us keep up
>> with those larger inventories that produce lots of metrics and
>> provided the impetus to migrate our metrics storage to Cassandra in
>> the first place. Lastly, the JMS solution should scale very nicely
>> for those users who run multiple RHQ servers.
>> 
>> cons:
>> Of all the options this involves the most implementation change. I am
>> not up to speed on the pros/cons with JMS in AS 7, but that is
>> something we would definitely have to consider. I am not sure what,
>> if any, issues there are with JMS and Arquillian. If JMS
>> functionality is not well supported with Arquillian, then automated
>> integration testing will be more challenging. If this turns out to
>> be the case, the CDI events + async EJB approach might be more
>> favorable.
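>> 
>> A rough sketch of what the consuming side could look like with an MDB.
>> The queue name and payload class are made up; the idea is that the
>> aggregation step publishes each chunk of freshly computed 1hr
>> aggregates to the queue, and the container's MDB pool provides the
>> concurrency.
>> 
>>     import java.io.Serializable;
>>     import java.util.List;
>> 
>>     import javax.ejb.ActivationConfigProperty;
>>     import javax.ejb.MessageDriven;
>>     import javax.jms.JMSException;
>>     import javax.jms.Message;
>>     import javax.jms.MessageListener;
>>     import javax.jms.ObjectMessage;
>> 
>>     @MessageDriven(activationConfig = {
>>         @ActivationConfigProperty(propertyName = "destinationType",
>>             propertyValue = "javax.jms.Queue"),
>>         @ActivationConfigProperty(propertyName = "destination",
>>             propertyValue = "queue/OneHourAggregates")
>>     })
>>     public class OobCalculationMDB implements MessageListener {
>> 
>>         // illustrative payload: one chunk of 1hr aggregates from the purge job
>>         public static class OneHourChunk implements Serializable {
>>             public List<double[]> aggregates; // [scheduleId, min, max, avg]
>>         }
>> 
>>         public void onMessage(Message message) {
>>             try {
>>                 OneHourChunk chunk = (OneHourChunk) ((ObjectMessage) message).getObject();
>>                 calculateOOBs(chunk);
>>             } catch (JMSException e) {
>>                 throw new RuntimeException(e);
>>             }
>>         }
>> 
>>         private void calculateOOBs(OneHourChunk chunk) {
>>             // delegate to MeasurementOOBManagerBean (or equivalent) here
>>         }
>>     }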
>> 
>> 
>> - John
> _______________________________________________
> rhq-devel mailing list
> rhq-devel at lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/rhq-devel


