On 01.02.2013 at 15:04, Jay Shaughnessy wrote:
I think I like option 3. Utilize the data we already pull. As for scalability, all of
the derived values we generate will likely need to be chunked, so that is work to be done
anyway. So for me this option requires redesign that is needed anyway, avoids the cons of
1 and 2, is not as radically different as option 4, and seems immune to server outages and
the like.
I like 3 in this context too.
BUT: the current OOB approach is rather limited in the sense that the baseline computation
is suboptimal, and in a later step we / the user will want to deploy other algorithms, so we
need to make sure that this will still work.
Right now, I don't see any limitations, though.
For the future I could even see doing the OOB calculation online, in the sense that we are
generating the data needed for an OOB on each incoming metric anyway. So keep the min/max
values for a schedule in memory and extend them when a value comes in that is higher/lower
-- which is already done in alerting. Here we would just additionally compute the OOB factor
and store the new value back if it was higher than before.
Keeping two doubles per schedule in memory should not be that bad -- and could save us a
ton of work later against a (remote) datastore.
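Roughly what I have in mind -- just a sketch, with made-up class/method names and an
illustrative factor formula rather than the real MeasurementOOBManagerBean logic:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: track the running min/max per schedule in memory and compute
    // an OOB factor whenever an incoming value falls outside the baseline band.
    // Names and the factor formula are illustrative, not the real RHQ code.
    public class OnlineOobSketch {

        private static class MinMax {
            double min = Double.MAX_VALUE;
            double max = -Double.MAX_VALUE;
        }

        private final Map<Integer, MinMax> extremes = new HashMap<Integer, MinMax>();

        // called for every incoming raw metric, e.g. from the alerting path
        public synchronized void onMetric(int scheduleId, double value,
                                          double baselineLow, double baselineHigh) {
            MinMax mm = extremes.get(scheduleId);
            if (mm == null) {
                mm = new MinMax();
                extremes.put(scheduleId, mm);
            }
            mm.min = Math.min(mm.min, value);
            mm.max = Math.max(mm.max, value);

            double band = baselineHigh - baselineLow;
            if (band <= 0) {
                return;
            }
            // illustrative factor: distance outside the baseline band, relative to its width
            double diff = Math.max(baselineLow - value, value - baselineHigh);
            if (diff > 0) {
                int factor = (int) Math.round(diff / band * 100);
                storeIfHigherThanBefore(scheduleId, factor);
            }
        }

        private void storeIfHigherThanBefore(int scheduleId, int factor) {
            // write the factor back only if it exceeds the previously stored one
        }
    }

The two doubles per schedule are the only extra state we would keep; everything else already
happens on the incoming-metric path.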
Heiko
On 1/24/2013 10:03 PM, John Sanda wrote:
> The current implementation for calculating OOBs involves a complex query that reads the
> 1hr metric data table to get data from the last hour. Since the metric data tables,
> including the 1hr table, are being ported to Cassandra, the implementation for calculating
> OOBs necessarily has to change. Even with CQL, querying in Cassandra is significantly
> different from SQL; first and foremost, there are no joins. Stefan and I have been
> reviewing some different design options and wanted to solicit feedback.
>
> * Option 1 - Leverage existing indexes we already put in place to get 1hr data
> We already have some indexes in place in the Cassandra design that we could leverage to
> get the 1hr data.
>
> pros:
> Does not require any additional schema changes and avoids the overhead of updating and
> maintaining an additional index. Minimizes changes to the code base for calculating OOBs
> as well as for calculating baselines and aggregates.
>
> cons:
> Fetching the 1hr data will involve multiple queries to Cassandra. This is suboptimal and
> could become a performance bottleneck as the number of schedules with 1hr data for the
> previous hour increases.
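> To make that cost concrete, the read path would look roughly like the sketch below. The
> table and column names are placeholders for whatever the schema ends up being, and the
> DataStax Java driver is used purely for illustration:
>
>     import com.datastax.driver.core.PreparedStatement;
>     import com.datastax.driver.core.Row;
>     import com.datastax.driver.core.Session;
>
>     import java.util.Date;
>     import java.util.List;
>
>     // Sketch of the option 1 read path: one query per schedule against the
>     // existing 1hr table. All names are placeholders.
>     public class Option1ReadSketch {
>
>         public void loadPreviousHour(Session session, List<Integer> scheduleIds, Date previousHour) {
>             PreparedStatement select = session.prepare(
>                 "SELECT schedule_id, min, max, avg FROM one_hour_metrics " +
>                 "WHERE schedule_id = ? AND time = ?");
>             for (Integer scheduleId : scheduleIds) {
>                 // one round trip per schedule -- this is the cost that grows with inventory size
>                 for (Row row : session.execute(select.bind(scheduleId, previousHour))) {
>                     // feed row.getDouble("min"), row.getDouble("max") and row.getDouble("avg")
>                     // into the OOB calculation for this schedule
>                 }
>             }
>         }
>     }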
>
>
> * Option 2 - Put a new index in place
> We could implement a new index that optimizes querying for 1hr data from the previous
> hour.
>
> pros:
> The index allows us to load all of the data much more efficiently, with a single query.
> Additional queries would only be necessary for paging the data, but with row caching
> enabled, reads after the initial one will come directly from memory, making them very
> fast. Fewer code changes are required to support this than for the later options.
>
> cons:
> The index would be implemented as a custom index, which means another column family/table
> to maintain. Whenever we insert new data into the 1hr table, we also have to update the
> index. The index will take up additional disk space and will divert CPU cycles away from
> other work Cassandra is doing. Querying will be substantially faster than with option 1,
> but loses out to options 3 and 4.
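> The index itself could be as simple as the sketch below. Again, table and column names are
> placeholders and the DataStax Java driver is used just for illustration:
>
>     import com.datastax.driver.core.PreparedStatement;
>     import com.datastax.driver.core.ResultSet;
>     import com.datastax.driver.core.Session;
>
>     import java.util.Date;
>
>     // Sketch of option 2: an index table keyed by the hour bucket, so that all
>     // schedules with 1hr data for a given hour can be read back with one query.
>     public class Option2IndexSketch {
>
>         public void createIndexTable(Session session) {
>             session.execute("CREATE TABLE one_hour_metrics_index (" +
>                 " time_slice timestamp," +
>                 " schedule_id int," +
>                 " PRIMARY KEY (time_slice, schedule_id))");
>         }
>
>         // every write of a 1hr aggregate also has to update the index -- the extra cost
>         public void indexOneHourRow(Session session, Date timeSlice, int scheduleId) {
>             PreparedStatement insert = session.prepare(
>                 "INSERT INTO one_hour_metrics_index (time_slice, schedule_id) VALUES (?, ?)");
>             session.execute(insert.bind(timeSlice, scheduleId));
>         }
>
>         // reading the previous hour becomes a single, pageable query
>         public ResultSet schedulesForHour(Session session, Date previousHour) {
>             PreparedStatement select = session.prepare(
>                 "SELECT schedule_id FROM one_hour_metrics_index WHERE time_slice = ?");
>             return session.execute(select.bind(previousHour));
>         }
>     }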
>
> * Option 3 - Altogether avoid querying for 1hr data
> OOBs are calculated when the data purge job runs, after aggregates and baselines have been
> calculated. As Stefan astutely pointed out, we already have in memory the 1hr data that is
> needed for calculating the OOBs.
>
> pros:
> Avoids the query/index overhead of options 1 and 2.
>
> cons:
> Will require a good deal of implementation change. The code that currently generates
> aggregates basically does it in one big batch operation, and the same holds for the
> Cassandra implementation. We need to have that code return the generated 1hr aggregates so
> that they can be made available to MeasurementOOBManagerBean. That is simple enough;
> however, the simple approach is not a scalable one. As the number of schedules increases,
> so does the number of 1hr aggregates that we are holding in memory. A safer solution is to
> do it in chunks: first generate aggregates for the first N schedules and pass those
> results on to MeasurementOOBManagerBean, then repeat for the next N schedules, and so on
> (see the sketch below). This will involve a fair amount of change, and like options 1 and
> 2, all of the work (aggregation, baselines, OOBs) is still done serially in a single
> thread, unlike option 4.
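> A minimal sketch of that chunked flow, with placeholder interfaces standing in for the
> aggregation code and MeasurementOOBManagerBean:
>
>     import java.util.List;
>
>     // Sketch of the chunked approach for option 3: generate 1hr aggregates for N
>     // schedules at a time and hand each chunk to the OOB calculation before moving
>     // on, so that all aggregates are never held in memory at once. The interfaces
>     // and the chunk size are placeholders, not the actual APIs.
>     public class ChunkedAggregationSketch {
>
>         interface Aggregator {
>             // generates and persists the 1hr aggregates, returning them for reuse
>             List<double[]> generateOneHourAggregates(List<Integer> scheduleIds);
>         }
>
>         interface OobCalculator {
>             void computeOobs(List<double[]> oneHourAggregates);
>         }
>
>         private static final int CHUNK_SIZE = 500; // tune against heap size and throughput
>
>         public void run(List<Integer> scheduleIds, Aggregator aggregator, OobCalculator oobCalculator) {
>             for (int start = 0; start < scheduleIds.size(); start += CHUNK_SIZE) {
>                 int end = Math.min(start + CHUNK_SIZE, scheduleIds.size());
>                 List<double[]> oneHourData = aggregator.generateOneHourAggregates(scheduleIds.subList(start, end));
>                 oobCalculator.computeOobs(oneHourData); // same in-memory data, no extra query
>             }
>         }
>     }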
>
> * Option 4 - Altogether avoid querying for 1hr data and do the calculations concurrently
> The primary difference between this and option 3 is that this one would be implemented
> with message passing. Since we currently cannot use CDI due to portal-war, that means JMS.
> Once portal-war is gone, it would be worth considering CDI events with EJB async methods
> as a more lightweight approach.
>
> pros:
> Avoids the query/index overhead of options 1 and 2. Components will be more granular and
> very loosely coupled, making it easier to write unit tests; it is difficult to write tests
> for some of the existing metrics code, in part because automated tests were not written
> alongside that code. This approach also provides much better throughput than the other
> options as well as the existing implementation. To illustrate, suppose the container
> maintains a pool of 10 threads to run MDBs. If there is enough work to do during a given
> run of the data purge job, we can easily pipeline it across those threads, resulting in
> higher throughput that will help us keep up with the larger inventories that produce lots
> of metrics and provided the impetus to migrate our metrics storage to Cassandra in the
> first place. Lastly, the JMS solution should scale very nicely for users who run multiple
> RHQ servers (see the MDB sketch below).
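> The consumer side could be a plain MDB along these lines; the queue name, the message
> payload, and the delegated call are all placeholders:
>
>     import javax.ejb.ActivationConfigProperty;
>     import javax.ejb.MessageDriven;
>
>     import javax.jms.JMSException;
>     import javax.jms.Message;
>     import javax.jms.MessageListener;
>     import javax.jms.ObjectMessage;
>
>     // Sketch of the option 4 consumer: the data purge job publishes one message per
>     // chunk of 1hr aggregates (or schedule ids), and the container runs a pool of
>     // these MDBs concurrently to compute OOBs for each chunk.
>     @MessageDriven(activationConfig = {
>         @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
>         @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/OobCalculationQueue")
>     })
>     public class OobCalculationMDB implements MessageListener {
>
>         public void onMessage(Message message) {
>             try {
>                 // payload: the chunk produced by the purge job for this run
>                 Object chunk = ((ObjectMessage) message).getObject();
>                 // delegate to MeasurementOOBManagerBean (method name is a placeholder):
>                 // oobManager.computeOOBsForChunk(chunk);
>             } catch (JMSException e) {
>                 throw new RuntimeException(e);
>             }
>         }
>     }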
>
> cons:
> Of all the options, this one involves the most implementation change. I am not up to speed
> on the pros/cons of JMS in AS 7, but that is something we would definitely have to
> consider. I am also not sure what issues, if any, there are with JMS and Arquillian. If
> JMS functionality is not well supported with Arquillian, then automated integration
> testing will be more challenging. If that turns out to be the case, the CDI events + async
> EJB approach might be more favorable.
>
>
> - John