Not sure about Drools/CEP. Your assumption about CDI events is correct. It would all be in-process. Infinispan might be a good alternative in that we kind of get the best of both worlds: it is easily distributed like JMS, and like CDI it is more lightweight and easy to test outside the container.
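
To make that concrete, a minimal sketch of what the Infinispan event model could look like here (the listener, the key/value types, and the OOB hook are hypothetical, not existing code):

    import org.infinispan.notifications.Listener;
    import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
    import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

    // Hypothetical listener that reacts to 1hr aggregates being put into a
    // replicated/distributed cache by any RHQ server in the cluster.
    @Listener
    public class OneHourDataListener {

        @CacheEntryCreated
        public void onOneHourData(CacheEntryCreatedEvent<Integer, Object> event) {
            if (!event.isPre()) { // only act after the entry is actually created
                // kick off the OOB calculation for the schedule in event.getKey()
            }
        }
    }

    // registered once at startup, e.g.: cache.addListener(new OneHourDataListener());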
On Jan 25, 2013, at 9:59 AM, Alan Santos <asantos@redhat.com> wrote:
John,
imo 3 or 4 looks like the better medium- to long-term solution. If option 1 or 2 proves to be a performance bottleneck, then the fix requires the added complexity of refactoring the existing database as well as changing the code.
If the OOB calculation potentially changes in the future, e.g. with the use of Drools/CEP, do any of these choices affect that in a positive or negative manner?
also, fwiw JMS seems like a heavyweight solution. I'm not familiar with CDI events, but I assume they are not propagated across servers. You could also consider an Infinispan cache and its event model.
-alan
On Jan 25, 2013, at 8:16 AM, John Sanda <jsanda@redhat.com> wrote:
> Regardless of what we wind up doing, we are trying to keep the algorithms the same. In the current implementation all of the work is done serially, as it would continue to be with options 1, 2, and 3. In terms of scalability, options 3 and 4 are probably the winners as they completely avoid any additional reads/writes. The tradeoff with them is that they likely involve the most implementation changes.
>
>
> - John
>
> On Jan 25, 2013, at 3:39 AM, Thomas Segismont <tsegismo@redhat.com> wrote:
>
>> Hi John,
>>
>> Option 2 sounds like the best risk/performance compromise:
>> * it allows us to keep the current computation algorithm
>> * it's not much more work than option 1
>> * I'm not sure the performance penalty of feeding another column family will be huge
>> * option 3 is riskier and will probably not scale
>>
>> Option 4 is interesting, but wasn't there a reason for the OOB calculation job being decoupled from the data purge job in the first place?
>>
>> Cheers
>> Thomas
>>
>> On 25/01/2013 04:03, John Sanda wrote:
>>> The current implementation for calculating OOBs involves a complex query that reads the 1hr metric data table to get data from the last hour. Since the metric data tables, including the 1hr table, are being ported to Cassandra, the implementation for calculating OOBs necessarily has to change. Even with CQL, querying in Cassandra is significantly different from SQL. First and foremost, there are no joins. Stefan and I have been reviewing some different design options and wanted to solicit feedback.
>>>
>>> * Option 1 - Leverage the indexes we already have in place to get 1hr data
>>> We already have some indexes in place in the Cassandra design that we could leverage to get the 1hr data.
>>>
>>> pros:
>>> Does not require any additional schema changes and avoids the overhead of updating and maintaining an additional index. Minimizes changes to the code base for calculating OOBs as well as for calculating baselines and aggregates.
>>>
>>> cons:
>>> Fetching the 1hr data will involve multiple queries to Cassandra. This is suboptimal and could become a performance bottleneck as the number of schedules that have 1hr data for the previous hour increases.
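>>>
>>> To make the con concrete, a hedged sketch of the read pattern option 1 implies, written against something like the DataStax Java driver (the table/column names, the session, and the helper methods are illustrative, not our actual schema or code):
>>>
>>>     // one prepared statement, executed once per schedule
>>>     PreparedStatement select1HrData = session.prepare(
>>>         "SELECT schedule_id, time, avg, min, max FROM one_hour_metrics " +
>>>         "WHERE schedule_id = ? AND time >= ? AND time < ?");
>>>
>>>     List<AggregateNumericMetric> oneHourData = new ArrayList<AggregateNumericMetric>();
>>>     for (int scheduleId : scheduleIds) {
>>>         // one round trip to Cassandra per schedule; this is the part that
>>>         // degrades as the number of schedules with 1hr data grows
>>>         ResultSet rows = session.execute(select1HrData.bind(scheduleId, startOfHour, endOfHour));
>>>         for (Row row : rows) {
>>>             oneHourData.add(toAggregateMetric(row));
>>>         }
>>>     }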
>>>
>>>
>>> * Option 2 - Put a new index in place
>>> We could implement a new index that optimizes querying for 1hr data from the previous hour.
>>>
>>> pros:
>>> The index will allow us to load all of the data much more efficiently with a single query. Additional queries would only be necessary for paging the data, but with row caching enabled, reads after the initial one will come directly from memory, making them very fast. This also requires fewer code changes than the latter options.
>>>
>>> cons:
>>> The index would be implemented as a custom index, which means another column family/table to maintain. This means that when we insert new data into the 1hr table, we also have to update the index. The index will take up additional disk space and will divert CPU cycles away from other work Cassandra could be doing. The querying will be substantially faster than option 1, but loses out to options 3 and 4.
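>>>
>>> A hedged sketch of what the index and its write path could look like (the schema and the prepared statement names are hypothetical):
>>>
>>>     // CREATE TABLE one_hour_metrics_index (
>>>     //     time_slice timestamp,
>>>     //     schedule_id int,
>>>     //     PRIMARY KEY (time_slice, schedule_id)
>>>     // );
>>>
>>>     // every 1hr insert now carries a second write for the index entry
>>>     session.execute(insertOneHourData.bind(scheduleId, time, avg, min, max));
>>>     session.execute(insertIndexEntry.bind(hourSlice, scheduleId));
>>>
>>>     // at OOB time, a single query finds every schedule with 1hr data
>>>     // for the previous hour
>>>     ResultSet schedules = session.execute(selectIndexEntries.bind(previousHourSlice));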
>>>
>>> * Option 3 - Avoid querying for 1hr data altogether
>>> OOBs are calculated when the data purge job runs. Aggregates and baselines are calculated before the OOBs. As Stefan astutely pointed out, we already have in memory the 1hr data that is needed for calculating the OOBs.
>>>
>>> pros:
>>> Avoids the query/index overhead of options 1 and 2.
>>>
>>> cons:
>>> Will require a good deal of implementation change. The code that currently generates aggregates basically does it in one big batch operation. The same holds for the Cassandra implementation. We need to have that code return the generated 1hr aggregates so that they can be made available to MeasurementOOBManagerBean. That is simple enough; however, the simple approach is not a scalable one. As the number of schedules increases, so does the number of 1hr aggregates that we are holding onto in memory. A safer solution is to do it in chunks, as in the sketch below: first, we generate aggregates for the first N schedules and pass those results on to MeasurementOOBManagerBean, then repeat for the next N schedules, and so on. This will involve a fair amount of change, and as with options 1 and 2, all of the work (aggregation, baselines, OOBs) is still done serially in a single thread, unlike option 4.
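>>>
>>> A hedged sketch of the chunked approach (the chunk size, the loader/generator methods, and oobManager, standing in for MeasurementOOBManagerBean, are hypothetical):
>>>
>>>     int chunkSize = 500; // N, would need tuning
>>>     List<Integer> scheduleIds = loadScheduleIdsWithData();
>>>     for (int i = 0; i < scheduleIds.size(); i += chunkSize) {
>>>         List<Integer> chunk =
>>>             scheduleIds.subList(i, Math.min(i + chunkSize, scheduleIds.size()));
>>>         // generate (and persist) the 1hr aggregates for this chunk
>>>         List<AggregateNumericMetric> oneHourAggregates = generateAggregates(chunk);
>>>         // hand the same in-memory results to the OOB calculation, so we
>>>         // never hold more than N schedules' worth of aggregates at once
>>>         oobManager.calculateOOBs(oneHourAggregates);
>>>     }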
>>>
>>> * Option 4 - Avoid querying for 1hr data altogether and do the calculations concurrently
>>> The primary difference between this and option 3 is that this one would be implemented with message passing. Since we currently cannot use CDI due to portal-war, that means JMS. Once portal-war is gone, it would be worth considering CDI events with EJB async methods as a more lightweight approach.
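>>>
>>> A hedged sketch of the consuming side as an MDB (the queue name, the payload type, and the calculateOOBs delegate are hypothetical):
>>>
>>>     import java.util.List;
>>>     import javax.ejb.ActivationConfigProperty;
>>>     import javax.ejb.MessageDriven;
>>>     import javax.jms.JMSException;
>>>     import javax.jms.Message;
>>>     import javax.jms.MessageListener;
>>>     import javax.jms.ObjectMessage;
>>>
>>>     @MessageDriven(activationConfig = {
>>>         @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
>>>         @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/OOBCalculation")
>>>     })
>>>     public class OOBCalculationMDB implements MessageListener {
>>>
>>>         public void onMessage(Message message) {
>>>             try {
>>>                 // each message carries one chunk of 1hr aggregates, so the
>>>                 // container's MDB thread pool processes chunks concurrently
>>>                 List<AggregateNumericMetric> aggregates =
>>>                     (List<AggregateNumericMetric>) ((ObjectMessage) message).getObject();
>>>                 calculateOOBs(aggregates);
>>>             } catch (JMSException e) {
>>>                 throw new RuntimeException(e);
>>>             }
>>>         }
>>>     }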
>>>
>>> pros:
>>> Avoids the query/index overhead of options 1 and 2. Components will be more granular and very loosely coupled, making it easier to write unit tests. It is difficult to write tests for some of the existing metrics code, in part because automated tests were not written alongside that code. This approach provides much better throughput than the other options as well as the existing implementation. To illustrate, suppose the container maintains a pool of 10 threads to run MDBs. If there is enough work to do during a given run of the data purge job, we can easily pipeline it to utilize those threads, resulting in a higher level of throughput that will help us keep up with those larger inventories that produce lots of metrics and provided the impetus to migrate our metrics storage to Cassandra in the first place. Lastly, the JMS solution should scale very nicely for those users who run multiple RHQ servers.
>>>
>>> cons:
>>> Of all the options, this involves the most implementation change. I am not up to speed on the pros/cons of JMS in AS 7, but that is something we would definitely have to consider. I am also not sure what, if any, issues there are with JMS and Arquillian. If JMS functionality is not well supported with Arquillian, then automated integration testing will be more challenging. If that turns out to be the case, the CDI events + async EJB approach might be more favorable.
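>>>
>>> For reference, a hedged sketch of that alternative (the event type and bean are hypothetical): the purge job fires a CDI event per chunk, and an async EJB observer method picks it up:
>>>
>>>     import javax.ejb.Asynchronous;
>>>     import javax.ejb.Stateless;
>>>     import javax.enterprise.event.Observes;
>>>
>>>     @Stateless
>>>     public class OOBCalculatorBean {
>>>
>>>         // observing on an @Asynchronous EJB business method means the
>>>         // purge job fires the event and moves on without blocking
>>>         @Asynchronous
>>>         public void onAggregatesReady(@Observes OneHourAggregatesEvent event) {
>>>             calculateOOBs(event.getAggregates());
>>>         }
>>>     }
>>>
>>>     // producer side, inside the aggregation code:
>>>     //   @Inject Event<OneHourAggregatesEvent> aggregatesReady;
>>>     //   aggregatesReady.fire(new OneHourAggregatesEvent(oneHourAggregates));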
>>>
>>>
>>> - John
>>
>
_______________________________________________
rhq-devel mailing list
rhq-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/rhq-devel