Cassandra schema management
by John Sanda
Today I started updating the version of Liquibase we build against. We had been building from a fork that contained my changes needed to move forward with Cassandra. Those changes have recently been merged upstream. RHQ itself does not have a direct dependency on them; the Liquibase extensions that add support for Cassandra are what require them. Those extensions live at https://github.com/jsanda/cassandra-liquibase-ext. Since the Liquibase changes are not yet in a released version, we have to build Liquibase from source. Once I updated the scripts to build from upstream, I was no longer able to build cassandra-liquibase-ext; I started getting a number of compiler errors. I spent a good deal of time this afternoon trying to reconcile the changes. Some classes that we were using, and even extending, have been completely removed from the Liquibase core. I posted a question about it on the Liquibase developer forum, but there has been no response yet.
Taking everything into consideration, I think it is time to cut our losses with Liquibase and pursue another option. It is unsettling at best that classes and APIs intended to be used for extending Liquibase have been completely removed, and I haven't been able to find so much as a mention of it in the git history, let alone a note on the developer forum. Aside from that, we are using the Cassandra JDBC driver in cassandra-liquibase-ext, and as best I can tell, that driver project has been abandoned; there haven't been any commits since November. If we continue down the Liquibase path, then we pretty much have to assume ownership of the JDBC driver. The driver is far from fully implemented, and it was implemented using Thrift APIs. Everywhere else we are using the DataStax driver, which is actively maintained and already has a larger community than the JDBC driver ever had. The DataStax driver is not a JDBC driver, but it makes a lot more sense to be using it. It is worth noting that the Liquibase/Cassandra JDBC driver proof of concept was done before the DataStax driver was even announced. If the DataStax driver had been available when I started this work, we might have gone down a different path from the get-go.
For the short term (the next couple of days), I am going to take out the Liquibase code and implement minimal support for installing the Cassandra schema so that we are not blocked on this any longer than we have to be. A rough sketch of what that could look like is below. Going forward, we can figure out the long-term solution.
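Here is a minimal sketch of such an installer using the DataStax Java driver. The contact point, keyspace name, and table layout are illustrative assumptions, not our actual schema:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class SchemaInstaller {
        public static void main(String[] args) {
            // Contact point is deployment-specific; a single local node is assumed.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try {
                Session session = cluster.connect();
                // Hypothetical keyspace/table, standing in for the real RHQ schema.
                session.execute(
                    "CREATE KEYSPACE rhq WITH replication = " +
                    "{'class': 'SimpleStrategy', 'replication_factor': 1}");
                session.execute(
                    "CREATE TABLE rhq.one_hour_metrics (" +
                    "schedule_id int, time timestamp, " +
                    "avg double, max double, min double, " +
                    "PRIMARY KEY (schedule_id, time))");
            } finally {
                cluster.shutdown();
            }
        }
    }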
- John
AS7 plugin and threads subsystem
by Thomas Segismont
Hi,
Recently a user asked on the forum how he could monitor "active
connections" on an AS7 web connector. Currently there is no such
property exposed in the JBoss Web subsystem, and the only way to do
it is:
1. to create a thread pool in the Threads subsystem and set this thread
pool as the "executor" in the connector configuration (a sketch of this
step follows the list)
2. to schedule measurement of the "active-count" metric of the thread
pool resource
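Here is a rough sketch of step 1 using the management API from Java. The pool name, connector name, choice of an unbounded-queue pool, and max-threads value are all assumptions for illustration:

    import java.net.InetAddress;

    import org.jboss.as.controller.client.ModelControllerClient;
    import org.jboss.dmr.ModelNode;

    public class ConnectorExecutorSetup {
        public static void main(String[] args) throws Exception {
            ModelControllerClient client = ModelControllerClient.Factory.create(
                    InetAddress.getByName("localhost"), 9999);
            try {
                // Create an unbounded-queue thread pool in the Threads subsystem.
                ModelNode addPool = new ModelNode();
                addPool.get("operation").set("add");
                addPool.get("address").add("subsystem", "threads")
                        .add("unbounded-queue-thread-pool", "http-pool");
                addPool.get("max-threads").set(100);
                client.execute(addPool);

                // Point the web connector at the pool via its "executor" attribute.
                ModelNode setExecutor = new ModelNode();
                setExecutor.get("operation").set("write-attribute");
                setExecutor.get("address").add("subsystem", "web")
                        .add("connector", "http");
                setExecutor.get("name").set("executor");
                setExecutor.get("value").set("http-pool");
                client.execute(setExecutor);
            } finally {
                client.close();
            }
        }
    }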
As I was looking for a solution I found that:
* there are actually 6 thread pool types available in AS7
* RHQ lets you create only 4 of them
* among these 4 types, only two expose an "active-count" attribute
* the AS7 plugin defines a single set of metrics and resource
configuration properties regardless of the underlying thread pool type.
You will find attached:
* a spreadsheet with all AS7 thread pool types, their management
attributes and the parameters you can supply to create them
* a text file I got from the JBoss CLI and used to create the
spreadsheet (modified to keep only the part we're interested in here).
Not listed in the spreadsheet is a new attribute for thread pools with
queues (see https://issues.jboss.org/browse/AS7-5448). This one may be
monitored when the active-count attribute is not available.
I think we should let users create all types of thread pools and provide
dedicated metrics and resource configuration properties for each of them.
At some point in the future, the thread subsystem may be removed (see
http://lists.jboss.org/pipermail/jboss-as7-dev/2013-February/007522.html).
We should follow this closely, as JBoss Web (and other subsystems)
will then define their own attributes and operations.
What's your opinion?
Thanks and regards,
Thomas
Recent commit requires reinstall of agent in RHQ-4.6.0-SNAPSHOT
by Lukas Krejci
Hi,
This is of concern to you only if you're using a 4.6.0-SNAPSHOT.
If you built your agent from code prior to commit
2910d155e509b6ebccdc9483a70b2910bdf0133c (which is a fix for
http://bugzilla.redhat.com/show_bug.cgi?id=907558), you need to rebuild at
least modules/core/plugin-api and modules/plugins/jboss-as-7 and put
them into your agent's lib and plugins directories, respectively.
Alternatively, you can just rebuild everything and reinstall your agent.
Without that, you may see
java.lang.NoSuchMethodError: org.rhq.modules.plugins.jbossas7.AS7CommandLine.isArgumentsParsed()
in your agent.log, coming from the AS7 plugin.
Cheers,
Lukas
Workaround Sigar getProcState bug
by Thomas Segismont
Hi all,
After reading the Sigar C code, I eventually found why consecutive
calls to getProcState could return different values. You'll find the
details on the VMware forum[1]. In short, Sigar caches return values
for 2 seconds, but the cache is not purged if the process has died.
I am now pretty sure that we don't need to observe a two-second
interval between calls to refresh. We only need to serialize calls to
refresh and stop calling Sigar once the process has been reported
down. A bare-bones sketch of that guard follows.
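This is only an illustration of the idea; the class shape and field names are mine, not the actual changeset:

    import org.hyperic.sigar.ProcState;
    import org.hyperic.sigar.Sigar;
    import org.hyperic.sigar.SigarException;

    public class ProcessStateCache {
        private final Sigar sigar;
        private final long pid;
        private boolean down; // set once the process has been reported dead

        public ProcessStateCache(Sigar sigar, long pid) {
            this.sigar = sigar;
            this.pid = pid;
        }

        // Serializing refreshes prevents concurrent callers from interleaving
        // Sigar calls and seeing inconsistent results from the 2-second cache.
        public synchronized void refresh() {
            if (down) {
                // The process is gone; Sigar's cache may hand back stale data,
                // so never ask it about this pid again.
                return;
            }
            try {
                ProcState state = sigar.getProcState(pid);
                // ... record/report the state ...
            } catch (SigarException e) {
                down = true;
            }
        }
    }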
I have made a changeset[2] for this in a topic branch. The Apache
integration tests pass (I had initially found problems running these),
and the rhq-probe job is currently running.
Please reply if you have any comments.
Regards,
Thomas
[1] http://communities.vmware.com/message/2187972#2187972
[2]
http://git.fedorahosted.org/cgit/rhq/rhq.git/commit/?h=tsegismont%2FProce...
Changes for OOB calculations
by John Sanda
The current implementation for calculating OOBs involves a complex query that reads the 1hr metric data table to get data from the last hour. Since the metric data tables, including the 1hr table, are being ported to Cassandra, the implementation for calculating OOBs necessarily has to change. Even with CQL, querying in Cassandra is significantly different from SQL. First and foremost, there are no joins. Stefan and I have been reviewing some different design options and wanted to solicit feedback.
* Option 1 - Leverage existing indexes we already put in place to get 1hr data
We already have some indexes in place in the Cassandra design that we could leverage to get the 1hr data.
pros:
Does not require any additional schema changes and avoids the overhead of updating and maintaining an additional index. Minimizes changes to the code base for calculating OOBs as well as for calculating baselines and aggregates.
cons:
Fetching the 1hr data will involve multiple queries to Cassandra (see the sketch below). In terms of performance this is suboptimal, and it could become an issue as the number of schedules that have 1hr data for the previous hour increases.
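To make the multi-query cost concrete, here is a rough sketch. The one-query-per-schedule loop, table name, and column names are assumptions for illustration, not our actual schema:

    import java.util.Date;
    import java.util.List;

    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;

    public class OneHourDataLoader {
        private final Session session;

        public OneHourDataLoader(Session session) {
            this.session = session;
        }

        // One round trip per schedule: N schedules means N queries.
        public void loadPreviousHour(List<Integer> scheduleIds, Date start, Date end) {
            for (int scheduleId : scheduleIds) {
                ResultSet rows = session.execute(
                    "SELECT time, avg, max, min FROM one_hour_metrics " +
                    "WHERE schedule_id = " + scheduleId +
                    " AND time >= " + start.getTime() +
                    " AND time < " + end.getTime());
                // ... feed rows into the OOB calculation ...
            }
        }
    }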
* Option 2 - Put a new index in place
We could implement a new index that optimizes querying for 1hr data from the previous hour.
pros:
The index will allow us to load all of the data much more efficiently with a single query. Additional queries would only be necessary for paging the data, but with row caching enabled, reads after the initial one will come directly from memory, making them very fast. Fewer code changes are required to support this than for the latter options.
cons:
The index would be implemented as a custom index, which means another column family/table to maintain (see the sketch below). This means that when we insert new data into the 1hr table, we also have to update the index. The index will take up additional disk space and will divert CPU cycles away from other work Cassandra could be doing. The querying will be substantially faster than option 1, but loses out to options 3 and 4.
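A rough sketch of what such an index could look like. The table name, layout, and keying rows by hour bucket are assumptions for illustration:

    import java.util.Date;

    import com.datastax.driver.core.Session;

    public class OneHourIndexWriter {
        private final Session session;

        public OneHourIndexWriter(Session session) {
            this.session = session;
        }

        // The index is just another table keyed by the hour bucket, so every
        // schedule that received 1hr data in a given hour lives in one row,
        // loadable with a single query.
        public void createIndex() {
            session.execute(
                "CREATE TABLE one_hour_index (hour timestamp, schedule_id int, " +
                "PRIMARY KEY (hour, schedule_id))");
        }

        // Every write to the 1hr table must be paired with an index update.
        public void indexSchedule(Date hourBucket, int scheduleId) {
            session.execute(
                "INSERT INTO one_hour_index (hour, schedule_id) VALUES (" +
                hourBucket.getTime() + ", " + scheduleId + ")");
        }
    }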
* Option 3 - Altogether avoid querying for 1hr data
OOBs are calculated when the data purge job runs; before that happens, aggregates and baselines are calculated. As Stefan astutely pointed out, we already have in memory the 1hr data that is needed for calculating the OOBs.
pros:
Avoids the query/index overhead of options 1 and 2.
cons:
Will require a good deal of implementation change. The code that currently generates aggregates basically does it in one big batch operation, and the same holds for the Cassandra implementation. We need to have that code return the generated 1hr aggregates so that they can be made available to MeasurementOOBManagerBean. That is simple enough; however, the simple approach is not a scalable one. As the number of schedules increases, so does the number of 1hr aggregates that we are holding in memory. A safer solution is to do it in chunks: first generate aggregates for the first N schedules and pass those results on to MeasurementOOBManagerBean, then repeat for the next N schedules, and so on (see the sketch below). This will involve a fair amount of change, and like options 1 and 2, all of the work (aggregation, baselines, OOBs) is still done serially in a single thread, unlike option 4.
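A bare-bones sketch of the chunking idea. The types, method names, and chunk size are placeholders, not existing RHQ code:

    import java.util.List;

    public class ChunkedAggregationJob {
        private static final int CHUNK_SIZE = 500; // placeholder value

        // Placeholder types standing in for the real aggregation and OOB code.
        interface AggregationDao {
            List<Integer> nextScheduleBatch(int offset, int limit);
            List<AggregateRecord> generate1HourAggregates(List<Integer> ids);
        }
        interface OobCalculator {
            void calculateOOBs(List<AggregateRecord> oneHourData);
        }
        static class AggregateRecord { /* scheduleId, avg, max, min, ... */ }

        private final AggregationDao dao;
        private final OobCalculator oobCalculator;

        public ChunkedAggregationJob(AggregationDao dao, OobCalculator oob) {
            this.dao = dao;
            this.oobCalculator = oob;
        }

        public void run() {
            int offset = 0;
            List<Integer> batch;
            // Process N schedules at a time so that memory usage stays bounded
            // as the number of schedules grows.
            while (!(batch = dao.nextScheduleBatch(offset, CHUNK_SIZE)).isEmpty()) {
                List<AggregateRecord> oneHourData = dao.generate1HourAggregates(batch);
                oobCalculator.calculateOOBs(oneHourData);
                offset += CHUNK_SIZE;
            }
        }
    }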
* Option 4 - Altogether avoid querying for 1hr data and do calculations concurrently
The primary difference between this and option 3 is that this one would be implemented with message passing. Since we currently cannot use CDI due to portal-war, that means JMS. Once portal-war is gone, it would be worth considering CDI events with EJB async methods for a more lightweight approach.
pros:
Avoids the query/index overhead of options 1 and 2. Components will be more granular and very loosely coupled, making it easier to write unit tests. It is difficult to write tests for some of the existing metrics code, in part because automated tests were not written alongside that code. This approach provides much better throughput than the other options as well as the existing implementation. To illustrate, suppose the container maintains a pool of 10 threads to run MDBs. If there is enough work to do during a given run of the data purge job, we can easily pipeline it to utilize those threads, resulting in a higher level of throughput that will help us keep up with those larger inventories that produce lots of metrics and provided the impetus to migrate our metrics storage to Cassandra in the first place. Lastly, the JMS solution should scale very nicely for those users who run multiple RHQ servers.
cons:
Of all the options this involves the most implementation change (a minimal sketch is below). I am not up to speed on the pros/cons of JMS in AS 7, but that is something we would definitely have to consider. I am not sure what issues, if any, there are with JMS and Arquillian. If JMS functionality is not well supported with Arquillian, then automated integration testing will be more challenging. If this turns out to be the case, the CDI events + async EJB approach might be more favorable.
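To illustrate the message-passing shape, here is a minimal MDB sketch; the queue name and payload type are invented for illustration:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.ObjectMessage;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "queue/OOBCalculationQueue")
    })
    public class OOBCalculationMDB implements MessageListener {

        // The container pulls messages off the queue and dispatches them
        // across its MDB thread pool, so chunks of 1hr data get processed
        // concurrently rather than serially in one thread.
        public void onMessage(Message message) {
            try {
                ObjectMessage msg = (ObjectMessage) message;
                // The payload would be a serializable chunk of 1hr aggregates.
                Object oneHourDataChunk = msg.getObject();
                // ... run the OOB calculation for this chunk ...
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }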
- John