cassandra data model, CQL/Thrift
by John Sanda
For the new Cassandra-based metrics backend, we are defining schema with CQL. CQL provides abstractions over the underlying physical storage model and data types. This makes working with Cassandra a lot easier and also lowers the initial learning curve; however, there is a danger. If someone starts learning Cassandra and only learns CQL, it could be very easy to start thinking in terms of a relational model and designing schema, queries, etc. accordingly. Cassandra is a key/value store. For those getting started in the feature/cassandra-backend branch, and anyone getting started with Cassandra, I encourage you to explore things using both of the command line tools, cqlsh and cassandra-cli. The former is all CQL and the latter uses the Thrift API. I will provide examples below that illustrate key things, and I will indicate before each command whether it is CQL or CLI. The examples below assume you have installed Cassandra via the storage installer script (or through the rhqctl script) in the feature/cassandra-backend branch. If you don't want to build the branch in order to use those scripts, I can provide you with the necessary steps for configuring a stock Cassandra install.
# (CQL) First log into cqlsh, create the keyspace, and then switch over to using it.
$ cqlsh -u cassandra -p cassandra
> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> use test;
# (CQL) Now let's create a table/column family for storing metrics. The table
# schema will allow for "wide rows", meaning a row can have a variable number of columns.
> CREATE TABLE metrics (
schedule_id int,
time timestamp,
type int,
value double,
PRIMARY KEY (schedule_id, time, type)
) WITH COMPACT STORAGE;
The above table definition looks very similar to SQL, but what it does is very different. A table or column family in Cassandra consists of rows and columns. Each row has a unique key. A column consists of a name and a value (along with some metadata, such as a timestamp that Cassandra uses for conflict resolution). The first field in the primary key defines the row key. The additional fields in the primary key define a composite column name that is essentially used for grouping. A primary key with multiple fields is how you create "wide rows" with CQL, or put another way, rows with variable numbers of columns. The WITH COMPACT STORAGE clause results in the table looking just as it would if we had defined it via Thrift from the CLI.
Now we will insert some data so that we can see how things look.
# (CQL) insert data from cqlsh
> insert into metrics (schedule_id, time, type, value) values (1, '2013-04-19', 0, 1.1);
> insert into metrics (schedule_id, time, type, value) values (1, '2013-04-19', 1, 1.2);
> insert into metrics (schedule_id, time, type, value) values (1, '2013-04-19', 2, 1.3);
> insert into metrics (schedule_id, time, type, value) values (2, '2013-04-19', 0, 2.1);
> select * from metrics;
schedule_id | time | type | value
-------------+--------------------------+------+-------
1 | 2013-04-19 00:00:00-0400 | 0 | 1.1
1 | 2013-04-19 00:00:00-0400 | 1 | 1.2
1 | 2013-04-19 00:00:00-0400 | 2 | 1.3
2 | 2013-04-19 00:00:00-0400 | 0 | 2.1
The above output looks just like it would with SQL. It returns four rows; however, there are actually only two rows in the metrics table. Now let's explore things from cassandra-cli to get a more detailed picture of what is happening.
# (CLI) log into cassandra-cli and switch over to use the test keyspace.
$ cassandra-cli -u cassandra -pw cassandra
> use test;
# (CLI) The list command is analogous to select *
> list metrics;
Using default limit of 100
Using default column limit of 100
-------------------
RowKey: 1
=> (column=2013-04-19 00\:00\:00-0400:0, value=1.1, timestamp=1366388467678000)
=> (column=2013-04-19 00\:00\:00-0400:1, value=1.2, timestamp=1366388474316000)
=> (column=2013-04-19 00\:00\:00-0400:2, value=1.3, timestamp=1366388486612000)
-------------------
RowKey: 2
=> (column=2013-04-19 00\:00\:00-0400:0, value=2.1, timestamp=1366388765600000)
Here we can clearly see that there are two rows, not four. The column name is the part following "column=" and the value is the part following "value=". The column names for each column consist of a date and an integer (which identifies the type of metric). The timestamp at the end is metadata. The CLI output here reflects the actual physical storage model. Now we will insert some data from the CLI.
# (CLI) insert a couple columns
> set metrics[2]['2013-04-19:2'] = double('3.14');
> set metrics[2]['2013-04-19:1'] = double('2.14');
> list metrics;
Using default limit of 100
Using default column limit of 100
-------------------
RowKey: 1
=> (column=2013-04-19 00\:00\:00-0400:0, value=1.1, timestamp=1366388467678000)
=> (column=2013-04-19 00\:00\:00-0400:1, value=1.2, timestamp=1366388474316000)
=> (column=2013-04-19 00\:00\:00-0400:2, value=1.3, timestamp=1366388486612000)
-------------------
RowKey: 2
=> (column=2013-04-19 00\:00\:00-0400:0, value=2.1, timestamp=1366388765600000)
=> (column=2013-04-19 00\:00\:00-0400:1, value=2.14, timestamp=1366390824274000)
=> (column=2013-04-19 00\:00\:00-0400:2, value=3.14, timestamp=1366390817649000)
Let's go back to cqlsh and run a query that filters on the schedule id.
# (CQL)
> select * from metrics where schedule_id = 1;
schedule_id | time | type | value
-------------+--------------------------+------+-------
1 | 2013-04-19 00:00:00-0400 | 0 | 1.1
1 | 2013-04-19 00:00:00-0400 | 1 | 1.2
1 | 2013-04-19 00:00:00-0400 | 2 | 1.3
The query is filtering on the row key, which means we are only querying against a single row. Queries should typically be designed to read a single row (or a subset of a row).
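For example, an illustrative query (not from the session above) that also filters on the time clustering column still reads from just one row:
# (CQL) read a slice of row 1, bounded by the time component of the column names
> select * from metrics where schedule_id = 1 and time >= '2013-04-19' and time < '2013-04-20';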
- John
11 years
Quick Poll on a naming issue...
by Jay Shaughnessy
We weren't sure what was best, so we're asking for feedback on naming
the new control stuff. There are currently files (and one property) named:
rhqctl.sh
rhqctl.properties
rhqctl.log
-Drhqctl.properties-file
The abbreviation is obviously shorter, and maybe more attractive to the Linux population.
An alternative would be:
rhqctl.sh --> rhq-control.sh
rhqctl.properties --> rhq-control.properties
rhqctl.log --> rhq-control.log
-Drhqctl.properties-file --> -Drhq.control.properties-file
This is more aligned with much of our current naming.
Please let us know, thanks.
11 years
RHQ now builds against JBoss EAP 6.1 alpha 1
by Heiko W.Rupp
Hi,
I have just pushed a change to master that makes RHQ build against JBoss EAP 6.1 alpha 1
instead of JBoss AS 7.1.1.
To use this change, you need to blast away the dev-container and rebuild it
using mvn -Pdev,enterprise.
As always please report issues in Bugzilla.
Thanks
Heiko
11 years
Re: Re: Runtime exception: Cannot initialize the scheduler!
by Christopher Keller
It was a clean build into a new dev-container. I actually cleaned out my repository and built again to make sure it wasn't some problem with dependencies. One of my colleagues also ran into the issue (on a different machine) and did a little more debugging. They indicated that it looks like the configuration file is being truncated when it is read using the StAX parser. I will supply you with more info shortly.
11 years
cassandra cluster configuration for dev-container
by John Sanda
If you are not working in or running out of the feature/cassandra-backend branch, feel free to ignore this; the following info is currently specific to that branch.
There has been a good bit of infrastructure put in place for deploying a Cassandra cluster for automated tests and for the dev-container. When you build and deploy the dev-container, a cluster will automatically be deployed and started for you; there is nothing extra you have to do to get a functioning dev-container. The deployment code currently in use was implemented well before rhqctl, and the rhqctl script does not yet have support for dev-container deployments. I plan to update rhqctl so that it can be used for both prod and dev deployments, which will give us a common set of tools across deployments. The other benefit of using rhqctl is that it offers a lot more functionality than what is currently used for dev-container deployments.
We have been using out-of-the-box heap settings for Cassandra. These are determined by the cassandra-env.sh script, which is part of the Cassandra distro. There is no equivalent script for Windows, so I'm not sure what defaults are used on Windows. The cassandra-env.sh script will try to use 1/4 of your RAM for the heap; so, if you have 8 GB, it will give Cassandra 2 GB. For dev-container deployments, and certainly for automated tests, we can get by with much smaller heaps.
I pushed some changes that lower the dev-container defaults and also make them configurable. The default max heap for the dev-container cluster nodes is now 512 MB. You can change this by editing rhq-server.properties before starting your dev-container for the first time after it has been (re)built: uncomment and set the rhq.cassandra.max.heap.size and rhq.cassandra.heap.new.size properties. If you want to change the heap settings after the initial deployment, you will need to edit <RHQ_HOME>/cassandra/node{0,1}/conf/cassandra-env.sh.
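For example, the entries might wind up looking like this (the 512 MB max heap matches the new default; the new-generation size and the exact value syntax are illustrative assumptions on my part, not recommendations):
# Cassandra heap settings for the dev-container cluster nodes
rhq.cassandra.max.heap.size=512M
rhq.cassandra.heap.new.size=128M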
- John
11 years
Test issues in the EAP 6.1alpha branch
by Heiko W.Rupp
Hi,
building against EAP 6.1 alpha 1 is mostly working well.
I am seeing some test failures in the server/itest-2 module in the
EAP 6.1alpha branch (branch name bug/927868) - see below.
The branch is as close to master as commit 50b6e9fc07b9cc0ec721 ("split the GWT files").
This list of failures seems pretty consistent right now.
I will try tomorrow to find out what is going on.
If anyone has an idea where I might dig into this, please let me know.
Failed tests: doNotAllowSnapshotToBePinnedWhenDefinitionIsAttachedToPinnedTemplate(org.rhq.enterprise.server.drift.ManageSnapshotsTest): (..)
doNotDeletePluginIfDependentPluginIsNotAlsoDeleted(org.rhq.enterprise.server.resource.metadata.PluginManagerBeanTest): Expected an IllegalArgumentException when trying to delete a plugin with dependent plugins, got: javax.ejb.EJBException: java.lang.IllegalArgumentException: You must delete the following dependent plugins also: [PluginManagerBeanTestPlugin2]
Tests run: 654, Failures: 2, Errors: 0, Skipped: 4
or on the next run
Failed tests: singleMergedTest(org.rhq.enterprise.server.alert.AlertDefinitionWithComplexNotificationsTest)
doNotAllowSnapshotToBePinnedWhenDefinitionIsAttachedToPinnedTemplate(org.rhq.enterprise.server.drift.ManageSnapshotsTest): (..)
doNotDeletePluginIfDependentPluginIsNotAlsoDeleted(org.rhq.enterprise.server.resource.metadata.PluginManagerBeanTest): Expected an IllegalArgumentException when trying to delete a plugin with dependent plugins, got: javax.ejb.EJBException: java.lang.IllegalArgumentException: You must delete the following dependent plugins also: [PluginManagerBeanTestPlugin2]
upgradePluginWithTypesRemoved(org.rhq.enterprise.server.resource.metadata.ResourceMetadataManagerBeanTest): ResourceGroup with name ServerE Group already exists
singleMergedTest(org.rhq.enterprise.server.alert.AlertDefinitionWithComplexNotificationsTest) Time elapsed: 0.403 sec <<< FAILURE!
java.lang.NullPointerException
at org.rhq.enterprise.server.alert.AlertDefinitionWithComplexNotificationsTest.logout(AlertDefinitionWithComplexNotificationsTest.java:304)
at org.rhq.enterprise.server.alert.AlertDefinitionWithComplexNotificationsTest.singleMergedTest(AlertDefinitionWithComplexNotificationsTest.java:160)
Heiko
11 years
Using GWT code splitting in RHQ
by Jiri Kremser
Hi,
the size of the generated JavaScript is pretty large. With unobfuscated code (<gwt.style>PRETTY</gwt.style>), the largest file is 2.8 MB. This file is downloaded immediately after a successful login.
I was playing with the GWT code splitting feature [1] and split the coregui into 9 smaller pieces (more or less according to the top menu in the app). After this change, the largest chunk of JavaScript code is now 740 kB (~1/4 of the original size, again unobfuscated). The files with the code are downloaded on demand, as needed by the user. This means that some parts need not be downloaded at all (the test page (#Test), help, reports, etc.). This change should shorten the loading time, especially for users with slow connections and for mobile devices.
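For reference, split points are created with GWT.runAsync [1]. Here is a minimal sketch of what one looks like (illustrative only; ReportsMenuItem and ReportsView are hypothetical names, not actual coregui classes):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

public class ReportsMenuItem {

    // Hypothetical stand-in for a real coregui view class.
    static class ReportsView {
        void render() {
            Window.alert("reports view rendered");
        }
    }

    // Everything reachable only from onSuccess() is compiled into a separate
    // JavaScript fragment that the browser downloads the first time this
    // code path actually runs.
    public void onReportsMenuClicked() {
        GWT.runAsync(new RunAsyncCallback() {
            public void onFailure(Throwable reason) {
                // The fragment could not be downloaded (e.g. network error).
                Window.alert("Failed to load reports module: " + reason.getMessage());
            }

            public void onSuccess() {
                new ReportsView().render();
            }
        });
    }
}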
I haven't seen any negative impact yet; dev mode keeps working with code splitting. If you have any concerns, please respond to this email. The change is not merged to the master branch yet.
[1]: https://developers.google.com/web-toolkit/doc/latest/DevGuideCodeSplitting
[2]: https://github.com/rhq-project/rhq-core/blob/master/modules/enterprise/gu...
11 years
Runtime exception: Cannot initialize the scheduler!
by Christopher Keller
I am trying to build rhq from the latest source. I followed the instructions on:
https://docs.jboss.org/author/display/RHQ/Building+RHQ
I am able to get a clean build; however, when I run the RHQ server in the dev-container I get the following runtime error:
21:49:14,612 ERROR [org.jboss.ejb3.invocation] (EJB default - 1) JBAS014134: EJB Invocation failed on component StartupBean for method public void org.rhq.enterprise.server.core.StartupBean.init() throws java.lang.RuntimeException: javax.ejb.EJBException: java.lang.RuntimeException: Cannot initialize the scheduler!
at org.jboss.as.ejb3.tx.CMTTxInterceptor.notSupported(CMTTxInterceptor.java:282) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
The root cause of the scheduler initialization failure is:
Caused by: org.quartz.SchedulerException: ThreadPool class not specified.
at org.quartz.impl.StdSchedulerFactory.instantiate(StdSchedulerFactory.java:764) [quartz-1.6.5.jar:1.6.5]
at org.quartz.impl.StdSchedulerFactory.getScheduler(StdSchedulerFactory.java:1376) [quartz-1.6.5.jar:1.6.5]
at org.rhq.enterprise.server.scheduler.SchedulerService.initQuartzScheduler(SchedulerService.java:120) [rhq-enterprise-server-ejb3.jar:4.7.0-SNAPSHOT]
It looks like the Quartz configuration is not being read correctly.
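For reference, this is the kind of setting Quartz normally reads from its quartz.properties file; a minimal thread pool configuration (illustrative values, not necessarily RHQ's actual settings) looks like:
# quartz.properties
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
So the error suggests that this configuration is not reaching the StdSchedulerFactory.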
I am running Windows 7 and java 7:
java version "1.7.0_17"
Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
I have attached the full server.log file from a startup.
Thanks,
Chris
11 years
Animal sniffer failure in portal war module
by Thomas Segismont
Hi,
I activated the animal sniffer plugin this morning, attached to the build's
"verify" phase. The build then failed on Jenkins in the portal war module.
On Jenkins, the "dist" profile is active and the Jetty jspc plugin gets
triggered. So the animal sniffer plugin fails because it doesn't find
references for the org/apache/jasper/runtime/* classes (see
http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/rhq-master-compile/14...)
To get this to work, we should either add the Jasper dependencies in
provided scope or simply remove the Jetty jspc plugin.
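A sketch of the first option, for illustration (the Jasper artifact coordinates and version here are assumptions, not verified against our dependency tree):
<dependency>
  <!-- illustrative coordinates for the Jasper runtime -->
  <groupId>org.apache.tomcat</groupId>
  <artifactId>tomcat-jasper</artifactId>
  <version>7.0.37</version>
  <scope>provided</scope>
</dependency>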
In my opinion, the latter is preferable, as most if not all of those JSPs are
not used anymore. Besides, I'm not sure the precompiled JSPs are used at all,
since we do nothing with the web.xml file generated by the jspc plugin (see
http://docs.codehaus.org/display/JETTY/Maven+Jetty+Jspc+Plugin). How could the
generated servlet classes be used if they are not declared in the
war descriptor?
What's your opinion?
In the meantime, I have reconfigured the portal war to skip the animal sniffer execution.
Thanks,
Thomas
11 years