conclusions on next generation metrics database

John Sanda jsanda at redhat.com
Tue Sep 18 16:28:58 UTC 2012


I want to provide a more succinct summary of what I think we should do, and why, with respect to a metrics database. In short, I believe that Cassandra is the best fit for our needs. Its peer-to-peer architecture provides high availability and a high level of fault tolerance, both of which we require.

As folks may recall, Jay and I spent some time investigating Infinispan. We came to the conclusion that it was not a good fit due to an API mismatch. A cache/map API does not lend itself well to functional requirements like range queries and paging. It was going to be a challenge to implement and efficiently support critical features like paging and indexing, so we decided to move on in the investigation process.
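To make the mismatch concrete, here is a hypothetical sketch (not code from our prototype). Infinispan's embedded Cache API extends ConcurrentMap, so a time-range query over raw metric data ends up looking something like this, with the key scheme and names invented for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentMap;

    public class RangeQueryOnCache {
        // Assumes a made-up key scheme of "<scheduleId>:<minuteTimestamp>".
        static List<Double> findRawData(ConcurrentMap<String, Double> cache,
                                        int scheduleId, long startMinute, long endMinute) {
            List<Double> results = new ArrayList<Double>();
            // No range scan is available, so we probe one key per minute in
            // the range. Paging, sorting, and indexing all have to be built
            // on top of this by hand.
            for (long t = startMinute; t <= endMinute; t++) {
                Double value = cache.get(scheduleId + ":" + t);
                if (value != null) {
                    results.add(value);
                }
            }
            return results;
        }
    }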

I have written up detailed notes on MongoDB and Cassandra and have had discussions around both. That information can be found at https://docs.jboss.org/author/display/RHQ/Databases. That document has since been updated with information on HBase as well. The biggest factor in my mind is manageability. The ideal success scenario is this: a user installs a new version of RHQ that includes a new metrics database, and over the course of using and operating RHQ, the user is never aware of what exactly that metrics database is. It is just another component of the RHQ installation.

MongoDB uses a form of master/slave architecture for replication called replica sets. You have one primary node that receives all writes and one or more secondary nodes that can serve reads but do not receive writes. Replica sets provide automated failover such that if the primary goes down, a secondary will take over as primary. Running replica sets in production (as opposed to a single server) is highly recommended. Some routine maintenance tasks require locking some or all of the database while they run. With a single server, we potentially have to take the database offline to perform those tasks. With replica sets, you can do the maintenance on a secondary and then promote it to primary, while the former primary is made a secondary to undergo maintenance. Contrast this with Cassandra, where all nodes are equal and have no special roles. When I perform maintenance on a Cassandra node, there is no need to assign or reassign roles. This reduces operational complexity as well as the number of scenarios we have to test.
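To illustrate how the primary role surfaces even at the client level, here is a minimal, hypothetical sketch using the 2.x MongoDB Java driver; the host names are made up:

    import java.net.UnknownHostException;
    import java.util.Arrays;

    import com.mongodb.Mongo;
    import com.mongodb.ReplicaSetStatus;
    import com.mongodb.ServerAddress;

    public class ReplicaSetRoles {
        public static void main(String[] args) throws UnknownHostException {
            // Hypothetical seed hosts; the driver discovers the rest of the set.
            Mongo mongo = new Mongo(Arrays.asList(
                new ServerAddress("mongo1", 27017),
                new ServerAddress("mongo2", 27017),
                new ServerAddress("mongo3", 27017)));

            // The driver has to track which member currently holds the primary
            // role, because that one node receives every write.
            ReplicaSetStatus status = mongo.getReplicaSetStatus();
            System.out.println("current primary: " + status.getMaster());

            mongo.close();
        }
    }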

To further illustrate the point about operational complexity, one of the scenarios we would have to consider with MongoDB is what to do if there is a failure in promoting a node to primary. This would likely result in some downtime during which we might have to forcibly assign a primary. We do not have to deal with this scenario with Cassandra.

MongoDB's solution for scaling writes is sharding. Sharding involves a number of components, where each component is a separate process: two or more replica sets, config servers, and routers. Let's assume the user is already running a replica set when he reaches the point where he wants to introduce sharding. We have to set up at least one more replica set, the config servers, and one or more mongos routers. There are lots of testing and failure scenarios to consider here. For example, suppose the user runs a single mongos process and it goes down. Can any client requests be serviced? Suppose we have shards A, B, and C, but the cluster fails to recognize shard C. How do we detect and fix that? What happens if a config server goes down? In the worst case, if all config servers go down, the entire cluster becomes unavailable. There are plenty of scenarios to consider with Cassandra as well, but its P2P architecture, where all nodes are the same, is a simplifying factor.
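For a sense of the moving parts, here is a rough, hypothetical sketch of wiring up sharding through the Java driver's admin commands. It assumes a mongos router and two replica sets are already running, and the database, collection, and shard key names are invented:

    import java.net.UnknownHostException;

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.Mongo;

    public class ShardSetup {
        public static void main(String[] args) throws UnknownHostException {
            // Connect to a hypothetical mongos router, itself a separate
            // process that must be started with the config server addresses.
            Mongo mongos = new Mongo("mongos-host", 27017);
            DB admin = mongos.getDB("admin");

            // Each shard is a full replica set that must already be running.
            admin.command(new BasicDBObject("addShard",
                "rs0/mongo1:27017,mongo2:27017,mongo3:27017"));
            admin.command(new BasicDBObject("addShard",
                "rs1/mongo4:27017,mongo5:27017,mongo6:27017"));

            // Shard the metrics collection on a hypothetical scheduleId field.
            admin.command(new BasicDBObject("enableSharding", "rhq"));
            admin.command(new BasicDBObject("shardCollection", "rhq.metrics")
                .append("key", new BasicDBObject("scheduleId", 1)));

            mongos.close();
        }
    }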

Keep in mind that with sharding, MongoDB still adheres to its master/slave architecture. That is, while I now have multiple primary nodes (one per shard/replica set), every write for metric schedule ID 123 will go to the same node. With Cassandra, by contrast, writes can go to any node. Let's say we are using a replication_factor of 3 with Cassandra, meaning three nodes store the data for metric schedule ID 123. Writes will be distributed among those three nodes as opposed to going to a single node.
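As a concrete sketch of the replication_factor piece, here is roughly what creating such a keyspace could look like with the Hector Java client (one of the common Cassandra clients; the cluster, keyspace, and column family names are illustrative, not from our prototype):

    import java.util.Arrays;

    import me.prettyprint.cassandra.service.ThriftKsDef;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
    import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
    import me.prettyprint.hector.api.factory.HFactory;

    public class MetricsKeyspace {
        public static void main(String[] args) {
            // Any node can serve as the contact point; "cass1" is hypothetical.
            Cluster cluster = HFactory.getOrCreateCluster("rhq-cluster", "cass1:9160");

            ColumnFamilyDefinition metricsCf =
                HFactory.createColumnFamilyDefinition("rhq", "metrics");

            // replication_factor = 3: three nodes store each row, and writes
            // for a given metric schedule are distributed among them.
            KeyspaceDefinition ksDef = HFactory.createKeyspaceDefinition(
                "rhq", ThriftKsDef.DEF_STRATEGY_CLASS, 3, Arrays.asList(metricsCf));

            cluster.addKeyspace(ksDef, true);
        }
    }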

Another big advantage I feel Cassandra has, in particular over MongoDB, is the fact that Cassandra is implemented in Java. We are all Java developers who use Java IDEs and tools. Being able to load the code into your IDE and step through it in a debugger alongside RHQ code is significant. We can also leverage familiar tools like MAT (http://www.eclipse.org/mat/).

HBase has been included in the discussion more recently. Like Cassandra, HBase follows the BigTable data model. HBase uses HDFS, the Hadoop Distributed File System, for data storage; it is the same file system used for running Hadoop MapReduce jobs. An HDFS cluster consists of a name node, a secondary name node, and data nodes. The name node is a single point of failure (SPoF) in HDFS, and the secondary name node does not provide automated failover. HBase runs on top of HDFS. It consists of one or more master servers, multiple region servers, and an internal ZooKeeper instance. Data in HBase is split into regions which are distributed throughout HDFS. A region server is responsible for one or more regions.

The master server provides metadata operations like data splitting and schema changes; ZooKeeper is used to carry out these operations. HBase uses a master/slave architecture in which all client requests for a given region go to the same region server. Updates are replicated to secondary servers, but both reads and writes go to the primary region server.

The SPoF with the HDFS name node is significant in terms of management. If the name node goes down, both HBase and HDFS can become unavailable. In addition to unexpected failures, we also have to take into consideration planned downtime. Suppose the user wants to update the kernel on the machine on which the name node is running. This very likely involves a reboot. Does that mean our whole metrics backend goes offline during that time? What if there are problems bringing the machine back online (think motherboard ;-))? Do we have a contingency plan for bringing up the name node on another machine? With Cassandra the scenarios are fewer and simpler due to its P2P, highly available architecture.

I have categorized HBase as complex due to the various components involved. I think our migration from JSF/Seam to GWT/SmartGWT is a fair, accurate analogy in this respect. From a development perspective, you have to deal with not only HBase but also HDFS and ZooKeeper, as the client sketch below illustrates. With Cassandra (or MongoDB for that matter), you have one set of APIs.
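Here is a minimal, hypothetical write with the HBase Java client of that era; the table, row key scheme, and column names are invented. Note that even this small sketch drags in Hadoop's Configuration and the ZooKeeper quorum:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseWrite {
        public static void main(String[] args) throws IOException {
            // Even a trivial client write pulls in Hadoop's Configuration
            // and needs to know the ZooKeeper quorum (hosts are hypothetical).
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");

            HTable table = new HTable(conf, "metrics");
            Put put = new Put(Bytes.toBytes("123:1347984000")); // scheduleId:timestamp
            put.add(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(3.14));
            table.put(put);
            table.close();
        }
    }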

The HBase architecture provides strong consistency. All reads and writes for data in a region go to the same region server, which helps provide that strong consistency. It also has a couple of other noteworthy implications.

1) It limits the ability to scale or distribute writes, since all requests for data in a given region go to the same region server. This can lead to hot spots with respect to client request load. Since reads and writes in Cassandra can go to any node, we can more easily and effectively distribute that load.

2) Favoring strong consistency reduces availability. When a region server goes down, its regions become unavailable; you cannot read data from or write data to the regions managed by that server until the master server eventually detects the down server and reassigns the regions to another region server.

For our use cases, we want to favor availability over consistency. We want the database to always be available to receive metric data and events. "Eventually consistent" semantics work fine for us; moreover, consistency is tunable in Cassandra and can be specified in a very granular way, on each individual read or write operation. In places where we want stronger consistency, we can have it.
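As a sketch of that tunability, here is one way it could look with the Hector client, using a consistency-level policy; the keyspace name and the particular level choices are illustrative:

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class TunableConsistency {
        static Keyspace createKeyspace(Cluster cluster) {
            ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
            // Favor availability for the metrics firehose: a write succeeds
            // as soon as one replica acknowledges it.
            policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
            // Where stronger consistency matters, require a quorum of replicas.
            policy.setDefaultReadConsistencyLevel(HConsistencyLevel.QUORUM);
            return HFactory.createKeyspace("rhq", cluster, policy);
        }
    }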

Lastly, I want to point out that HBase is not Hadoop. It uses HDFS for data storage, the same file system Hadoop uses for running MapReduce jobs, and HBase queries can be implemented with MapReduce. Those are the big highlights of the tight integration, but you can use HBase without using Hadoop and vice versa. You can also use Cassandra or MongoDB with Hadoop.


References:
* original email with comparison and conclusions - https://lists.fedorahosted.org/pipermail/rhq-devel/2012-September/002053.html
* wiki doc with details on databases - https://docs.jboss.org/author/display/RHQ/Databases
* git repo with Cassandra prototype - https://github.com/jsanda/rhq-metrics-plugins
* my blog posts on working with Cassandra - http://johnsanda.blogspot.com/search/label/cassandra