consistency of reads and performance

John Sanda jsanda at redhat.com
Thu Sep 20 19:28:39 UTC 2012


Some really good questions came up in the discussion around Cassandra. One of them involved the performance impact of a read at a higher consistency level, one that requires hitting, say, three nodes instead of one. Cassandra performs a digest query to determine whether any of the replicas are out of sync for the requested key. It returns the most up-to-date copy to the client, and out-of-date replicas are then updated in the background. The digest query is an optimization that reduces network bandwidth: instead of sending all of the data over the wire, only a hash of the data is sent. For the most part, though, a digest query still incurs the same overhead as the actual data query. So yes, it is safe to assume that stronger consistency imposes greater overhead on reads and may result in some performance degradation.
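For concreteness, here is a minimal sketch of what choosing the consistency level per read looks like from a client. It uses the DataStax Java driver purely for illustration; the keyspace, table, and column names are made up, and exact class names may differ across driver versions.

    import com.datastax.driver.core.*;

    public class ConsistencyExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("rhq");   // hypothetical keyspace

            // Weak consistency: the coordinator waits for a single replica.
            Statement weak = new SimpleStatement(
                    "SELECT * FROM metrics WHERE schedule_id = 100")  // hypothetical table/column
                    .setConsistencyLevel(ConsistencyLevel.ONE);

            // Stronger consistency: a quorum of replicas (e.g. 2 of 3) must respond;
            // the coordinator sends digest queries to the extra replicas and the most
            // up-to-date copy wins.
            Statement strong = new SimpleStatement(
                    "SELECT * FROM metrics WHERE schedule_id = 100")
                    .setConsistencyLevel(ConsistencyLevel.QUORUM);

            session.execute(weak);
            session.execute(strong);
            cluster.close();
        }
    }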

We will have to do some careful analysis on a case-by-case basis to determine the right balance between consistency and performance. Looking at the bigger picture though, we need to manage nodes effectively so that we can 1) detect performance problems and 2) deal with them. We can track read latency on a per-column-family basis. While we could say "if read latency >= X, then do Y", I think a more robust approach is to look at the rate of change. We might instead say "if read latency increases by X% over some time interval, then do Y" (see the sketch after the list below), where Y might be one of:

  * Increase the size of the key cache (a key cache hit reduces disk seeks to at most one)
  * Enable or increase the size of the row cache (a row cache hit eliminates the need to go to disk at all)
  * Run compaction
  * Increase cluster capacity
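To make the rate-of-change idea concrete, here is a rough sketch of what that check could look like. The threshold and the source of the latency samples (Cassandra exposes per-column-family read latency via JMX) are placeholders; the point is comparing against the previous sample rather than an absolute limit.

    // Rough sketch of the rate-of-change check: flag when read latency grows
    // by more than thresholdPct since the previous sample. Names and thresholds
    // are illustrative, not an actual RHQ API.
    public class ReadLatencyMonitor {
        private final double thresholdPct;          // e.g. 50 means a 50% increase
        private double previousLatencyMicros = -1;  // no sample seen yet

        public ReadLatencyMonitor(double thresholdPct) {
            this.thresholdPct = thresholdPct;
        }

        // Returns true when the latest sample exceeds the previous one by more
        // than thresholdPct, i.e. when we would kick off one of the actions above.
        public boolean sample(double currentLatencyMicros) {
            boolean degraded = previousLatencyMicros > 0
                    && ((currentLatencyMicros - previousLatencyMicros) / previousLatencyMicros) * 100 > thresholdPct;
            previousLatencyMicros = currentLatencyMicros;
            return degraded;
        }
    }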

Not only should we be able to provide these courses of action, but we also need to know, or at least have a good idea of, when a user should increase capacity versus increasing the size of the row cache. Put another way, we don't want to tell the user to add nodes to the cluster when simply increasing the size of the row cache would suffice.
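As a strawman for that decision, something along these lines might be a starting point. The inputs (row cache hit rate and available heap) would come from our monitoring of the Cassandra nodes; the thresholds here are made up.

    // Strawman: prefer the cheap fix (grow the row cache) when the cache is
    // missing often and there is heap to spare; otherwise recommend adding
    // capacity. Thresholds are illustrative only.
    public static String recommend(double rowCacheHitRate, double freeHeapFraction) {
        if (rowCacheHitRate < 0.80 && freeHeapFraction > 0.25) {
            return "increase row cache size";
        }
        return "add nodes to the cluster";
    }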

- John

