Storage Node - Disk Space Metric

Stefan Negrea snegrea at redhat.com
Mon Jul 15 14:19:42 UTC 2013


Hello Everybody,

I would like feedback on the disk-space-used percentage metric for RHQ Storage Nodes. The metric has already gone through three iterations, and I think we can refine it further before releasing the next RHQ version.


Background
- The metric is Disk Used Percentage
- It is a calculated metric
- Data files are Cassandra's largest disk space consumers
- Data files can be stored in one or more data directories; this is configurable by the user
- If multiple directories are configured, Cassandra will select the partition with the most disk space available
- The size of all data files is available via the JMX interface
- Disk space available is a system/platform property, but to report anything meaningful for Cassandra the focus needs to be on the partitions where data files are stored (a minimal sketch of gathering these per-partition numbers follows this list)
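
A minimal Java sketch of gathering those per-partition numbers (the directory paths are hypothetical examples; in RHQ the data directories come from the storage node configuration and the data file sizes come from Cassandra's JMX interface):

    import java.io.File;

    // Sketch: collect the raw per-partition numbers that the metric
    // calculations below are built from.
    public class PartitionStats {
        public static void main(String[] args) {
            // hypothetical data directories, one per partition
            String[] dataDirs = {"/var/lib/cassandra/data1", "/var/lib/cassandra/data2"};
            for (String dir : dataDirs) {
                File partition = new File(dir);
                long total = partition.getTotalSpace();          // total size of the partition
                long used  = total - partition.getUsableSpace(); // used = total - free
                System.out.printf("%s: used=%d of %d bytes (%.1f%%)%n",
                        dir, used, total, 100.0 * used / total);
            }
        }
    }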

Iteration 1 - delivered with RHQ 4.8
- The metric name:   Calculated.DiskSpaceUsedPercentage
- Calculation:
  + based on overall disk usage, not just Cassandra's (total disk used / total disk available)
  + if multiple directories are used for data files, return the maximum percentage (see the sketch after this list)
- Problems:
  + the metric can be misleading, since it is reported on the Cassandra sub-resource yet represents a platform metric
  + selecting the max is not representative of how Cassandra works, since Cassandra distributes data files across all available partitions
  + using the max works correctly only when there is a single data file location
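
To make the Iteration 1 calculation concrete, here is an illustrative sketch (not the RHQ source; usedBytes[i] and totalBytes[i] are assumed to describe partition i):

    // Iteration 1: overall usage per partition, report only the max.
    static double diskSpaceUsedPercentage(long[] usedBytes, long[] totalBytes) {
        double max = 0.0;
        for (int i = 0; i < usedBytes.length; i++) {
            // overall usage of the partition, not just Cassandra's share
            max = Math.max(max, 100.0 * usedBytes[i] / totalBytes[i]);
        }
        return max; // with multiple data directories, only the max is reported
    }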

Iteration 2 - Post RHQ 4.8
- The metric name:  Calculated.PartitionDiskSpaceUsedPercentage
- Calculation: 
  + an aggregate percentage of disk space used across all the partitions where Cassandra stores data files
  + similar to Iteration 1, but it looks at the disk space used across all the partitions (see the sketch after this list)
  + example: for data files on two partitions: metric value = (disk space used on partition_1 + partition_2) / (total disk space of partition_1 + partition_2)
- Problems:
  + like Iteration 1, the metric can be misleading since it is a platform metric reported on the Cassandra sub-resource
  + seeing a high disk utilization percentage right after installing a new RHQ Storage Node could be confusing - "Cassandra is using 42% of the disk already?"
  + the only way for users to understand what is going on is to read the metric description; the table of reported values alone is not sufficient
- Positives:
  + representative of how Cassandra uses disk partitions
  + an alert on this metric is guaranteed to trigger whenever free disk space gets low
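
An illustrative sketch of the Iteration 2 calculation (again not the RHQ source; the arrays describe the partitions holding data files):

    // Iteration 2: aggregate overall usage across all partitions
    // holding data files.
    static double partitionDiskSpaceUsedPercentage(long[] usedBytes, long[] totalBytes) {
        long usedSum = 0, totalSum = 0;
        for (int i = 0; i < usedBytes.length; i++) {
            usedSum  += usedBytes[i]; // all usage on the partition, Cassandra's or not
            totalSum += totalBytes[i];
        }
        return 100.0 * usedSum / totalSum;
    }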

Iteration 3 - in Master 
- The metric name:  Calculated.PartitionDiskSpaceUsedPercentage
- Calculation:
  + an aggregate percentage of disk space used by Cassandra data files across all the partitions where Cassandra stores data files
  + similar to Iteration 2, but it looks only at the disk space used by data files, not at overall disk usage (see the sketch after this list)
  + e.g. for data files on two partitions: metric value = (disk space used by data files) / (total disk space of partition_1 + partition_2)
- Problems:
  + it is hard to design an alert for this metric because there is no percentage threshold that is guaranteed to trigger
  + for example: if total disk space utilization before Cassandra is 60%, an alert threshold of 50% on this metric will never trigger because only 40% of the disk is available
  + Cassandra data could grow until the disk is full without ever triggering an alert
- Positives:
  + representative of how Cassandra uses disk partitions
  + the value is no longer misleading; when a new RHQ Storage Node is installed the reported disk usage will be low (e.g. under 1%)
  + users will not have to dig through documentation or report that the metric is wrong out of the box
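
An illustrative sketch of the Iteration 3 calculation (not the RHQ source; dataFileBytes stands for the total data file size, e.g. as reported over JMX):

    // Iteration 3: only Cassandra's data files count toward the numerator;
    // the denominator is still the combined size of all partitions
    // holding data files.
    static double partitionDiskSpaceUsedPercentage(long dataFileBytes, long[] totalBytes) {
        long totalSum = 0;
        for (long total : totalBytes) {
            totalSum += total;
        }
        return 100.0 * dataFileBytes / totalSum; // a fresh node starts near 0%
    }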


Right now Iteration 3 is in master, but I would like feedback since there is no perfect solution (both Iteration 2 and Iteration 3 have downsides). I like the correctness of the calculation in Iteration 3 but the alerting capability of Iteration 2. I do not see any easy way to get both...


1) Should we include both metrics, overall disk space used and data file disk space used? The hope would be that having both makes it easier for users to understand what each one represents.

2) Is Iteration 2 good enough? Is the confusion concern unwarranted?

3) Could we place the calculation from Iteration 2 on a sub-resource where it would not cause confusion?

4) Is there any other solution that gives a self-documenting metric and a good alert template?



Thank you,
Stefan Negrea

Software Engineer


