Storage Node - Disk Space Metric

John Sanda jsanda at redhat.com
Thu Jul 18 01:32:02 UTC 2013


On Jul 17, 2013, at 7:51 PM, Larry O'Leary <loleary at redhat.com> wrote:

> On Wed, 2013-07-17 at 16:51 -0400, Stefan Negrea wrote:
>> 
>> ----- Original Message -----
>>> From: "Larry O'Leary" <loleary at redhat.com>
>>> To: rhq-devel at lists.fedorahosted.org
>>> Sent: Tuesday, July 16, 2013 6:22:06 PM
>>> Subject: Re: Storage Node - Disk Space Metric
>>> 
>>> On Mon, 2013-07-15 at 18:58 -0400, John Sanda wrote:
>>>> On Jul 15, 2013, at 5:46 PM, Jiri Kremser <jkremser at redhat.com> wrote:
>>>> 
>>>>> Basically I am +1 on having both metrics; both can be useful for setting
>>>>> up custom alerts.
>>>>> 
>>>>> But I am concerned that the percentage-based metric is not well suited
>>>>> for alerting because of its relative nature. While it makes sense for
>>>>> heap-size-based alerts, it is different for disk space, because
>>>>> differences in partition sizes can be enormous.
>>>>> "Calculated.SystemDiskSpaceUsedPercentage > 90%" may mean many things on
>>>>> different hardware. I know that the alerts are adjustable and one could
>>>>> change the number to whatever she wants, but we should provide smart
>>>>> defaults.
>>>>> 
>>>>> What about having another one called, for instance,
>>>>> "Calculated.SystemDiskSpaceAvailable" that would be calculated as the
>>>>> sum of free space across all partitions where data dirs for C* are
>>>>> located (because of the way C* selects the data dir)? And trigger
>>>>> the alert if the number is lower than X, where X could be the number of
>>>>> megabytes necessary for storing the data for, say, 7 days. Using this
>>>>> approach, we could have alert notifications saying something like "If
>>>>> you don't add more disks, you will run out of space in approx 1 week."
>>>> 
>>>> A metric that considers the sum of free space across all partitions would
>>>> not address the increases in disk usage (albeit temporary) due to
>>>> compaction.
>>> 
>>> I am curious how this is different than looking at a percentage on
>>> individual partitions? For example, when looking at disk usage for a
>>> traditional relational database, a warning is not seen until either disk
>>> space has run out or it has resulted in data being moved from one
>>> partition to another. The hope is that it would be the latter and you
>>> could then increase said partition's space or add new partitions. In the
>>> end, the database doesn't keep track of how much space is there and how
>>> much of the total space it has used up.
>>> 
>>> Perhaps we are trying to do too much? The expectation is that the user
>>> provides enough disk space for the storage node to function. Hopefully
>>> they will create an alert on their disk resources that will provide them
>>> with this information. If so, perhaps a metric that provides the sum of
>>> the free space across all the partitions used by the storage node is
>>> sufficient?
>>> 
>>> 
>> 
>> Cassandra is a little bit different than Postgres. The data files can
>> be stored in multiple directories. Because data is the only thing that
>> grows significantly (compared to caches, for example), it is the only
>> configuration option that accepts multiple directories. If a user
>> configures multiple data directories, Cassandra selects the directory
>> on the partition with the most free space. Looking at the overall disk
>> space is not enough; that information needs to be used in conjunction
>> with the current Cassandra configuration. For this reason, we will
>> need a targeted metric and alert; alerts at the platform level would
>> be too broad and sometimes irrelevant.
> 
> Actually, you can span PostgreSQL data files across
> directories/partitions. But I wasn't trying to compare what Cassandra
> needs to do to what an RDBMS does. Only that the user needs to ensure
> that there is enough space for the data store to do its work.
> 
>> Running data compaction will take significantly more space: "during
>> compaction, there is a temporary spike in disk space usage and disk
>> I/O." (http://www.datastax.com/docs/1.2/operations/compaction_compression). Even if there is enough space to store data files and accommodate a moderate increase in data, there might not be enough space to run compaction. So keeping track of some sort of metric for the amount of free disk space left on the partitions where data files are stored is important.
>> 
>> 
>> I think the metrics and alerts align with our overall goal to
>> black-box Cassandra. If we provide all these out-of-box along with
>> instructions on what to do, then users do not have to learn the inner
>> workings of Cassandra.
> 
> From the sound of it, what is really needed is a way to know when the
> data store is in jeopardy of exceeding the available space on a single
> partition. Essentially a way to compare one metric against another along
> with a threshold calculation.
> 
> This could be doable assuming that you know how much space would be
> needed to perform compaction and compression in a worst case scenario.
> Are we talking about the data store potentially doubling in size as a
> worst-case scenario? Couldn't the alert then just be based on:
> 
> (<total data store size> * 2.2) > <largest available space in a
> directory> = trigger alert
> 
> This assumes the data could grow to 2 times its size and would need to
> reside in a single file during compaction with a 20% overhead as a
> threshold -- configurable by the user of course but with a sane
> default. 
> 
> Then we could get away from reporting metrics on available space
> altogether. 
> 

During normal operation Cassandra will write out a new SSTable file to disk 1) when a memtable is flushed to disk and 2) during compaction. If Cassandra is configured to use multiple data directories, then in both of those cases it will select the partition having the most free space. Given that, suppose we are set up to write to two partitions, one of which is near capacity (80%) and the other is at, say, 5% capacity. Cassandra will always write new data files to the second partition. I think the alerting we have handles this appropriately.
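To illustrate the point, here is a rough sketch in plain Java (hypothetical code, not Cassandra's or RHQ's actual implementation; the directory paths are made up). It shows why a summed free-space number can look healthy even though new SSTables always land on whichever data directory sits on the partition with the most free space:

    import java.io.File;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    public class DataDirSelection {

        // Sum of usable bytes across all configured data directories.
        static long totalFreeSpace(List<File> dataDirs) {
            return dataDirs.stream().mapToLong(File::getUsableSpace).sum();
        }

        // The directory Cassandra would favor for the next SSTable: the one
        // whose partition currently has the most free space.
        static File mostFreeSpace(List<File> dataDirs) {
            return dataDirs.stream()
                    .max(Comparator.comparingLong(File::getUsableSpace))
                    .orElseThrow(IllegalArgumentException::new);
        }

        public static void main(String[] args) {
            // Placeholder paths; imagine data1 is on a partition at 80%
            // capacity and data2 is on a partition at 5% capacity.
            List<File> dataDirs = Arrays.asList(
                    new File("/var/lib/rhq/storage/data1"),
                    new File("/var/lib/rhq/storage/data2"));

            System.out.println("Sum of free space:    " + totalFreeSpace(dataDirs));
            System.out.println("Next SSTable goes to: " + mostFreeSpace(dataDirs));
        }
    }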

You mentioned the worst-case scenario, which is definitely what we need to think about. We are using size-tiered compaction with the default settings. Cassandra automatically runs minor compactions. When there are 4 (this number is configurable) data files of similar size for a table, a minor compaction is triggered. Those files will be merged into one new one. The old files will get deleted on a subsequent garbage collection. With write-mostly workloads, the new file could be close to the total size of those 4 older files. That does not mean that the total data size doubles, but the total size on disk for that table may be close to double.
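As a back-of-the-envelope illustration (again hypothetical code, not anything in RHQ; it assumes a write-mostly workload where the merged file approaches the combined size of its inputs), the temporary headroom a minor compaction needs for one table is roughly the sum of the SSTables being merged, since the old files stay on disk until GC removes them:

    public class MinorCompactionHeadroom {

        // Rough worst case: while the newly merged SSTable is being written,
        // the old SSTables still exist on disk, so the extra space needed is
        // about the combined size of the files being compacted.
        static long worstCaseExtraBytes(long[] sstableSizesInBucket) {
            long sum = 0;
            for (long size : sstableSizesInBucket) {
                sum += size;
            }
            return sum;
        }

        public static void main(String[] args) {
            // Default size-tiered threshold: 4 similarly sized SSTables
            // trigger a minor compaction. Sizes are made-up examples (~500 MB).
            long[] bucket = { 512L << 20, 498L << 20, 505L << 20, 520L << 20 };
            System.out.printf("Temporary extra space needed: ~%d MB%n",
                    worstCaseExtraBytes(bucket) >> 20);
        }
    }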

There are also major compactions, which have to be explicitly started via JMX. These will definitely double the size on disk of the table being compacted. My understanding is that major compactions are there as kind of a safety net which we can hopefully avoid.
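To make the threshold check you proposed concrete, here is a rough sketch (hypothetical, not an actual RHQ alert definition; the 2x growth plus configurable overhead comes straight from your example) of firing when the data store could no longer fit its worst-case compaction into the single data directory with the most free space:

    import java.io.File;
    import java.util.Arrays;
    import java.util.List;

    public class CompactionSpaceCheck {

        // Fire when (total data store size * (2 + overhead)) exceeds the
        // largest amount of free space available in any single data directory.
        static boolean shouldAlert(long totalDataStoreBytes, List<File> dataDirs,
                                   double overhead) {
            long largestFree = dataDirs.stream()
                    .mapToLong(File::getUsableSpace)
                    .max()
                    .orElse(0L);
            return totalDataStoreBytes * (2.0 + overhead) > largestFree;
        }

        public static void main(String[] args) {
            // Placeholder paths and a made-up 10 GB data store, with the 20%
            // overhead from the example above.
            List<File> dataDirs = Arrays.asList(
                    new File("/var/lib/rhq/storage/data1"),
                    new File("/var/lib/rhq/storage/data2"));
            System.out.println("Trigger alert: "
                    + shouldAlert(10L << 30, dataDirs, 0.2));
        }
    }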

A while back I wrote up some analysis of compaction[1] which hopefully provides some additional insight. I will expand on that write-up in the very near future.

[1] https://docs.jboss.org/author/display/RHQ47/Cassandra+Configuration+and+Tuning


More information about the rhq-devel mailing list