[anaconda:master 1/2] RAID related changes for custom spoke.

mulhern amulhern at redhat.com
Mon Jun 2 20:35:30 UTC 2014


- RAID objects are now used everywhere instead of strings. Previously the
approach was a hybrid, with strings used in some places and objects in
others.

- Move get_supported_raid_levels to blivet.devicefactory. This method is
meant to yield the RAID levels that blivet supports, not necessarily those
that anaconda allows. During the move the return type is changed to a set of
RAID objects and the return values are modified slightly (a usage sketch
follows the list):
* btrfs substitutes single for none
* RAID adds container and linear
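
For illustration only (not part of the patch), roughly how the relocated
helper is expected to be used after this change; the exact members of each
set depend on what the installed blivet supports:

    from blivet.devicefactory import DEVICE_TYPE_BTRFS, DEVICE_TYPE_MD
    from blivet.devicefactory import get_supported_raid_levels
    from blivet.devicelibs import raid

    # The return value is now a set of RAIDLevel objects, not strings.
    md_levels = get_supported_raid_levels(DEVICE_TYPE_MD)
    btrfs_levels = get_supported_raid_levels(DEVICE_TYPE_BTRFS)

    # Membership tests therefore compare RAIDLevel objects:
    raid.getRaidLevel("raid1") in md_levels      # expected True on typical setups
    raid.getRaidLevel("single") in btrfs_levels  # "single" takes the place of "none"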

- Currently RAID levels are offered on the Manual Partitioning screen only
if the user has selected RAID, so the only options that are ever shown are
ones that RAID allows. There is little point in keeping options that can
never become visible, so the one such option, "none", is removed from the
custom.glade file.

- In CustomPartitioningSpoke._validate_mountpoint, the RAID-related
validation checks have been generalized a bit.

- custom_storage_helpers.get_raid_level replaces
blivet.devicefactory.get_raid_level. The return type is changed to a
RAIDLevel object.

- In _save_right_side the container raid level validation has been
generalized a bit.

- CustomPartitioningSpoke._raid_level_visible has been generalized to
use raidLevelsSupported().

- CustomPartitioningSpoke._populate_raid has been simplified and generalized
a bit.

- In on_device_type_changed some of the new general methods are used.
Previously the code set btrfs's raid_level and passed it to _populate_raid(),
but _populate_raid() never displays any RAID choices for btrfs on the Manual
Partitioning screen, so this code had no effect. Btrfs RAID choices are shown
on the "Configure Volume" screen, reached by pressing the Modify button. The
ineffective code is removed, along with the few identifiers it leaves unused.

- In the "Configure Volume/Configure Volume Group" window, "single" is added
for btrfs.

- selectedRaidLevel() is changed so that it returns a RAIDLevel object or
None rather than a string, as sketched below.
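
A minimal sketch of the new contract (illustrative only, not taken verbatim
from the patch):

    from blivet.devicelibs import raid
    from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import raidLevelSelection

    # selectedRaidLevel(combo) returns a RAIDLevel object, or None when "none"
    # is selected or the combo is hidden.  For a "raid1" selection it returns:
    level = raid.getRaidLevel("raid1")
    level.min_members            # used for the "not enough disks" check
    raidLevelSelection(level)    # "raid1" -- maps back to the combo's string ids
    raidLevelSelection(None)     # "none"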

- Several methods are added to custom_storage_helpers.py. They are small and
copiously commented; their behavior is summarized below.
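
A rough summary of the new helpers (values as implemented in this patch; the
comments are descriptive, not exhaustive):

    from blivet.devicefactory import DEVICE_TYPE_BTRFS, DEVICE_TYPE_LVM, DEVICE_TYPE_MD
    from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import (
        defaultRaidLevel, defaultContainerRaidLevel, requiresRaidSelection,
        raidLevelsSupported, containerRaidLevelsSupported)

    defaultRaidLevel(DEVICE_TYPE_MD)               # RAID1 level object
    defaultContainerRaidLevel(DEVICE_TYPE_BTRFS)   # btrfs "single" level object
    requiresRaidSelection(DEVICE_TYPE_MD)          # True; False for all other types
    raidLevelsSupported(DEVICE_TYPE_LVM)           # empty set, so the RAID combo stays hidden
    containerRaidLevelsSupported(DEVICE_TYPE_LVM)  # MD levels blivet supports, plus None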

- ContainerDialog._raid_level_visible can be simplified by using
containerRaidLevelsSupported(), one of the newly introduced methods.

- ContainerDialog._populate_raid() can also be simplified a bit.

- Use StorageDevice.raw_device where appropriate.

Signed-off-by: mulhern <amulhern at redhat.com>
---
 pyanaconda/storage_utils.py                        |  17 ---
 pyanaconda/ui/gui/spokes/custom.glade              |   4 -
 pyanaconda/ui/gui/spokes/custom.py                 |  93 +++++++------
 .../ui/gui/spokes/lib/custom_storage_helpers.glade |   4 +
 .../ui/gui/spokes/lib/custom_storage_helpers.py    | 147 +++++++++++++++++----
 5 files changed, 173 insertions(+), 92 deletions(-)

diff --git a/pyanaconda/storage_utils.py b/pyanaconda/storage_utils.py
index 6478282..f353c6d 100644
--- a/pyanaconda/storage_utils.py
+++ b/pyanaconda/storage_utils.py
@@ -40,18 +40,6 @@ from pykickstart.constants import AUTOPART_TYPE_LVM, AUTOPART_TYPE_LVM_THINP
 import logging
 log = logging.getLogger("anaconda")
 
-# should this and the get_supported_raid_levels go to blivet.devicefactory???
-SUPPORTED_RAID_LEVELS = {DEVICE_TYPE_LVM: {"none", "raid0", "raid1"},
-                         DEVICE_TYPE_LVM_THINP: {"none", "raid0", "raid1"},
-                         DEVICE_TYPE_MD: {"raid0", "raid1", "raid4", "raid5",
-                                          "raid6", "raid10"},
-                         DEVICE_TYPE_BTRFS: {"none", "raid0", "raid1",
-                                             "raid10"},
-                         # no device type for LVM VG
-                         # VG: {"none", "raid0", "raid1", "raid4",
-                         #      "raid5", "raid6", "raid10"},
-                        }
-
 # TODO: all those constants and mappings should go to blivet
 DEVICE_TEXT_LVM = N_("LVM")
 DEVICE_TEXT_LVM_THINP = N_("LVM Thin Provisioning")
@@ -114,11 +102,6 @@ def size_from_input(input_str):
 
     return size
 
-def get_supported_raid_levels(device_type):
-    """Get supported RAID levels for the given device type."""
-
-    return SUPPORTED_RAID_LEVELS.get(device_type, set())
-
 def device_type_from_autopart(autopart_type):
     """Get device type matching the given autopart type."""
 
diff --git a/pyanaconda/ui/gui/spokes/custom.glade b/pyanaconda/ui/gui/spokes/custom.glade
index 859712b..4a4900a 100644
--- a/pyanaconda/ui/gui/spokes/custom.glade
+++ b/pyanaconda/ui/gui/spokes/custom.glade
@@ -59,10 +59,6 @@
     </columns>
     <data>
       <row>
-        <col id="0" translatable="yes">None</col>
-        <col id="1">none</col>
-      </row>
-      <row>
         <col id="0" translatable="yes">RAID0 &lt;span foreground="grey"&gt;(Performance)&lt;/span&gt;</col>
         <col id="1">raid0</col>
       </row>
diff --git a/pyanaconda/ui/gui/spokes/custom.py b/pyanaconda/ui/gui/spokes/custom.py
index 328cf61..5a25a12 100644
--- a/pyanaconda/ui/gui/spokes/custom.py
+++ b/pyanaconda/ui/gui/spokes/custom.py
@@ -50,7 +50,6 @@ from blivet.devicefactory import DEVICE_TYPE_PARTITION
 from blivet.devicefactory import DEVICE_TYPE_MD
 from blivet.devicefactory import DEVICE_TYPE_DISK
 from blivet.devicefactory import DEVICE_TYPE_LVM_THINP
-from blivet.devicefactory import get_raid_level
 from blivet.devicefactory import SIZE_POLICY_AUTO
 from blivet import findExistingInstallations
 from blivet.partitioning import doAutoPartition
@@ -60,11 +59,11 @@ from blivet.errors import NotEnoughFreeSpaceError
 from blivet.errors import SanityError
 from blivet.errors import SanityWarning
 from blivet.errors import LUKSDeviceWithoutKeyError
-from blivet.devicelibs import mdraid
+from blivet.devicelibs import raid
 from blivet.devices import LUKSDevice
 
-from pyanaconda.storage_utils import get_supported_raid_levels, ui_storage_logger, device_type_from_autopart
-from pyanaconda.storage_utils import DEVICE_TEXT_PARTITION, DEVICE_TEXT_MD, DEVICE_TEXT_MAP
+from pyanaconda.storage_utils import ui_storage_logger, device_type_from_autopart
+from pyanaconda.storage_utils import DEVICE_TEXT_PARTITION, DEVICE_TEXT_MAP
 from pyanaconda.storage_utils import PARTITION_ONLY_FORMAT_TYPES, MOUNTPOINT_DESCRIPTIONS
 from pyanaconda.storage_utils import NAMED_DEVICE_TYPES, CONTAINER_DEVICE_TYPES
 
@@ -78,7 +77,8 @@ from pyanaconda.ui.gui.spokes.lib.refresh import RefreshDialog
 from pyanaconda.ui.gui.spokes.lib.summary import ActionSummaryDialog
 
 from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import size_from_entry
-from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import validate_label, validate_mountpoint, selectedRaidLevel
+from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import validate_label, validate_mountpoint, get_raid_level
+from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import selectedRaidLevel, raidLevelSelection, defaultRaidLevel, requiresRaidSelection, containerRaidLevelsSupported, raidLevelsSupported, defaultContainerRaidLevel
 from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import get_container_type_name, RAID_NOT_ENOUGH_DISKS
 from pyanaconda.ui.gui.spokes.lib.custom_storage_helpers import AddDialog, ConfirmDeleteDialog, DisksDialog, ContainerDialog, HelpDialog
 
@@ -609,6 +609,16 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
 
     def _validate_mountpoint(self, mountpoint, device, device_type, new_fs_type,
                             reformat, encrypted, raid_level):
+        """ Validate various aspects of a mountpoint.
+
+            :param str mountpoint: the mountpoint
+            :param device: blivet.devices.Device instance
+            :param int device_type: one of an enumeration of device types
+            :param str new_fs_type: string representing the new filesystem type
+            :param bool reformat: whether the device is to be reformatted
+            :param bool encrypted: whether the device is to be encrypted
+            :param raid_level: instance of blivet.devicelibs.raid.RAIDLevel or None
+        """
         error = None
         if device_type != DEVICE_TYPE_PARTITION and mountpoint == "/boot/efi":
             error = (_("/boot/efi must be on a device of type %s")
@@ -623,14 +633,16 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
             error = _("%s cannot be encrypted") % new_fs_type
         elif mountpoint == "/" and device.format.exists and not reformat:
             error = _("You must create a new filesystem on the root device.")
-        elif device_type == DEVICE_TYPE_MD and raid_level in (None, "single"):
-            error = _("Devices of type %s require a valid RAID level selection.") % _(DEVICE_TEXT_MD)
 
-        if not error and raid_level not in (None, "single"):
-            md_level = mdraid.getRaidLevel(raid_level)
-            min_disks = md_level.min_members
+        if not error and \
+           (raid_level is not None or requiresRaidSelection(device_type)) and \
+           raid_level not in raidLevelsSupported(device_type):
+            error = _("Device does not support RAID level selection %s.") % raid_level
+
+        if not error and raid_level is not None:
+            min_disks = raid_level.min_members
             if len(self._device_disks) < min_disks:
-                error = _(RAID_NOT_ENOUGH_DISKS) % {"level": md_level,
+                error = _(RAID_NOT_ENOUGH_DISKS) % {"level": raid_level,
                                                     "min" : min_disks,
                                                     "count": len(self._device_disks)}
 
@@ -852,9 +864,7 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
         old_device_info = dict()
 
         new_device_info["device"] = device
-        use_dev = device
-        if device.type == "luks/dm-crypt":
-            use_dev = device.slave
+        use_dev = device.raw_device
 
         log.info("ui: saving changes to device %s", device.name)
 
@@ -1031,8 +1041,8 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
         changed_container_encrypted = (container_encrypted != old_container_encrypted)
 
         container_raid_level = self._device_container_raid_level
-        if container_raid_level == "single" and device_type != DEVICE_TYPE_BTRFS:
-            container_raid_level = None
+        if container_raid_level not in containerRaidLevelsSupported(device_type):
+            container_raid_level = defaultContainerRaidLevel(device_type)
 
         old_device_info["container_raid_level"] = old_container_raid_level
         new_device_info["container_raid_level"] = container_raid_level
@@ -1175,26 +1185,28 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
 
     def _raid_level_visible(self, model, itr, user_data):
         device_type = self._get_current_device_type()
-        raid_level = model[itr][1]
-        return raid_level in get_supported_raid_levels(device_type)
+        raid_level = raid.getRaidLevel(model[itr][1])
+        return raid_level in raidLevelsSupported(device_type)
 
     def _populate_raid(self, raid_level):
-        """ Set up the raid-specific portion of the device details. """
+        """ Set up the raid-specific portion of the device details.
+
+            :param raid_level: RAID level
+            :type raid_level: instance of blivet.devicelibs.raid.RAIDLevel or None
+        """
         device_type = self._get_current_device_type()
         log.debug("populate_raid: %s, %s", device_type, raid_level)
 
-        if device_type == DEVICE_TYPE_MD:
-            base_level = "raid1"
-        else:
+        if not raidLevelsSupported(device_type):
             map(really_hide, [self._raidLevelLabel, self._raidLevelCombo])
             return
 
-        if not raid_level:
-            raid_level = base_level
+        raid_level = raid_level or defaultRaidLevel(device_type)
+        raid_level_name = raidLevelSelection(raid_level)
 
         # Set a default RAID level in the combo.
         for (i, row) in enumerate(self._raidLevelCombo.get_model()):
-            if row[1] == raid_level:
+            if row[1] == raid_level_name:
                 self._raidLevelCombo.set_active(i)
                 break
 
@@ -1226,6 +1238,9 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
             self._device_container_encrypted = False
             self._device_container_size = SIZE_POLICY_AUTO
 
+        self._device_container_raid_level = self._device_container_raid_level \
+           or defaultContainerRaidLevel(devicefactory.get_device_type(use_dev))
+
     def _setup_fstype_combo(self, device):
         # remove any fs types that aren't supported
         remove_indices = []
@@ -1321,10 +1336,7 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
         log.debug("populate_right_side: %s", selector.device)
 
         device = selector.device
-        if device.type == "luks/dm-crypt":
-            use_dev = device.slave
-        else:
-            use_dev = device
+        use_dev = device.raw_device
 
         if hasattr(use_dev, "req_disks") and not use_dev.exists:
             self._device_disks = use_dev.req_disks[:]
@@ -1410,8 +1422,7 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
         else:
             self._sizeEntry.set_tooltip_text(_("This file system may not be resized."))
 
-        raid_level = devicefactory.get_raid_level(device)
-        self._populate_raid(raid_level)
+        self._populate_raid(get_raid_level(device))
         self._populate_container(device=use_dev)
         # do this last in case this was set sensitive in on_device_type_changed
         if use_dev.exists or use_dev.type == "btrfs volume":
@@ -2228,9 +2239,7 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
 
         self._encryptCheckbox.set_sensitive(active)
         if self._current_selector:
-            device = self._current_selector.device
-            if device.type == "luks/dm-crypt":
-                device = device.slave
+            device = self._current_selector.device.raw_device
 
             ancestors = device.ancestors
             ancestors.remove(device)
@@ -2266,8 +2275,8 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
                 return
 
             device = self._current_selector.device
-            if isinstance(device, LUKSDevice):
-                device = device.slave
+            if device:
+                device = device.raw_device
 
         container_size_policy = SIZE_POLICY_AUTO
         if device_type not in CONTAINER_DEVICE_TYPES:
@@ -2379,18 +2388,8 @@ class CustomPartitioningSpoke(NormalSpoke, StorageChecker):
             test_fmt = getFormat("btrfs")
             should_be_btrfs = test_fmt.supported and test_fmt.formattable
             fs_type_sensitive = False
-            with ui_storage_logger():
-                factory = devicefactory.get_device_factory(self._storage_playground,
-                                                         DEVICE_TYPE_BTRFS, 0)
-                container = factory.get_container()
-
-            if container:
-                raid_level = container.dataLevel or "single"
-            else:
-                # here I suppose we could alter the default based on disk count
-                raid_level = "single"
         elif new_type == DEVICE_TYPE_MD:
-            raid_level = "raid1"
+            raid_level = defaultRaidLevel(new_type)
 
         # lvm uses the RHS to set disk set. no foolish minds here.
         exists = self._current_selector and self._current_selector.device.exists
diff --git a/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.glade b/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.glade
index ecec01e..36ad8fa 100644
--- a/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.glade
+++ b/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.glade
@@ -25,6 +25,10 @@
         <col id="1">none</col>
       </row>
       <row>
+        <col id="0" translatable="yes">Single &lt;span foreground="grey"&gt;(No Redundancy, No Striping)&lt;/span&gt;</col>
+        <col id="1">single</col>
+      </row>
+      <row>
         <col id="0" translatable="yes">RAID0 &lt;span foreground="grey"&gt;(Performance)&lt;/span&gt;</col>
         <col id="1">raid0</col>
       </row>
diff --git a/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.py b/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.py
index ed5774d..d02d49a 100644
--- a/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.py
+++ b/pyanaconda/ui/gui/spokes/lib/custom_storage_helpers.py
@@ -24,7 +24,10 @@
 """Helper functions and classes for custom partitioning."""
 
 __all__ = ["size_from_entry", "populate_mountpoint_store", "validate_label",
-           "validate_mountpoint", "selectedRaidLevel", "get_container_type_name",
+           "validate_mountpoint", "get_raid_level",
+           "selectedRaidLevel", "raidLevelSelection",
+           "defaultRaidLevel", "requiresRaidSelection", "defaultContainerRaidLevel",
+           "containerRaidLevelsSupported", "raidLevelsSupported", "get_container_type_name",
            "AddDialog", "ConfirmDeleteDialog", "DisksDialog", "ContainerDialog",
            "HelpDialog"]
 
@@ -32,7 +35,7 @@ import re
 
 from pyanaconda.product import productName
 from pyanaconda.iutil import lowerASCII
-from pyanaconda.storage_utils import size_from_input, get_supported_raid_levels
+from pyanaconda.storage_utils import size_from_input
 from pyanaconda.ui.helpers import InputCheck
 from pyanaconda.ui.gui import GUIObject
 from pyanaconda.ui.gui.helpers import GUIDialogInputCheckHandler
@@ -45,10 +48,13 @@ from blivet.formats import getFormat
 from blivet.devicefactory import SIZE_POLICY_AUTO
 from blivet.devicefactory import SIZE_POLICY_MAX
 from blivet.devicefactory import DEVICE_TYPE_LVM
-from blivet.devicefactory import DEVICE_TYPE_MD
 from blivet.devicefactory import DEVICE_TYPE_BTRFS
 from blivet.devicefactory import DEVICE_TYPE_LVM_THINP
+from blivet.devicefactory import DEVICE_TYPE_MD
+from blivet.devicefactory import get_supported_raid_levels
+from blivet.devicelibs import btrfs
 from blivet.devicelibs import mdraid
+from blivet.devicelibs import raid
 
 import logging
 log = logging.getLogger("anaconda")
@@ -138,8 +144,27 @@ def validate_mountpoint(mountpoint, used_mountpoints, strict=True):
     else:
         return ""
 
+def get_raid_level(device):
+    use_dev = device.raw_device
+
+    raid_level = None
+    if hasattr(use_dev, "level"):
+        raid_level = use_dev.level
+    elif hasattr(use_dev, "dataLevel"):
+        raid_level = use_dev.dataLevel
+    elif hasattr(use_dev, "volume"):
+        raid_level = use_dev.volume.dataLevel
+    elif hasattr(use_dev, "lvs") and len(use_dev.parents) == 1:
+        raid_level = get_raid_level(use_dev.parents[0])
+
+    return raid_level
+
 def selectedRaidLevel(raidLevelCombo):
-    """Interpret the selection of a RAID level combo box."""
+    """Interpret the selection of a RAID level combo box.
+
+       :returns: the selected raid level, None if none selected
+       :rtype: instance of blivet.devicelibs.raid.RaidLevel or NoneType
+    """
     if not raidLevelCombo.get_property("visible"):
         # the combo is hidden when raid level isn't applicable
         return None
@@ -154,7 +179,85 @@ def selectedRaidLevel(raidLevelCombo):
     if selected_level == "none":
         return None
     else:
-        return selected_level
+        return raid.getRaidLevel(selected_level)
+
+def raidLevelSelection(raid_level):
+    """ Returns a string corresponding to the RAID level.
+
+        :param raid_level: a raid level
+        :type raid_level: instance of blivet.devicelibs.raid.RAID or None
+        :returns: a string corresponding to this raid level
+        :rtype: str
+    """
+    return raid_level.name if raid_level else "none"
+
+def defaultRaidLevel(device_type):
+    """ Returns the default RAID level for this device type.
+
+        :param int device_type: an int representing the device_type
+        :returns: the default RAID level for this device type or None
+        :rtype: blivet.devicelibs.raid.RAIDLevel or NoneType
+    """
+    if device_type == DEVICE_TYPE_MD:
+        return mdraid.RAID_levels.raidLevel("raid1")
+
+    return None
+
+def defaultContainerRaidLevel(device_type):
+    """ Returns the default RAID level for this device type's container type.
+
+        :param int device_type: an int representing the device_type
+        :returns: the default RAID level for this device type's container or None
+        :rtype: blivet.devicelibs.raid.RAIDLevel or NoneType
+    """
+    if device_type == DEVICE_TYPE_BTRFS:
+        return btrfs.RAID_levels.raidLevel("single")
+
+    return None
+
+def requiresRaidSelection(device_type):
+    """ Whether GUI requires a RAID level be selected for this device type."""
+    return device_type == DEVICE_TYPE_MD
+
+def raidLevelsSupported(device_type):
+    """ The raid levels anaconda supports for this device type.
+
+        It supports any RAID levels that it expects to support and that blivet
+        supports for the given device type.
+
+        Since anaconda only ever allows the user to choose RAID levels for
+        device type DEVICE_TYPE_MD, hiding the RAID menu for all other device
+        types, the function only returns a non-empty set for this device type.
+        If this changes, then so should this function, but at this time it
+        is not clear what RAID levels should be offered for other device types.
+
+        :param int device_type: one of an enumeration of device types
+        :returns: a set of supported raid levels
+        :rtype: a set of instances of blivet.devicelibs.raid.RAIDLevel
+    """
+    if device_type == DEVICE_TYPE_MD:
+        supported = set(raid.RAIDLevels(["raid0", "raid1", "raid4", "raid5", "raid6", "raid10"]))
+    else:
+        supported = set()
+    return get_supported_raid_levels(device_type).intersection(supported)
+
+def containerRaidLevelsSupported(device_type):
+    """ The raid levels anaconda supports for a container for this
+        device_type.
+
+        For LVM, anaconda supports LVM on RAID, but also allows no RAID.
+
+        :param int device_type: one of an enumeration of device types
+        :returns: a set of supported raid levels
+        :rtype: a set of instances of blivet.devicelibs.raid.RAIDLevel
+    """
+    if device_type in (DEVICE_TYPE_LVM, DEVICE_TYPE_LVM_THINP):
+        supported = set(raid.RAIDLevels(["raid0", "raid1", "raid4", "raid5", "raid6", "raid10"]))
+        return get_supported_raid_levels(DEVICE_TYPE_MD).intersection(supported).union(set([None]))
+    elif device_type == DEVICE_TYPE_BTRFS:
+        supported = set(raid.RAIDLevels(["raid0", "raid1", "raid10", "single"]))
+        return get_supported_raid_levels(DEVICE_TYPE_BTRFS).intersection(supported)
+    return set()
 
 def get_container_type_name(device_type):
     return CONTAINER_TYPE_NAMES.get(device_type, _("container"))
@@ -429,10 +532,9 @@ class ContainerDialog(GUIObject, GUIDialogInputCheckHandler):
 
         raid_level = selectedRaidLevel(self._raidLevelCombo)
         if raid_level:
-            md_level = mdraid.getRaidLevel(raid_level)
-            min_disks = md_level.min_members
+            min_disks = raid_level.min_members
             if len(paths) < min_disks:
-                self._error = (_(RAID_NOT_ENOUGH_DISKS) % {"level" : md_level,
+                self._error = (_(RAID_NOT_ENOUGH_DISKS) % {"level" : raid_level,
                                                            "min" : min_disks,
                                                            "count" : len(paths)})
                 self._error_label.set_text(self._error)
@@ -484,32 +586,29 @@ class ContainerDialog(GUIObject, GUIDialogInputCheckHandler):
         else:
             self._sizeEntry.set_sensitive(True)
 
-    def _raid_level_visible(self, model, itr, user_data):
-        raid_level = model[itr][1]
 
-        # This is weird because for lvm's container-wide raid we use md.
-        if self.device_type in (DEVICE_TYPE_LVM, DEVICE_TYPE_LVM_THINP):
-            # no RAID is an option for LVM(ThP) as well
-            return (raid_level == "none" or
-                    raid_level in get_supported_raid_levels(DEVICE_TYPE_MD))
-        else:
-            return raid_level in get_supported_raid_levels(self.device_type)
+    def _raid_level_visible(self, model, itr, user_data):
+        raid_level_str = model[itr][1]
+        raid_level = raid.getRaidLevel(raid_level_str) if raid_level_str != "none" else None
+        return raid_level in containerRaidLevelsSupported(self.device_type)
 
     def _populate_raid(self):
-        """ Set up the raid-specific portion of the device details. """
-        if not get_supported_raid_levels(self.device_type):
-            # no supported raid levels for this device type
+        """ Set up the raid-specific portion of the device details.
+
+            Hide the RAID level menu if this device type does not support RAID.
+            Choose a default RAID level.
+        """
+        if not containerRaidLevelsSupported(self.device_type):
             map(really_hide, [self._raidLevelLabel, self._raidLevelCombo])
             return
 
-        raid_level = self.raid_level
-        if not raid_level or raid_level == "single":
-            raid_level = "none"
+        raid_level = self.raid_level or defaultContainerRaidLevel(self.device_type)
+        raid_level_name = raidLevelSelection(raid_level)
 
         # Set a default RAID level in the combo.
         for (i, row) in enumerate(self._raidLevelCombo.get_model()):
             log.debug("container dialog: raid level %s", row[1])
-            if row[1] == raid_level:
+            if row[1] == raid_level_name:
                 self._raidLevelCombo.set_active(i)
                 break
 
-- 
1.8.3.1


