[PATCH 2/3] Unset raid level before manipulating member device set. (#1148373)

Vratislav Podzimek vpodzime at redhat.com
Fri Oct 3 06:38:53 UTC 2014


On Thu, 2014-10-02 at 14:46 -0500, David Lehman wrote:
> The devices enforce restrictions on member set size based on raid
> level, so we unset the level before adjusting the member set. We
> always set the raid level explicitly afterwards via either
> _reconfigure_container or _reconfigure_device. If the device is not
> already defined, the new method is a no-op.
> ---
>  blivet/devicefactory.py | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/blivet/devicefactory.py b/blivet/devicefactory.py
> index 41bafe8..7ea0fb4 100644
> --- a/blivet/devicefactory.py
> +++ b/blivet/devicefactory.py
> @@ -339,6 +339,11 @@ class DeviceFactory(object):
>              self._container_raid_level = None
>          else:
>              self._container_raid_level = raid.getRaidLevel(value)
> +
> +    def _unset_raid_levels(self):
> +        """ Unset raid level for container and device. """
> +        pass
> +
>      #
>      # methods related to device size and disk space requirements
>      #
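
The base-class hook here is the usual template-method arrangement: a
no-op default that the type-specific factories override. A stripped-down
sketch of the shape I think this takes (names other than
_unset_raid_levels are made up for illustration, not blivet API):

    class FactoryBase(object):
        def _unset_raid_levels(self):
            """ Unset raid level for container and device; no-op by default. """
            pass

    class MDLikeFactory(FactoryBase):
        def __init__(self, device):
            self.device = device

        def _unset_raid_levels(self):
            # only touch a device that has not been written to disk yet
            if self.device is not None and not self.device.exists:
                self.device.level = "linear"
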
> @@ -758,6 +763,8 @@ class DeviceFactory(object):
>          self._handle_no_size()
>          self._set_up_child_factory()
>  
> +        self._unset_raid_levels()
> +
>          # Configure any devices this device will use as building blocks, except
>          # for type-specific container devices. In the LVM example, this will
>          # configure the PVs.
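
For anybody else following along, my reading of why the ordering
matters: the member-set setters validate against the current raid
level, so an intermediate state with too few members raises. A toy
example of the kind of check being worked around (illustrative names
only, not the actual blivet code):

    class RaidLevel(object):
        def __init__(self, name, min_members):
            self.name = name
            self.min_members = min_members

    LINEAR = RaidLevel("linear", 1)
    RAID1 = RaidLevel("raid1", 2)

    class ToyDevice(object):
        def __init__(self, level, members):
            self.level = level
            self.members = list(members)

        def remove_member(self, member):
            # member-set size is enforced against the *current* level
            if len(self.members) - 1 < self.level.min_members:
                raise ValueError("not enough members for %s" % self.level.name)
            self.members.remove(member)

    dev = ToyDevice(RAID1, ["sda1", "sdb1"])
    try:
        dev.remove_member("sdb1")    # raises: raid1 needs two members
    except ValueError:
        pass
    dev.level = LINEAR               # unset/lower the level first...
    dev.remove_member("sdb1")        # ...and the same removal succeeds

With the level unset up front, _configure can shrink or grow the member
set freely, and _reconfigure_container/_reconfigure_device re-apply the
real level once the final member set is in place.
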
> @@ -1261,6 +1268,7 @@ class LVMFactory(DeviceFactory):
>      def _configure(self):
>          self._set_container()
>          if self.container and not self.container.exists:
> +            self._unset_raid_levels()
>              # If there's already a VG associated with this LV that doesn't have
>              # MD PVs we need to remove the partition PVs.
>              # Likewise, if there's already a VG whose PV is an MD we need to
> @@ -1560,6 +1568,10 @@ class MDFactory(DeviceFactory):
>  
>          # adjust the bitmap setting
>  
> +    def _unset_raid_levels(self):
> +        if not getattr(self.device, "exists", True):
> +            self.device.level = "linear"
> +
>      def _get_new_device(self, *args, **kwargs):
>          """ Create and return the factory device as a StorageDevice. """
>          kwargs["level"] = self.raid_level
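
One detail worth noting about the getattr default: it makes the hook a
no-op both when the device already exists and when self.device is None,
since getattr falls back to the default in the None case, e.g.:

    getattr(None, "exists", True)    # -> True, so the body is skipped
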
> @@ -1658,6 +1670,10 @@ class BTRFSFactory(DeviceFactory):
>          # set the new level
>          self.container.dataLevel = self.container_raid_level
>  
> +    def _unset_raid_levels(self):
> +        if not getattr(self.container, "exists", True):
> +            self.container.dataLevel = "single"
Does this really help? I may be confused, but the bug report's traceback
was triggered by BTRFS's 'single' dataLevel, which still requires at
least one member; that member was removed when the LUKS layer was added.
Is that restriction bypassed somewhere else?
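
To illustrate the concern with toy code (not the real BTRFS volume
class): even with dataLevel forced to 'single', at least one member is
still required, so pulling the sole member out for the LUKS re-layering
would trip the same check:

    class ToyBTRFSVolume(object):
        MIN_MEMBERS = {"single": 1, "raid1": 2}

        def __init__(self, data_level, members):
            self.dataLevel = data_level
            self.members = list(members)

        def remove_member(self, member):
            if len(self.members) - 1 < self.MIN_MEMBERS[self.dataLevel]:
                raise ValueError("%s requires at least %d member(s)"
                                 % (self.dataLevel,
                                    self.MIN_MEMBERS[self.dataLevel]))
            self.members.remove(member)

    vol = ToyBTRFSVolume("single", ["sda1"])
    vol.remove_member("sda1")   # ValueError even though dataLevel is "single"
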

-- 
Vratislav Podzimek

Anaconda Rider | Red Hat, Inc. | Brno - Czech Republic


