[master 1/1] Do not reserve extra space for metadata in a VG with RAID PVs

vpodzime installerbot-noreply at redhat.com
Mon Aug 31 13:53:19 UTC 2015


From: Vratislav Podzimek <vpodzime at redhat.com>

Reserving 5 extents per disk per LV for metadata when a VG sits on top of RAID
PVs is a piece of magic that saved us trouble with placing LVM on top of RAID,
but at its core it is simply wrong and makes no sense. LVM does not allocate
any extra metadata when an LV is added to a VG (unless it is a metadata LV, of
course). The RAID device below the VG (the PV) is just a block device of some
size; no matter how many disks sit underneath, the metadata space used by LVM
is simply subtracted from the RAID device's (PV's) size. Moreover, the LVM
extent size has nothing to do with the RAID device, so tying the two together
is mixing apples and oranges.
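
As an illustration (a minimal sketch with made-up names, not blivet code), the
free space of a VG depends only on the PVs' usable sizes and the LVs allocated
in it; the number of disks backing a RAID PV never enters the formula:

    # Minimal sketch (hypothetical names, not the blivet API): VG free
    # space = sum of PV extents - extents used by LVs - reserved extents.
    # There is no per-disk metadata term anywhere.
    def vg_free_extents(pv_usable_extents, lv_extents, reserved_extents=0):
        total = sum(pv_usable_extents)   # a PV is just a block device
        used = sum(lv_extents) + reserved_extents
        return total - used

    # A PV carved out of a 4-disk RAID5 array and a PV on a plain partition
    # of the same size contribute exactly the same number of extents:
    assert vg_free_extents([1000], [200, 300]) == 500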

If things don't fit, we need to tweak our calculation of the free space/size
of an MD RAID device (depending on the RAID level and the underlying disks),
since that is the only place where such tweaks really make sense. However,
after quite a lot of testing it seems we already reserve more than enough
space for the metadata.
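
To make the point concrete, a minimal sketch (hypothetical numbers and helper,
not blivet's mdraid code) of where such a per-level adjustment would belong,
namely in the estimate of the MD array's usable size:

    # Hypothetical sketch: any space set aside for MD metadata belongs in
    # the array size estimate, which already depends on the RAID level and
    # on the member disks.
    MD_METADATA_OVERHEAD = 2 * 1024 ** 2   # assumed per-member reservation (bytes)

    def md_array_size(level, member_sizes):
        usable = [size - MD_METADATA_OVERHEAD for size in member_sizes]
        if level == 0:
            return sum(usable)                      # striping uses every member
        if level == 1:
            return min(usable)                      # mirroring: one member's worth
        if level == 5:
            return min(usable) * (len(usable) - 1)  # one member's worth of parity
        raise ValueError("unhandled RAID level: %s" % level)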
---
 blivet/devicefactory.py | 5 -----
 blivet/devices/lvm.py   | 5 -----
 blivet/partitioning.py  | 7 -------
 3 files changed, 17 deletions(-)

diff --git a/blivet/devicefactory.py b/blivet/devicefactory.py
index 2be78da..0c795be 100644
--- a/blivet/devicefactory.py
+++ b/blivet/devicefactory.py
@@ -1258,11 +1258,6 @@ def _get_total_space(self):
                 size -= blockdev.lvm.get_lv_physical_size(self.device.size, lvm.LVM_PE_SIZE)
                 log.debug("size cut to %s to omit old device space", size)
 
-        if self.container_raid_level:
-            # add five extents per disk to account for md metadata
-            # (it was originally one per disk but that wasn't enough for raid5)
-            size += lvm.LVM_PE_SIZE * len(self.disks) * 5
-
         if self.container_encrypted:
             # Add space for LUKS metadata, each parent will be encrypted
             size += lvm.LVM_PE_SIZE * len(self.disks)
diff --git a/blivet/devices/lvm.py b/blivet/devices/lvm.py
index 0bc420a..04ac753 100644
--- a/blivet/devices/lvm.py
+++ b/blivet/devices/lvm.py
@@ -381,11 +381,6 @@ def freeSpace(self):
         # total the sizes of any LVs
         log.debug("%s size is %s", self.name, self.size)
         used = sum((lv.vgSpaceUsed for lv in self.lvs), Size(0))
-        if not self.exists and raid_disks:
-            # (only) we allocate (5 * num_disks) extra extents for LV metadata
-            # on RAID (see the devicefactory.LVMFactory._get_total_space method)
-            new_lvs = [lv for lv in self.lvs if not lv.exists]
-            used += len(new_lvs) * 5 * raid_disks * self.peSize
         used += self.reservedSpace
         free = self.size - used
         log.debug("vg %s has %s free", self.name, free)
diff --git a/blivet/partitioning.py b/blivet/partitioning.py
index cd42130..6505825 100644
--- a/blivet/partitioning.py
+++ b/blivet/partitioning.py
@@ -1413,13 +1413,6 @@ def addRequest(self, req):
             raise ValueError(_("VGChunk requests must be of type "
                              "LVRequest"))
 
-        # (only) we allocate (5 * num_disks) extra extents for LV metadata
-        # on RAID (see the devicefactory.LVMFactory._get_total_space method)
-        if not req.device.exists and req.device.vg.pvs:
-            max_raid_disks = max(len(pv.disks) for pv in req.device.vg.pvs)
-            if max_raid_disks > 1:
-                self.pool -= 5 * max_raid_disks
-
         super(VGChunk, self).addRequest(req)
 
     def lengthToSize(self, length):


-- 
To view this commit on github, visit https://github.com/rhinstaller/blivet/commit/cfad323514a42fffccc54a60c92fede4c8368679

