[PATCH] Count with the extra metadata extents for RAID consistently (#1065737)
Vratislav Podzimek
vpodzime at redhat.com
Mon Feb 24 16:26:34 UTC 2014
On Mon, 2014-02-24 at 09:05 -0600, David Lehman wrote:
> On Mon, 2014-02-24 at 10:41 +0100, Vratislav Podzimek wrote:
> > On Fri, 2014-02-21 at 08:55 -0500, Anne Mulhern wrote:
> > >
> > >
> > >
> > > ----- Original Message -----
> > > > From: "Vratislav Podzimek" <vpodzime at redhat.com>
> > > > To: anaconda-patches at lists.fedorahosted.org
> > > > Sent: Friday, February 21, 2014 5:23:43 AM
> > > > Subject: [PATCH] Count with the extra metadata extents for RAID consistently (#1065737)
> > > >
> > > > When creating an LVM setup on top of RAID, we add some extra extents to make
> > > > sure the metadata will fit. However, when reporting free space in a VG or
> > > > when creating an LV in a VG whose size was pushed to the limit, we need to
> > > > take that extra metadata into account as well.
> > > >
> > > > Signed-off-by: Vratislav Podzimek <vpodzime at redhat.com>
> > > > ---
> > > >  blivet/devicefactory.py |  9 ++++++++-
> > > >  blivet/devices.py       | 10 ++++++++++
> > > >  2 files changed, 18 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/blivet/devicefactory.py b/blivet/devicefactory.py
> > > > index 2d04005..5e0b7f1 100644
> > > > --- a/blivet/devicefactory.py
> > > > +++ b/blivet/devicefactory.py
> > > > @@ -1138,7 +1138,7 @@ class LVMFactory(DeviceFactory):
> > > >          if self.container and self.container.exists:
> > > >              return size
> > > >
> > > > -        if self.container_size == 0:
> > > > +        if self.container_size == SIZE_POLICY_AUTO:
> > > >              # automatic container size management
> > > >              if self.container:
> > > >                  size += sum([p.size for p in self.container.parents])
> > > > @@ -1213,6 +1213,13 @@ class LVMFactory(DeviceFactory):
> > > >
> > > >      def _get_new_device(self, *args, **kwargs):
> > > >          """ Create and return the factory device as a StorageDevice. """
> > > > +
> > > > +        if self.container_raid_level and self.container_size in [SIZE_POLICY_AUTO,
> > > > +                                                                 SIZE_POLICY_MAX]:
> > > > +            # container pushed to the limit, but we need some extra space for
> > > > +            # metadata, so we need to make the LV smaller
> > > > +            extra_md_space = LVM_PE_SIZE * len(self.disks) * 5
> > > > +            kwargs["size"] -= extra_md_space
> > > >          return self.storage.newLV(*args, **kwargs)
> > > >
> > > >      def _set_name(self):
> > > > diff --git a/blivet/devices.py b/blivet/devices.py
> > > > index a607c1b..215b464 100644
> > > > --- a/blivet/devices.py
> > > > +++ b/blivet/devices.py
> > > > @@ -2424,9 +2424,19 @@ class LVMVolumeGroupDevice(DMDevice):
> > > >          """ The amount of free space in this VG (in MB). """
> > > >          # TODO: just ask lvm if isModified returns False
> > > >
> > > > +        # get the number of disks used by PVs on RAID (if any)
> > > > +        raid_disks = 0
> > > > +        for pv in self.pvs:
> > > > +            if isinstance(pv, MDRaidArrayDevice):
> > > > +                raid_disks = max([raid_disks, len(pv.disks)])
> > >
> > > I'm puzzled by this chunk. It seems like it could be written equivalently as:
> > >
> > > raid_disks = max([0] + [len(pv.disks) for pv in self.pvs if isinstance(pv, MDRaidArrayDevice)])
> > That's true and it looks better to me, changing locally.
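
For what it's worth, a quick toy check that the two forms agree (FakePV is
a stand-in defined only for this example, not a blivet class, and the
isinstance filter is left out since it applies identically to both forms):

    class FakePV(object):
        def __init__(self, disks):
            self.disks = disks

    pvs = [FakePV(["sda", "sdb", "sdc"]), FakePV(["sdd", "sde"])]

    # loop form from the patch
    raid_disks = 0
    for pv in pvs:
        raid_disks = max([raid_disks, len(pv.disks)])

    # suggested comprehension form
    assert raid_disks == max([0] + [len(pv.disks) for pv in pvs])  # both 3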
> >
> > >
> > > It puzzles me that you're not adding something somewhere, since there might be more than one MDRaidArrayDevice in the pvs, and I would have thought that each should contribute to this quantity somehow.
> > Well, yeah, it puzzles me as well. But the reason is that
> > LVMFactory._get_total_space uses the same algorithm. Here is the code
> > snippet:
> >
> >
> > > if self.container_raid_level:
> > >     # add five extents per disk to account for md metadata
> > >     # (it was originally one per disk but that wasn't enough for raid5)
> > >     size += LVM_PE_SIZE * len(self.disks) * 5
> >
>
> This code's intention is to pad the disk space requirement by five
> extents (20 MiB) per disk (or MD member). It isn't at all scientific
> beyond observations that one extent per member was enough for RAID0 and
> RAID1 but not enough for RAID5.
I know. I just wanted to stick with the same behaviour in the two places
that both need to reflect this issue/feature.
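
For reference, the arithmetic both places are meant to agree on comes down
to this (a minimal standalone sketch, assuming the default 4 MiB LVM
physical extent size; md_metadata_pad is an illustrative name, not a blivet
function):

    LVM_PE_SIZE = 4  # MiB, the default LVM physical extent size

    def md_metadata_pad(disk_count):
        # five extents per disk/member, per the heuristic quoted above
        return LVM_PE_SIZE * disk_count * 5  # MiB

    # e.g. a 3-member RAID5 PV: 5 extents * 4 MiB * 3 members = 60 MiB
    print(md_metadata_pad(3))  # -> 60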
> If you're willing to experiment, you
> could convert this to a value for the entire VG instead of per-disk.
Will try to do it, but not for RHEL 7.0.
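
As a rough sketch of that idea (untested; the flat per-VG extent count is a
placeholder that would have to be tuned by experiment, not a measured
value):

    LVM_PE_SIZE = 4  # MiB, the default LVM physical extent size

    # hypothetical: reserve a flat number of extents per VG instead of
    # five per disk/member
    MD_METADATA_EXTENTS_PER_VG = 10  # placeholder, not a measured value

    def md_metadata_pad_vg():
        return LVM_PE_SIZE * MD_METADATA_EXTENTS_PER_VG  # MiB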
--
Vratislav Podzimek
Anaconda Rider | Red Hat, Inc. | Brno - Czech Republic