[master/rhel7] Don't add redundant grub installs if stage1 is not on a RAID

David Shea dshea at redhat.com
Thu Jan 16 19:48:02 UTC 2014


On 01/16/2014 02:25 PM, David Lehman wrote:
> On Thu, 2014-01-16 at 14:09 -0500, David Shea wrote:
>> This broke the case where stage1 is a partition such as prepboot or
>> biosboot that must be installed to a particular standard partition, and
>> /boot is on a RAID1. If we encounter such a case, assume that stage1
>> should be left as it is and let the RAID take care of the stage2
>> redundancy.
> Keep in mind that you're changing code that runs on many platforms to
> appease two vast-minority platforms. Please just take a long, hard look
> and be certain that you're not breaking any /boot-on-md cases for x86
> with this patch.
>
>> Resolves: rhbz#1035720
>> ---
>>   pyanaconda/bootloader.py | 38 ++++++++++++++++----------------------
>>   1 file changed, 16 insertions(+), 22 deletions(-)
>>
>> diff --git a/pyanaconda/bootloader.py b/pyanaconda/bootloader.py
>> index 4127dbf..2a628b2 100644
>> --- a/pyanaconda/bootloader.py
>> +++ b/pyanaconda/bootloader.py
>> @@ -1267,30 +1267,24 @@ class GRUB(BootLoader):
>>       def install_targets(self):
>>           """ List of (stage1, stage2) tuples representing install targets. """
>>           targets = []
>> +
>> +        # make sure we have stage1 and stage2 installed with redundancy
>> +        # so that boot can succeed even in the event of failure or removal
>> +        # of some of the disks containing the member partitions of the
>> +        # /boot array. If the stage1 is not a disk, it probably needs to
>> +        # be a partition on a particular disk (biosboot, prepboot), so only
>> +        # add the redundant targets if installing stage1 to a disk that is
>> +        # a member of the stage2 array.
>> +
>>           if self.stage2_device.type == "mdarray" and \
>> -           self.stage2_device.level == 1:
>> -            # make sure we have stage1 and stage2 installed with redundancy
>> -            # so that boot can succeed even in the event of failure or removal
>> -            # of some of the disks containing the member partitions of the
>> -            # /boot array
>> +           self.stage2_device.level == 1 and \
>> +           self.stage1_device.isDisk and \
>> +           self.stage2_device.dependsOn(self.stage1_device):
>>               for stage2dev in self.stage2_device.parents:
>> -                if self.stage1_device.isDisk:
>> -                    # install to mbr
>> -                    if self.stage2_device.dependsOn(self.stage1_device):
>> -                        # if target disk contains any of /boot array's member
>> -                        # partitions, set up stage1 on each member's disk
>> -                        # and stage2 on each member partition
>> -                        stage1dev = stage2dev.disk
>> -                    else:
>> -                        # if target disk does not contain any of /boot array's
>> -                        # member partitions, install stage1 to the target disk
>> -                        # and stage2 to each of the member partitions
>> -                        stage1dev = self.stage1_device
>> -                else:
>> -                    # target is /boot device and /boot is raid, so install
>> -                    # grub to each of /boot member partitions
>> -                    stage1dev = stage2dev
> I think we still support this case via kickstart, so we can't just drop
> the ability to install the bootloader to the first sector of the stage2
> device.

That would still work; it would just use the md device as the target instead
of doing separate grub2-installs to each component of the array. Does grub2
need the actual partition devices?
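To make that concrete, here is a rough stand-in sketch (toy objects, not the
real blivet device classes) of what install_targets would produce for the
kickstart /boot-on-md case before and after this patch:

    from collections import namedtuple

    # minimal stand-ins for the attributes install_targets looks at
    Dev = namedtuple("Dev", ["name", "type", "level", "isDisk", "parents"])

    sda1 = Dev("sda1", "partition", None, False, [])
    sdb1 = Dev("sdb1", "partition", None, False, [])
    md0 = Dev("md0", "mdarray", 1, False, [sda1, sdb1])

    # kickstart points the bootloader at the /boot array itself
    stage1_device = stage2_device = md0

    # pre-patch: stage1 is not a disk, so grub went to each member partition
    old_targets = [(member, member) for member in stage2_device.parents]

    # post-patch: the redundancy branch is skipped when stage1 is not a disk,
    # so the md device itself becomes the single install target
    new_targets = [(stage1_device, stage2_device)]

    print([(s1.name, s2.name) for s1, s2 in old_targets])
    # [('sda1', 'sda1'), ('sdb1', 'sdb1')]
    print([(s1.name, s2.name) for s1, s2 in new_targets])
    # [('md0', 'md0')]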

>
>> -
>> +                # if target disk contains any of /boot array's member
>> +                # partitions, set up stage1 on each member's disk
>> +                # and stage2 on each member partition
>> +                stage1dev = stage2dev.disk
> Only PartitionDevice has a "disk" attribute. Every StorageDevice has a
> "disks" attribute which is a list of the disks it occupies.

That would have been broken before, then, since I didn't add that line; I
just changed the indentation, and the post-patch code runs under the same
conditions the pre-patch code did. Is there ever a case where the parents of
an MDRaidArrayDevice will contain members that are not PartitionDevices?
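If such a case does exist, a purely hypothetical defensive variant (a sketch,
not part of this patch) could fall back to the generic "disks" list that every
StorageDevice provides:

    def redundant_targets(stage1_device, stage2_device):
        """Member loop that tolerates parents without a 'disk' attribute."""
        targets = []
        for stage2dev in stage2_device.parents:
            # PartitionDevice exposes .disk; other StorageDevices only .disks
            stage1dev = getattr(stage2dev, "disk", None)
            if stage1dev is None and getattr(stage2dev, "disks", None):
                stage1dev = stage2dev.disks[0]
            targets.append((stage1dev or stage1_device, stage2dev))
        return targets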



