On Fri, 24 Dec 2021 at 12:11, George N. White III <gnwiii(a)gmail.com> wrote:
On Fri, 24 Dec 2021 at 10:50, Tom Horsley <horsley1953(a)gmail.com> wrote:
On Fri, 24 Dec 2021 15:38:05 +0100, cen wrote:

> I recently had to replace a bad disk in a raid1 array, and
> finding proper docs was not a good experience.
I've always noticed that about raid in general. Thousands of
internet pages telling you how redundant arrays protect you from
disk failures and you ought to use them. Nothing at all saying
what you do when one of those disks fails :-).
Though I have heard the claim that all you have to do is swap in
a new blank disk and power up the system, and magic happens.
That was certainly the case for external RAID boxes. Some could
hot-swap a failed drive. Back when a 10G SCSI drive was as big as
you could get, we used an external RAID with two live spares.
When a drive failed, a spare came up without manual intervention.
You could then replace the failed drive while the RAID was
powered up. The only failure I recall was when the host system
SCSI controller died.
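With Linux software RAID (mdadm), the rebuild is only automatic
like that if you set up a hot spare in advance; otherwise it is a
manual sequence, roughly like the sketch below. The device names
(md0, sda, sdb) are just examples, adjust for your own layout:

  # mark the failing member faulty and remove it from the array
  mdadm --manage /dev/md0 --fail /dev/sdb1
  mdadm --manage /dev/md0 --remove /dev/sdb1
  # after swapping in the new disk, copy the partition layout
  # from the surviving disk (sfdisk shown here is for MBR;
  # sgdisk has a similar dump/restore for GPT)
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  # add the new partition back and watch the resync progress
  mdadm --manage /dev/md0 --add /dev/sdb1
  cat /proc/mdstat

Adding an extra disk to an array whose slots are already full
leaves it as a hot spare, and then a later failure does kick off
a rebuild on its own, much like those external boxes.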
I remember, way back, a Novell 4.x server that had a duplexed
raid (mirror) system with two 1G system disks and four 9G drives
(the largest disks at the time). It had two SCSI controllers,
with one drive of each mirrored pair on each controller, to allow
for either a disk or a controller failure. It was a cool setup,
but pre-testing found an issue: the primary system disk had a
small DOS boot partition that was not mirrored to the other
system disk. So if the primary system disk failed, the computer
would continue to run off the secondary, but if you rebooted it
would fail. It was an easy process to manually copy the partition
to the other disk, and then one could swap their positions.
Otherwise it was great. We ran 10 computer labs off that Novell
4.x server, a 350MHz AMD K6-2 with six SCSI disks and 192M of
RAM, and never had a drive fail over many years.
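That unmirrored boot partition is the same trap Linux RAID1 users
can hit today: md mirrors the data, but the boot code has to be
put on each disk yourself. A minimal sketch for a BIOS/GRUB2
machine (sda and sdb here are example mirror members):

  # install the bootloader on both members so either disk can
  # boot the system alone
  grub2-install /dev/sda
  grub2-install /dev/sdb

On EFI systems the EFI system partition is normally not mirrored
either, so it needs the same kind of manual attention.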
Our admin did have a system running SCO that had a failure in a
three-disk array. It kept working fine after the failure, but
when they replaced the bad drive, the rebuild that was supposed
to be automatic failed. Somehow it mixed up which drive to
rebuild and corrupted the whole raid. (Could have been operator
error, but that is what they said happened.) Fortunately they had
made a backup right before they did the replacement, so it was
just a couple of days of downtime. But I know many people never
test that the system works or how to do the restores, and just
hope it works. I like to also do bare-metal backups, just to be
sure. I am running one right now from my only Windows 10 machine
to one of my 5 Fedora machines.
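That mixed-up-drive failure mode is worth guarding against:
before pulling anything, confirm which member the kernel actually
marked faulty and match it to a physical serial number. Roughly
(md0 and sdb are example names again):

  # see which member is flagged (F) / faulty
  cat /proc/mdstat
  mdadm --detail /dev/md0
  # match the Linux device name to the drive on the shelf by
  # its serial number
  smartctl -i /dev/sdb | grep -i serial

Pulling the wrong disk out of a degraded mirror is exactly how a
recoverable failure becomes a restore-from-backup day.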
Best of luck to everyone. Unfortunately, drives fail, and often
don't give warnings. But prices are much lower now. I recall the
9G SCSI drives were something like $450 each. My first computer
had an option for a 20M Seagate ST-225, but it was a $2,000
option in 1983, so I got a Heathkit H-120 dual-CPU (8088/8085)
system with dual 320K floppies and 768K of RAM (more RAM than an
IBM PC). Loved that machine, with its 8MHz CPU.
Merry Christmas to those who celebrate it, and best wishes to
others.
--
George N. White III