Hi, I have a 3-disk RAID 5 where one disk broke (/dev/sdb). /dev/sda1 has /boot; /dev/sda2, /dev/sdb1, and /dev/sdc1 are the RAID partitions. In the process of moving disks around to recover, two unfortunate things happened. First, my system stopped recognizing the second disk (the one with no /boot on it), which I have since added back into the array. Second, my system in emergency mode doesn't have cryptsetup for opening the LUKS container and will not recognize my USB ports. How do I get the system to open my disk up as before?
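
For reference, the intended storage stack looks roughly like this (I'm leaving out the LUKS mapper and LVM names, since I can't open the container right now):

/dev/sda1                          -> /boot
/dev/sda2 + /dev/sdb1 + /dev/sdc1  -> /dev/md127 (RAID 5) -> LUKS container -> LVM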

Attempting to boot gives, via journalctl:
"systemd-udevd[434]: Process '/sbin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1." and
"systemd-udevd[426]: Process '/sbin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1." and
"systemd-udevd[434]: inotify_add_watch(8, /dev/sdc1, 10) failed: No such file or directory" and
"systemd-udevd[426]: Process '/sbin/mdadm  -I /dev/sdb' failed with exit code 2." and
"systemd-udevd[434]: inotify_add_watch(8, /dev/sdb1, 10) failed: No such file or directory"

Apparently the boot sequence is pre-emptively deleting the sdb and sdc partition nodes from /dev. When I execute:
"partx -a --nr 1-1024 /dev/sdb" and "partx -a --nr 1-1024 /dev/sdc" it re-creates their first partitions' device nodes.
Then I edit mdadm.conf (which is apparently rewritten automatically by anaconda each time I try to boot) to read:
"DEVICE /dev/sda2 /dev/sdb1 /dev/sdc1
MAILADDR root
HOMEHOST <system>
ARRAY /dev/md127 level=5 devices=/dev/sda2,/dev/sdb1,/dev/sdc1 metadata=1.2 UUID=f6224251:0ba59f55:05d9cdf4:98e79eca"
so that I can execute:
"mdadm --stop /dev/md127" to stop the array that came up with only 1 disk, and
"mdadm --assemble /dev/md127" to start the array with all 3 disks.

The RAID array is recognized, but now I'm stuck. What are the next steps for opening the LUKS container and then the LVM?
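
My rough guess at the remaining steps is something like the following, where "cryptroot" and the volume group / logical volume names are placeholders I'm making up:

cryptsetup open /dev/md127 cryptroot    # "luksOpen" on older cryptsetup versions
vgchange -ay                            # activate the volume group(s) inside the container
mount /dev/mapper/<vg>-<lv> /mnt        # placeholder VG/LV names

but as I said, the emergency shell doesn't have cryptsetup available, so I'm not sure how to get there. Thanks.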