I've got a two-disk raid and I want to stop it and convert the disks to some non-raid usage. The raid was never used and has no data. Also, the raid disks are encrypted, along with the rest of the disks in the machine.
When I try to stop the raid I get the following:
# mdadm --stop /dev/md127
mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?
Googling shows a number of folks hitting this problem across several Fedora releases, but sadly I could not find any fix. For some, the (nuclear) solution was to switch to a different Linux distro or to reinstall. I've got a lot of data on other disks in this system and really don't want a nuclear solution.
Background information:
$ uname -a
Linux medusa 3.17.7-200.fc20.x86_64 #1 SMP Wed Dec 17 03:35:33 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
MDADM

# mdadm --misc --detail /dev/md127
        Version : 1.2
  Creation Time : Fri Apr 25 17:19:52 2014
     Raid Level : raid1
     Array Size : 3906885632 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Feb 1 09:37:32 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:raid1
           UUID : be78cb55:f26bbf2b:7ad9b344:a2427875
         Events : 8527

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
MDADM.CONF

# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/raid1 level=raid1 num-devices=2 UUID=be78cb55:f26bbf2b:7ad9b344:a2427875
LSOF

# lsof | grep md127
md127_rai  1044        root  cwd  DIR      253,1  4096      2  /
md127_rai  1044        root  rtd  DIR      253,1  4096      2  /
md127_rai  1044        root  txt  unknown                      /proc/1044/exe
udisksd    2120        root  13r  REG      0,15   4096  30332  /sys/devices/virtual/block/md127/md/sync_action
udisksd    2120        root  14r  REG      0,15   4096  30345  /sys/devices/virtual/block/md127/md/degraded
gmain      2120  2121  root  13r  REG      0,15   4096  30332  /sys/devices/virtual/block/md127/md/sync_action
gmain      2120  2121  root  14r  REG      0,15   4096  30345  /sys/devices/virtual/block/md127/md/degraded
gdbus      2120  2123  root  13r  REG      0,15   4096  30332  /sys/devices/virtual/block/md127/md/sync_action
gdbus      2120  2123  root  14r  REG      0,15   4096  30345  /sys/devices/virtual/block/md127/md/degraded
probing-t  2120  2124  root  13r  REG      0,15   4096  30332  /sys/devices/virtual/block/md127/md/sync_action
probing-t  2120  2124  root  14r  REG      0,15   4096  30345  /sys/devices/virtual/block/md127/md/degraded
cleanup    2120  2125  root  13r  REG      0,15   4096  30332  /sys/devices/virtual/block/md127/md/sync_action
cleanup    2120  2125  root  14r  REG      0,15   4096  30345  /sys/devices/virtual/block/md127/md/degraded
FSTAB ENTRY

/dev/mapper/luks-ab8911f9-b3a4-4f33-8b10-7e52c7217ae9 /raid1 ext4 defaults,x-systemd.device-timeout=0 1 2
BY-UUID

# ls -l /dev/disk/by-uuid/ | grep md127
lrwxrwxrwx. 1 root root 11 Jan 10 19:32 ab8911f9-b3a4-4f33-8b10-7e52c7217ae9 -> ../../md127
Thanks,
Richard
What do you get for:
dmsetup ls
cryptsetup status
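(cryptsetup status needs the mapping name; going by your fstab entry, that would be:

cryptsetup status luks-ab8911f9-b3a4-4f33-8b10-7e52c7217ae9
)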
I see this:
/dev/mapper/luks-ab8911f9-b3a4-4f33-8b10-7e52c7217ae9 /raid1
This suggests an encrypted device-mapper device created on top of the raid1 device, and it's active. So that has to be closed, or wiped with wipefs, before you can stop the raid and then wipe the raid, i.e. you need to tear it down in the reverse order it was created.
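If you'd rather close the encrypted layer instead of wiping it straight away, a minimal sketch of that teardown order (assuming the mapping name from your fstab entry, and unmounting first in case /raid1 is mounted):

umount /raid1                # only needed if it's actually mounted
cryptsetup luksClose luks-ab8911f9-b3a4-4f33-8b10-7e52c7217ae9
mdadm --stop /dev/md127

Once it's gone you'll presumably also want to drop the /raid1 line from /etc/fstab and the ARRAY line from /etc/mdadm.conf, so nothing tries to set it up again at boot.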
You can try this (DOUBLE CHECK DEVICE VALUES):

wipefs --backup -a /dev/md127
mdadm -S /dev/md127
wipefs --backup -a /dev/sd[cd]1
That will back up the LUKS signature and then wipe it, stop the array, then back up and wipe the mdadm superblock signatures. That ought to fix the problem, unless there's also LVM involved here.
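If you want to double-check the device values before running anything destructive, wipefs with no options only lists the signatures it finds and wipes nothing:

wipefs /dev/md127            # should show a crypto_LUKS signature
wipefs /dev/sdc1 /dev/sdd1   # should show linux_raid_member signatures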
The --backup isn't strictly necessary, but it writes out a backup file of each signature it wipes, so you can restore things if you get the command wrong and wipe the wrong device.
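The backups land in your home directory as wipefs-<devname>-<offset>.bak, and restoring one is a dd back to the same offset. A hypothetical example (take the real filename and offset from what wipefs actually wrote):

dd if=~/wipefs-sdc1-0x00001000.bak of=/dev/sdc1 seek=$((0x00001000)) bs=1 conv=notrunc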