On Thu, Apr 7, 2016, 5:01 PM Jeffrey Ross <jeff(a)bubble.org> wrote:
I had a raid1 partition with ext4 on it which was empty.
It was /dev/md124, which was made up of /dev/sda1 and /dev/sdb1. I decided
to change this to be a btrfs partition.
Initially I figured I'd simply unmount /dev/md124, do a
"mkfs.btrfs -f -L home2 /dev/md124", and remount the partition. However,
after doing some reading I believe btrfs supports raid1 directly without
using the software raid driver md (?).
Correct.
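As a rough sketch of the full transition (assuming /dev/md124 is
unmounted and you no longer need the md array; the mdadm steps here are
illustrative, not something you've already run):

# umount /dev/md124
# mdadm --stop /dev/md124
# mdadm --zero-superblock /dev/sda1 /dev/sdb1
# mkfs.btrfs -m raid1 -d raid1 -L home2 -f /dev/sda1 /dev/sdb1

Also remember to drop the old md124 entries from /etc/fstab and
mdadm.conf so nothing tries to assemble it at boot.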
So I then tried "mkfs.btrfs -m raid1 -d raid1 -L home2 -f /dev/sdb1
/dev/sda1" and something was successfully created, but this is where I am
confused: is this truly a raid1 partition?
Yes.
It is not block level raid1, however. It's done at the Btrfs chunk
level, where each chunk type will have two stripes (copies). The Btrfs
chunk and stripe are reused terms that don't match prior mdadm usage.
The distinction isn't important day to day, but it's worth noting if
you want to understand how Btrfs does things differently.
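One practical consequence: because the raid level is just a property of
the chunks, you can convert it on a mounted filesystem with a balance.
A hedged example (the mount point /mnt is made up here):

# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

That rewrites existing data and metadata chunks with the new profile,
which is something block-level md raid1 has no equivalent for.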
Secondly, to mount this partition I simply specify something like
"mount /dev/sda1 /mntpoint".
Correct. Or /dev/sdb1 also works. Each device superblock contains a
reference to all other devices used in a multiple device Btrfs volume,
and mount will fail if any are missing. Strictly speaking the member
devices aren't mounted, rather the volume is what's mounted. Be aware
of the duplicate volume UUID gotcha (using dd or lvm snapshots):
https://btrfs.wiki.kernel.org/index.php/Gotchas
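For example, both members report the same filesystem UUID, so any of
these should work (blkid output omitted):

# blkid /dev/sda1 /dev/sdb1
# mount /dev/sdb1 /mntpoint
# mount UUID=635be1e8-31d2-4b5c-b81c-1ec2cd8d9101 /mntpoint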
I can see -
# btrfs filesystem show
Label: 'home2'  uuid: 635be1e8-31d2-4b5c-b81c-1ec2cd8d9101
        Total devices 2 FS bytes used 664.00KiB
        devid    1 size 376.46GiB used 2.01GiB path /dev/sdb1
        devid    2 size 376.46GiB used 2.01GiB path /dev/sda1
To me this looks like two different partitions, but I may be wrong. So,
assuming it is one raid partition, how would I go about having this
auto-mounted in /etc/fstab? I would assume this entry -
UUID=635be1e8-31d2-4b5c-b81c-1ec2cd8d9101 /home2 btrfs defaults 0 0
Correct.
but I'm looking for some confirmation first that I actually have a raid
partition and that I've done everything correctly.
See 'btrfs filesystem df' and 'btrfs filesystem usage'. The 'usage'
subcommand is probably more useful: it's newer and was meant to resolve
the confusion between show and df, as well as combine their information.
show and df are almost considered legacy, but will continue to be around
for a while.
# btrfs fi df /mnt/1
Data, RAID1: total=1.00GiB, used=457.50MiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=592.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B
# btrfs fi us /mnt/1
Overall:
    Device size:                 100.00GiB
    Device allocated:              4.06GiB
    Device unallocated:           95.94GiB
    Device missing:                  0.00B
    Used:                        916.19MiB
    Free (estimated):             48.52GiB    (min: 48.52GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:               16.00MiB    (used: 0.00B)

Data,RAID1: Size:1.00GiB, Used:457.50MiB
   /dev/dm-7                   1.00GiB
   /dev/mapper/VG-testbtr1     1.00GiB

Metadata,RAID1: Size:1.00GiB, Used:592.00KiB
   /dev/dm-7                   1.00GiB
   /dev/mapper/VG-testbtr1     1.00GiB

System,RAID1: Size:32.00MiB, Used:16.00KiB
   /dev/dm-7                  32.00MiB
   /dev/mapper/VG-testbtr1    32.00MiB

Unallocated:
   /dev/dm-7                  47.97GiB
   /dev/mapper/VG-testbtr1    47.97GiB
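If you want one more sanity check, the per-device error counters are
worth a look too (same test mount as above; healthy devices show all
zeros):

# btrfs device stats /mnt/1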
Lastly, if I'm reading correctly the system will NOT automatically mount a
degraded array. How can I force it to automatically mount on a reload even
if the array is degraded?
Device failure notification and recovery is distinctly weak and manual
on Btrfs right now. There are some patches available upstream that are
being tested, but I don't expect them to get into kernel 4.6.
You could use the 'degraded' mount option in fstab, but then it shows
up in the mount options and in kernel messages the same as if you
really were degraded. So you have no idea whether you're actually
degraded unless you check 'btrfs fi show' or 'btrfs fi us'.
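For example, something like this against your /home2 mount will show
whether the option is merely set versus a member actually being absent
(a really missing device shows up as missing in fi show):

# findmnt -o TARGET,SOURCE,OPTIONS /home2
# btrfs fi show /home2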
You really need to know if you've mounted degraded because there are no
automatic rebuilds either. So if devid1 mounts degraded silently and
devid2 appears later, it's not going to get added back in automatically.
Even more fun: if you're making changes to devid1 and not devid2, and
later they get mounted together, devid2 is not automatically caught up
with devid1. You have to scrub for it to catch up. If you don't scrub,
and then later devid1 is late and doesn't get mounted but devid2 mounts
and you write to it, now you have different writes on devid1 and devid2.
If they're later combined again, the file system will totally face
plant; it's generally complete corruption.
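So after any degraded mount, once both devices are present again, the
manual catch-up step is a scrub, e.g. against your /home2 mount:

# btrfs scrub start -Bd /home2
# btrfs scrub status /home2

(-B keeps it in the foreground, -d prints per-device statistics.)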
So anyway, I don't recommend using degraded in fstab. Instead use
nofail, so startup isn't left hanging while waiting for it to mount.
You'll then figure out something is wrong since your stuff isn't where
you expect it at /home2, and you can investigate the problem. If you do
have to mount degraded, then once the missing device is reattached (or
whatever the problem was is fixed), you can scrub to make sure both
copies are on the same page.
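In other words, something like this in fstab, with nofail being the
only change from your entry above:

UUID=635be1e8-31d2-4b5c-b81c-1ec2cd8d9101 /home2 btrfs defaults,nofail 0 0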
Chris Murphy