Checksum
First, md-raid does have consistency/parity checks:
# echo check > /sys/block/md127/md/sync_action
# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[0] sdb1[1]
      1855337472 blocks super 1.2 [2/2] [UU]
      [=================>...] check = 87.8% (1629808384/1855337472) finish=18.7min speed=200062K/sec
      bitmap: 0/14 pages [0KB], 65536KB chunk
unused devices: <none>
#
So data validation (scrubbing) is not an advantage of either, as both have it.
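The btrfs counterpart is a scrub, which reads everything back and verifies it (the mount point here is just an example):
# btrfs scrub start /mnt/point
# btrfs scrub status /mnt/point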
What btrfs has is an advantage over plain ext4 et al.: because it stores duplicate copies (the DUP profile), it can (very rarely) recover from corruption even on a single disk. But that is irrelevant for a RAID1 discussion.
Administration
PRO md-raid: mdadm has integrated monitoring. You can add MAILADDR or PROGRAM lines to your mdadm.conf and have it send mail / trigger a script on any event. A disk failed? Progress of a rebuild? Result of a check? Outcome of important changes? Etc.
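A minimal sketch, with a made-up address and script path: these two lines go into mdadm.conf, and the monitor (normally started as a service by the distro) will send a test alert for every array if you run it with --test:
MAILADDR admin@example.com
PROGRAM /usr/local/bin/md-event-handler
# mdadm --monitor --scan --oneshot --test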
It is also more explicit about how you assemble it: you assemble the md device and then mount whatever is inside it. With btrfs you end up with N block devices carrying the same filesystem UUID, and the first time you mount it you may have to name the other members, e.g. mount -o device=/dev/other /dev/main /mnt/point.
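To make that concrete (device names are just examples): with md you assemble first and then mount the array; with btrfs you either let the kernel scan all members or name them explicitly on mount:
# mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1
# mount /dev/md127 /mnt/point
# btrfs device scan
# mount -o device=/dev/sdc1 /dev/sdb1 /mnt/point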
Layout flexibility
Both can be created on (and have new devices added from) most combinations: directly on an entire disk (/dev/sda) without partitions, on a partition (/dev/sda1), under LVM, under dm-crypt (cryptsetup/LUKS), even on other device-mapper layers (though you probably don't want that), etc.
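For example (disk names assumed), both of these end up as a two-way mirror; the md one gives you a block device you still have to put a filesystem on, the btrfs one is the filesystem:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mkfs.ext4 /dev/md0
# mkfs.btrfs -d raid1 -m raid1 /dev/sdd1 /dev/sde1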
PRO md-raid: the array is an ordinary block device, so you can host anything on top of it: LUKS, other filesystems, whatever.
CON md-raid: resizing the stack underneath it is a pain: you have to grow each member device, then the array itself, then whatever sits on top.
PRO btrfs: you don't have to deal with several layers, because it is all btrfs. Growing after a partition resize is easy.
CON btrfs: if you plan to use dm-crypt/LUKS, it has to sit under btrfs (one LUKS container per member device), and I have no idea what happens there when you hit corruption. With md-raid you can put LUKS between the array and the filesystem, and then it is predictable that the corruption will be fixed.
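A sketch of the two stacks (all names made up). With btrfs you encrypt each member and build the raid1 filesystem on top of the mappings; with md-raid you encrypt the single array device and put any filesystem inside it:
# cryptsetup luksFormat /dev/sdb1
# cryptsetup luksFormat /dev/sdc1
# cryptsetup open /dev/sdb1 luks-b
# cryptsetup open /dev/sdc1 luks-c
# mkfs.btrfs -d raid1 -m raid1 /dev/mapper/luks-b /dev/mapper/luks-c
versus
# cryptsetup luksFormat /dev/md0
# cryptsetup open /dev/md0 luks-md0
# mkfs.ext4 /dev/mapper/luks-md0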
Both are the same in terms of adding more devices and spares.
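For example (device names assumed), adding a third device to each: with md-raid the new device comes in as a spare until you grow the array; with btrfs you add the device and then rebalance so existing data spreads onto it:
# mdadm --add /dev/md0 /dev/sdd1
# mdadm --grow /dev/md0 --raid-devices=3
# btrfs device add /dev/sdd1 /mnt/point
# btrfs balance start /mnt/point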
I won't mention snapshots etc. because you can run btrfs on top of md-raid and have them too.
RAID1 profiles
md-raid RAID1 keeps a full copy on every device. That might be better if you care about redundancy, and worse if you care about performance.
btrfs raid1 guarantees exactly two copies of each chunk, but not one copy on every device!
For example, if you have three disks, md-raid will keep three copies of the data; btrfs raid1 will still keep only two.
You can force more copies if you have 3 or more devices using the newer raid1c3 and raid1c4 profiles.
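A quick sketch (device names and mount point assumed): create a filesystem with three copies of everything, or convert an existing one in place:
# mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/point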
I won't mark which one gets the PRO here, since md-raid is a PRO for redundancy and btrfs is a PRO for performance.
Rough edges
mdadm et al. have rather arcane commands and syntax, but they are generally considered stable.
btrfs, as everyone has mentioned, still has some rough edges even in 2025: for example, you get some scary messages from the tooling:
# btrfs filesystem resize max /mnt/point
Resize device id 1 (/dev/sdb2) from 3.64TiB to max
WARNING: the new size 0 (0.00B) is < 256MiB, this may be rejected by kernel
which is just some weird logic around handling the word "max", but if you look at the kernel log it is actually fine:
BTRFS info (device sdb2): resize device /dev/sdb2 (devid 1) from 4000000770048 to 8100829442048
(Also, keep in mind you must grow each "device" in the RAID, so run btrfs filesystem resize 2:max /mnt/point for each one, increasing the number before :max... and afterwards, remember to run btrfs balance start /mnt/point.)
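Putting it all together, a grow after enlarging both partitions might look like this (mount point assumed, devids taken from the show output):
# btrfs filesystem show /mnt/point
# btrfs filesystem resize 1:max /mnt/point
# btrfs filesystem resize 2:max /mnt/point
# btrfs balance start /mnt/point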
So with mdadm you will be constantly looking up commands and conf file syntax. With btrfs you will be constantly looking up error messages to see if they are relevant :)