Comment 8 for bug 1849682

dann frazier (dannf) wrote :

= Verification =
I see two pieces to this:
 1) The original report, in Comment #1, where the offending patch caused an issue on a system where it shouldn't have - i.e., a raid0 w/ homogeneous member sizes. We were never able to reproduce this in subsequent tests w/ the patch applied. I know sfeole was able to perform the same MAAS install/upgrade w/ the current -proposed kernel (I saw a test report from it), so I think we can confidently say it is not reproducible in that build either.

 2) In configs where this patch *should* prevent a raid0 from assembling (heterogeneous member sizes), I've verified that if I create such an array on an older kernel, then upgrade to the current -proposed kernel, it still starts automatically. Now, of course, I continue to be susceptible to corruption, but that's known and tracked in bug 1850540.

$ cat /proc/version
Linux version 4.15.0-66-generic (buildd@lgw01-amd64-044) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019
$ sudo mdadm --create /dev/md0 --run --metadata=default --homehost=akis --level=0 --raid-devices=2 /dev/vdb1 /dev/vdc1
mdadm: /dev/vdb1 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Thu Oct 31 21:53:40 2019
mdadm: /dev/vdc1 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Thu Oct 31 21:53:40 2019
mdadm: array /dev/md0 started.
$ sudo reboot

$ cat /proc/version
Linux version 4.15.0-68-generic (buildd@lgw01-amd64-037) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #77-Ubuntu SMP Sun Oct 27 06:02:23 UTC 2019
$ cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid0 vdc1[1] vdb1[0]
      1567744 blocks super 1.2 512k chunks

unused devices: <none>
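For anyone reproducing the second case, the distinction the patch cares about is simply whether the raid0 member devices have identical sizes. A minimal sketch of that check (the `same_size` helper is mine, not part of mdadm; on a live system the byte counts would come from `blockdev --getsize64` on the members shown in the transcript above):

```shell
# Hypothetical helper: classify two member sizes (in bytes) as
# homogeneous or heterogeneous. Device names /dev/vdb1 and /dev/vdc1
# are taken from the transcript above.
same_size() {
  if [ "$1" -eq "$2" ]; then
    echo homogeneous
  else
    echo heterogeneous
  fi
}

# On a real system the sizes would be read from the block layer, e.g.:
#   size_b=$(sudo blockdev --getsize64 /dev/vdb1)
#   size_c=$(sudo blockdev --getsize64 /dev/vdc1)
#   same_size "$size_b" "$size_c"
same_size 1073741824 1073741824   # -> homogeneous
```

A heterogeneous result is the configuration that the -proposed kernel should (per this patch's intent) have refused to assemble, and that is tracked for the corruption angle in bug 1850540.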