Multipath not working on Bionic with NetApp DS14 mk2 Fibre Channel storage

Asked by mr_willem

Dear Ubuntu Developers

I have a NetApp DS14 mk2 Fibre Channel storage shelf connected via an Emulex LightPulse adapter. The SAN is connected with two cables. On Ubuntu 16.04 I had multipath working with this hardware.

Within this SAN I have 12 NetApp FC disks.
I use the SAN such that 11 of the disks form a RAID 6 array, and on top of this array I have one big LVM physical volume.

On the earlier Ubuntu version (16.04) there was already the problem that multipathing was not enabled by default.
I assumed this was because mdadm was started before multipath, so exclusive access to the disks was no longer possible by the time multipath ran.
My workaround for this was to deactivate LVM on the array
lvchange -a n fcbackup
then stop the RAID array
mdadm --stop /dev/md127
and then restart the multipath daemon
/etc/init.d/multipath-tools restart

Afterwards I could assemble the RAID array from the mpath* devices under /dev/mapper/.
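
For reference, here is the whole workaround as one sequence (a sketch of what I run; fcbackup and /dev/md127 are my local names, and the mpath*-part1 partition names are the ones used later in this thread, so they may differ on other setups):

# deactivate LVM (fcbackup) on top of the array
lvchange -a n fcbackup
# stop the auto-assembled RAID so the member disks are no longer held
mdadm --stop /dev/md127
# restart multipathd so it can claim the now-free sd* paths
/etc/init.d/multipath-tools restart
# reassemble the array from the multipath devices instead of the raw sd* paths
mdadm --assemble /dev/md127 /dev/mapper/mpath*-part1
# reactivate LVM
lvchange -a y fcbackup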

Now I had to upgrade to 18.04 because my system disk died,
and even this manual multipathing workaround no longer works.

multipath -v4 finds only one multipath device; it is called "3".

Can someone tell me how to investigate this problem further?
And finally, would it be possible to get the RAID configured automatically during boot, rather than having to enable multipath manually after the system comes up?

Below you will find some debug and config information.

Thank you very much,
Willem Bleymueller

Here is what the multipath -v4 output looks like (the RAID was deactivated beforehand):
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-0/target9:0:0/9:0:0:0/block/sdb
Jul 31 11:27:31 | sdb: dev not found in pathvec
Jul 31 11:27:31 | 8:16: dev_t not found in pathvec
Jul 31 11:27:31 | sdb: udev property ID_WWN whitelisted
Jul 31 11:27:31 | sdb: mask = 0x1f
Jul 31 11:27:31 | sdb: dev_t = 8:16
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-0/target9:0:0/9:0:0:0/block/sdb/size'
Jul 31 11:27:31 | sdb: size = 1172123568
Jul 31 11:27:31 | sdb: vendor = NETAPP
Jul 31 11:27:31 | sdb: product = X292_HVIPC560F15
Jul 31 11:27:31 | sdb: rev = NA04
Jul 31 11:27:31 | sdb: h:b:t:l = 9:0:0:0
Jul 31 11:27:31 | SCSI target 9:0:0 -> FC rport 9:0-0
Jul 31 11:27:31 | sdb: tgt_node_name = 0x201e000cca6fd4fc
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-0/target9:0:0/9:0:0:0/state'
Jul 31 11:27:31 | sdb: path state = running
Jul 31 11:27:31 | sdb: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-0/target9:0:0/9:0:0:0/vpd_pg80'
Jul 31 11:27:31 | sdb: serial = J9WZHM0L
Jul 31 11:27:31 | sdb: get_state
Jul 31 11:27:31 | sdb: detect_checker = yes (setting: multipath internal)
Jul 31 11:27:31 | failed to issue vpd inquiry for pgc9
Jul 31 11:27:31 | sdb: path_checker = tur (setting: multipath internal)
Jul 31 11:27:31 | sdb: checker timeout = 30 s (setting: multipath internal)
Jul 31 11:27:31 | sdb: tur state = up
Jul 31 11:27:31 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 31 11:27:31 | sdb: uid = 3 (udev)
Jul 31 11:27:31 | sdb: detect_prio = yes (setting: multipath internal)
Jul 31 11:27:31 | sdb: prio = const (setting: multipath internal)
Jul 31 11:27:31 | sdb: prio args = "" (setting: multipath internal)
Jul 31 11:27:31 | sdb: const prio = 1
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-0/target9:0:0/9:0:0:0/block/sdb/sdb1
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-1/target9:0:1/9:0:1:0/block/sdd
Jul 31 11:27:31 | sdd: dev not found in pathvec
Jul 31 11:27:31 | 8:48: dev_t not found in pathvec
Jul 31 11:27:31 | sdd: udev property ID_WWN whitelisted
Jul 31 11:27:31 | sdd: mask = 0x1f
Jul 31 11:27:31 | sdd: dev_t = 8:48
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-1/target9:0:1/9:0:1:0/block/sdd/size'
Jul 31 11:27:31 | sdd: size = 1172123568
Jul 31 11:27:31 | sdd: vendor = NETAPP
Jul 31 11:27:31 | sdd: product = X292_HVIPC560F15
Jul 31 11:27:31 | sdd: rev = NA04
Jul 31 11:27:31 | sdd: h:b:t:l = 9:0:1:0
Jul 31 11:27:31 | SCSI target 9:0:1 -> FC rport 9:0-1
Jul 31 11:27:31 | sdd: tgt_node_name = 0x201e000cca75d734
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-1/target9:0:1/9:0:1:0/state'
Jul 31 11:27:31 | sdd: path state = running
Jul 31 11:27:31 | sdd: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-1/target9:0:1/9:0:1:0/vpd_pg80'
Jul 31 11:27:31 | sdd: serial = J9X2U1PL
Jul 31 11:27:31 | sdd: get_state
Jul 31 11:27:31 | sdd: detect_checker = yes (setting: multipath internal)
Jul 31 11:27:31 | failed to issue vpd inquiry for pgc9
Jul 31 11:27:31 | sdd: path_checker = tur (setting: multipath internal)
Jul 31 11:27:31 | sdd: checker timeout = 30 s (setting: multipath internal)
Jul 31 11:27:31 | sdd: tur state = up
Jul 31 11:27:31 | sdd: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 31 11:27:31 | sdd: uid = 3 (udev)
Jul 31 11:27:31 | sdd: detect_prio = yes (setting: multipath internal)
Jul 31 11:27:31 | sdd: prio = const (setting: multipath internal)
Jul 31 11:27:31 | sdd: const prio = 1
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-1/target9:0:1/9:0:1:0/block/sdd/sdd1
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-10/target9:0:10/9:0:10:0/block/sdw
Jul 31 11:27:31 | sdw: dev not found in pathvec
Jul 31 11:27:31 | 65:96: dev_t not found in pathvec
Jul 31 11:27:31 | sdw: udev property ID_WWN whitelisted
Jul 31 11:27:31 | sdw: mask = 0x1f
Jul 31 11:27:31 | sdw: dev_t = 65:96
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-10/target9:0:10/9:0:10:0/block/sdw/size'
Jul 31 11:27:31 | sdw: size = 1172123568
Jul 31 11:27:31 | sdw: vendor = NETAPP
Jul 31 11:27:31 | sdw: product = X292_S15K7560F15
Jul 31 11:27:31 | sdw: rev = NA08
Jul 31 11:27:31 | sdw: h:b:t:l = 9:0:10:0
Jul 31 11:27:31 | SCSI target 9:0:10 -> FC rport 9:0-10
Jul 31 11:27:31 | sdw: tgt_node_name = 0x2000b45253c4c0f8
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-10/target9:0:10/9:0:10:0/state'
Jul 31 11:27:31 | sdw: path state = running
Jul 31 11:27:31 | sdw: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jul 31 11:27:31 | open '/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-10/target9:0:10/9:0:10:0/vpd_pg80'
Jul 31 11:27:31 | sdw: serial = 6SL9A0EH0000N511171T
Jul 31 11:27:31 | sdw: get_state
Jul 31 11:27:31 | sdw: detect_checker = yes (setting: multipath internal)
Jul 31 11:27:31 | failed to issue vpd inquiry for pgc9
Jul 31 11:27:31 | sdw: path_checker = tur (setting: multipath internal)
Jul 31 11:27:31 | sdw: checker timeout = 30 s (setting: multipath internal)
Jul 31 11:27:31 | sdw: tur state = up
Jul 31 11:27:31 | sdw: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 31 11:27:31 | sdw: uid = 3 (udev)
Jul 31 11:27:31 | sdw: detect_prio = yes (setting: multipath internal)
Jul 31 11:27:31 | sdw: prio = const (setting: multipath internal)
Jul 31 11:27:31 | sdw: prio args = "" (setting: multipath internal)
Jul 31 11:27:31 | sdw: const prio = 1
Jul 31 11:27:31 | Discover device /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/host9/rport-9:0-10/target9:0:10/9:0:10:0/block/sdw/sdw1

...
This is reported for every disk. I think the following table is a little suspicious: the uuid is reported as 3 for all of the NetApp drives.

....

===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st v
Hitachi_HTS541612J9SA00_SB2D41E4K4230E 6:0:0:0 sda 8:0 1 undef undef A
3 9:0:0:0 sdb 8:16 1 undef undef N
3 9:0:1:0 sdd 8:48 1 undef undef N
3 9:0:10:0 sdw 65:96 1 undef undef N
3 9:0:11:0 sdx 65:112 1 undef undef N
3 9:0:2:0 sdf 8:80 1 undef undef N
3 9:0:3:0 sdh 8:112 1 undef undef N
3 9:0:4:0 sdj 8:144 1 undef undef N
3 9:0:5:0 sdn 8:208 1 undef undef N
3 9:0:6:0 sdp 8:240 1 undef undef N
3 9:0:7:0 sdq 65:0 1 undef undef N
3 9:0:8:0 sds 65:32 1 undef undef N
3 9:0:9:0 sdu 65:64 1 undef undef N
3 10:0:0:0 sdc 8:32 1 undef undef N
3 10:0:1:0 sde 8:64 1 undef undef N
3 10:0:10:0 sdv 65:80 1 undef undef N
3 10:0:11:0 sdy 65:128 1 undef undef N
3 10:0:2:0 sdg 8:96 1 undef undef N
3 10:0:3:0 sdi 8:128 1 undef undef N
3 10:0:4:0 sdk 8:160 1 undef undef N
3 10:0:5:0 sdl 8:176 1 undef undef N
3 10:0:6:0 sdm 8:192 1 undef undef N
3 10:0:7:0 sdo 8:224 1 undef undef N
3 10:0:8:0 sdr 65:16 1 undef undef N
3 10:0:9:0 sdt 65:48 1 undef undef N
Jul 31 11:27:31 | libdevmapper version 1.02.145 (2017-11-03)
Jul 31 11:27:31 | DM multipath kernel driver v1.13.0
Jul 31 11:27:31 | params = 0 0 24 1 service-time 0 1 2 8:16 1 1 service-time 0 1 2 8:48 1 1 service-time 0 1 2 65:96 1 1 service-time 0 1 2 65:112 1 1 service-time 0 1 2 8:80 1
 1 service-time 0 1 2 8:112 1 1 service-time 0 1 2 8:144 1 1 service-time 0 1 2 8:208 1 1 service-time 0 1 2 8:240 1 1 service-time 0 1 2 65:0 1 1 service-time 0 1 2 65:32 1 1
service-time 0 1 2 65:64 1 1 service-time 0 1 2 8:32 1 1 service-time 0 1 2 8:64 1 1 service-time 0 1 2 65:80 1 1 service-time 0 1 2 65:128 1 1 service-time 0 1 2 8:96 1 1 serv
ice-time 0 1 2 8:128 1 1 service-time 0 1 2 8:160 1 1 service-time 0 1 2 8:176 1 1 service-time 0 1 2 8:192 1 1 service-time 0 1 2 8:224 1 1 service-time 0 1 2 65:16 1 1 servic
e-time 0 1 2 65:48 1 1
Jul 31 11:27:31 | status = 2 0 0 0 24 1 A 0 1 2 8:16 A 0 0 1 E 0 1 2 8:48 A 0 0 1 E 0 1 2 65:96 A 0 0 1 E 0 1 2 65:112 A 0 0 1 E 0 1 2 8:80 A 0 0 1 E 0 1 2 8:112 A 0 0 1 E 0 1
2 8:144 A 0 0 1 E 0 1 2 8:208 A 0 0 1 E 0 1 2 8:240 A 0 0 1 E 0 1 2 65:0 A 0 0 1 E 0 1 2 65:32 A 0 0 1 E 0 1 2 65:64 A 0 0 1 E 0 1 2 8:32 A 0 0 1 E 0 1 2 8:64 A 0 0 1 E 0 1 2 6
5:80 A 0 0 1 E 0 1 2 65:128 A 0 0 1 E 0 1 2 8:96 A 0 0 1 E 0 1 2 8:128 A 0 0 1 E 0 1 2 8:160 A 0 0 1 E 0 1 2 8:176 A 0 0 1 E 0 1 2 8:192 A 0 0 1 E 0 1 2 8:224 A 0 0 1 E 0 1 2 6
5:16 A 0 0 1 E 0 1 2 65:48 A 0 0 1
Jul 31 11:27:31 | 3: disassemble map [0 0 24 1 service-time 0 1 2 8:16 1 1 service-time 0 1 2 8:48 1 1 service-time 0 1 2 65:96 1 1 service-time 0 1 2 65:112 1 1 service-time 0
 1 2 8:80 1 1 service-time 0 1 2 8:112 1 1 service-time 0 1 2 8:144 1 1 service-time 0 1 2 8:208 1 1 service-time 0 1 2 8:240 1 1 service-time 0 1 2 65:0 1 1 service-time 0 1 2
 65:32 1 1 service-time 0 1 2 65:64 1 1 service-time 0 1 2 8:32 1 1 service-time 0 1 2 8:64 1 1 service-time 0 1 2 65:80 1 1 service-time 0 1 2 65:128 1 1 service-time 0 1 2 8:
96 1 1 service-time 0 1 2 8:128 1 1 service-time 0 1 2 8:160 1 1 service-time 0 1 2 8:176 1 1 service-time 0 1 2 8:192 1 1 service-time 0 1 2 8:224 1 1 service-time 0 1 2 65:16
 1 1 service-time 0 1 2 65:48 1 1 ]
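
The uid shown for every NetApp path is taken from the udev property ID_SERIAL (the "uid = 3 (udev)" lines above). It can be checked directly for a single path, for example for sdb:

udevadm info --query=all --name=/dev/sdb | grep ID_SERIAL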

Here is a snippet of dmesg from booting, where it is obvious that the RAID gets assembled before multipath starts; the "error getting device" messages presumably appear because the sd* member disks are already held by md127:

[ 9.876554] sd 9:0:8:0: [sds] Attached SCSI disk
[ 9.877698] sd 10:0:7:0: [sdo] Attached SCSI disk
[ 9.879893] sdy: sdy1
[ 9.882101] sd 9:0:9:0: [sdu] Attached SCSI disk
[ 9.884816] sd 10:0:9:0: [sdt] Attached SCSI disk
[ 9.889349] sd 10:0:11:0: [sdy] Attached SCSI disk
[ 9.889435] sd 10:0:10:0: [sdv] Attached SCSI disk
[ 10.212844] sda: sda1 sda2 sda3
[ 10.213544] sd 6:0:0:0: [sda] Attached SCSI disk
[ 10.363962] md/raid:md127: device sdj1 operational as raid disk 5
[ 10.364065] md/raid:md127: device sdn1 operational as raid disk 6
[ 10.364127] md/raid:md127: device sdd1 operational as raid disk 2
[ 10.364189] md/raid:md127: device sdi1 operational as raid disk 7
[ 10.364252] md/raid:md127: device sde1 operational as raid disk 10
[ 10.364314] md/raid:md127: device sdl1 operational as raid disk 9
[ 10.364375] md/raid:md127: device sdh1 operational as raid disk 4
[ 10.364437] md/raid:md127: device sdg1 operational as raid disk 8
[ 10.364503] md/raid:md127: device sdb1 operational as raid disk 1
[ 10.364565] md/raid:md127: device sdf1 operational as raid disk 3
[ 10.364626] md/raid:md127: device sdm1 operational as raid disk 0
[ 10.369745] md/raid:md127: raid level 6 active with 11 out of 11 devices, algorithm 2
[ 10.382990] md127: detected capacity change from 0 to 5399923654656
[ 10.405082] md127: p1
[ 10.923154] BTRFS: device fsid 56c2ae5a-0445-11e8-8e50-002522bc9884 devid 1 transid 10258 /dev/sda2
[ 10.923798] BTRFS info (device sda2): disk space caching is enabled
[ 10.923863] BTRFS info (device sda2): has skinny extents
[ 11.675236] device-mapper: multipath service-time: version 0.3.0 loaded
[ 11.675582] device-mapper: table: 253:0: multipath: error getting device
[ 11.675646] device-mapper: ioctl: error adding target to table
[ 11.697403] device-mapper: table: 253:0: multipath: error getting device
[ 11.697473] device-mapper: ioctl: error adding target to table
[ 11.718781] device-mapper: table: 253:0: multipath: error getting device
[ 11.718850] device-mapper: ioctl: error adding target to table
[ 11.745371] device-mapper: table: 253:0: multipath: error getting device
[ 11.745440] device-mapper: ioctl: error adding target to table
[ 11.774933] device-mapper: table: 253:0: multipath: error getting device
[ 11.775002] device-mapper: ioctl: error adding target to table
[ 11.804235] device-mapper: table: 253:0: multipath: error getting device
[ 11.804304] device-mapper: ioctl: error adding target to table
[ 11.836200] device-mapper: table: 253:0: multipath: error getting device
[ 11.836269] device-mapper: ioctl: error adding target to table
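
To confirm that md127 is already holding the raw sd* paths at this point (which would explain the "error getting device" lines above), something like this can be used:

cat /proc/mdstat    # shows md127 assembled from the sd*1 members
lsblk /dev/sdb      # shows md127 stacked on top of sdb1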

------------------------------------------------------------------------------------------
Here is my /etc/mdadm/mdadm.conf file. I thought I had disabled automatic RAID assembly by adding AUTO -all, but the array still gets assembled automatically:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE /dev/mapper/mpath*

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR <email address hidden>

# definitions of existing MD arrays

# This configuration was auto-generated on Thu, 26 Apr 2018 19:09:16 +0000 by mkconf
AUTO -all
------------------------------------------------------------------------------------------------------------------------------
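
One thing I am not sure about: as the header of the file says, the initramfs keeps its own copy of mdadm.conf, so AUTO -all would presumably only take effect at boot after refreshing it:

update-initramfs -u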

My /etc/multipath/wwids file looks like this:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/32000b45253c461e9/
/32000b45253c4c0f8/
/3201e000cca6e866c/
/3201e000cca6fd4fc/
/3201e000cca7099d8/
/3201e000cca70b424/
/3201e000cca70b614/
/3201e000cca718934/
/3201e000cca71894c/
/3201e000cca71a1a4/
/3201e000cca71ae28/
/3201e000cca75d734/

When I run multipath -v4, the following line is added to /etc/multipath/wwids:
/3/
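
The bogus entry can presumably be removed again before retrying, e.g.:

# delete the stray /3/ line that multipath -v4 keeps adding
sed -i '\|^/3/$|d' /etc/multipath/wwids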

-----------------------------------------------------------------------------

My /etc/multipath/bindings file looks like this:
mpatha 32000b45253c461e9
mpathb 32000b45253c4c0f8
mpathc 3201e000cca6e866c
mpathd 3201e000cca6fd4fc
mpathe 3201e000cca7099d8
mpathf 3201e000cca70b424
mpathg 3201e000cca70b614
mpathh 3201e000cca718934
mpathi 3201e000cca71894c
mpathj 3201e000cca71a1a4
mpathk 3201e000cca71ae28
mpathl 3201e000cca75d734

----------------------------------------------------
Output of multipath -l:

3 dm-0 NETAPP,X292_HVIPC560F15
size=559G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 9:0:0:0 sdb 8:16 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:1:0 sdd 8:48 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:10:0 sdw 65:96 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:11:0 sdx 65:112 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:2:0 sdf 8:80 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:3:0 sdh 8:112 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:4:0 sdj 8:144 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:5:0 sdn 8:208 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:6:0 sdp 8:240 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:7:0 sdq 65:0 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:8:0 sds 65:32 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 9:0:9:0 sdu 65:64 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:0:0 sdc 8:32 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:1:0 sde 8:64 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:10:0 sdv 65:80 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:11:0 sdy 65:128 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:2:0 sdg 8:96 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:3:0 sdi 8:128 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:4:0 sdk 8:160 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:5:0 sdl 8:176 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:6:0 sdm 8:192 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:7:0 sdo 8:224 active undef unknown
|-+- policy='service-time 0' prio=0 status=enabled
| `- 10:0:8:0 sdr 65:16 active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
  `- 10:0:9:0 sdt 65:48 active undef unknown

Question information

Language: English
Status: Expired
For: Ubuntu multipath-tools
Assignee: No assignee

mr_willem (willem-crossbone) said:
#1

I could somehow find a solution.
I downloaded the source package and changed
#define DEFAULT_UID_ATTRIBUTE ID_SERIAL
to
#define DEFAULT_UID_ATTRIBUTE ID_SERIAL_SHORT
in libmultipath/defaults.h.

Now the disks get identified by ID_SERIAL_SHORT, which is reported as the WWID without the leading 3.
For example, udevadm info --query=all --name=/dev/sdb shows, for the disk with
WWID 3201e000cca6fd4fc:
ID_SERIAL=3
and
ID_SERIAL_SHORT=201e000cca6fd4fc

I could imagine that this is probably not the right way to solve it...
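
A less invasive alternative might be to override uid_attribute only for these disks in /etc/multipath.conf instead of patching the source; this is an untested sketch, with the vendor/product strings taken from the -v4 output above:

devices {
    device {
        vendor        "NETAPP"
        product       "X292_.*"
        uid_attribute "ID_SERIAL_SHORT"
    }
}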

mr_willem (willem-crossbone) said:
#2

And the devices are still not present after booting; they only appear when I disable the RAID and LVM and manually restart multipath afterwards :-(

mr_willem (willem-crossbone) said:
#3

OK, I finally got it running automatically.

I had to play around with mdadm.conf a few times.

The only thing that would still have to be fixed, if other people want to use the NetApp storage with multipath,
is that the devices are identified by ID_SERIAL_SHORT.

Maybe that should be implemented somehow.

If this is ever read by someone with similar problems, the working mdadm.conf is attached below.

It now reads:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE /dev/mapper/mpath*

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR <email address hidden>

# definitions of existing MD arrays
ARRAY /dev/md/backup:netapp metadata=1.2 name=backup:netapp UUID=09850607:3a4a9d35:4ce2c7f7:0fefdf14 devices=/dev/mapper/mpatha-part1,/dev/mapper/mpathb-part1,/dev/mapper/mpathc-part1,/dev/mapper/mpathd-part1,/dev/mapper/mpathe-part1,/dev/mapper/mpathf-part1,/dev/mapper/mpathg-part1,/dev/mapper/mpathh-part1,/dev/mapper/mpathi-part1,/dev/mapper/mpathj-part1,/dev/mapper/mpathk-part1,/dev/mapper/mpathl-part1

# This configuration was auto-generated on Thu, 26 Apr 2018 19:09:16 +0000 by mkconf
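
After changing mdadm.conf like this, the initramfs copy should presumably be refreshed as well, and after a reboot /proc/mdstat should list dm-*/mpath* members instead of raw sd* devices:

update-initramfs -u
# after a reboot:
cat /proc/mdstat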

Launchpad Janitor (janitor) said:
#4

This question was expired because it remained in the 'Open' state without activity for the last 15 days.