This can be configured on Red Hat or SUSE distributions.
https://access.redhat.com/solutions/3072701
Package installation:
sysfsutils device-mapper device-mapper-multipath
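For example, on a Red Hat based system these can be installed with yum; on SUSE the multipath code is shipped in the multipath-tools package (package names are distribution dependent):
# yum install sysfsutils device-mapper device-mapper-multipath
# zypper install sysfsutils multipath-tools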
The multipath-tools package contains the following components:
/sbin/multipath: command that scans the system for multipath devices, creates the logical path definitions, and updates the device-mapper kernel maps.
/usr/bin/multipathd: daemon that manages and monitors multipath devices and logs events to syslog.
/sbin/devmap_name: allows renaming devices in the kernel (used with udev).
/sbin/kpartx: command that creates device maps for the partitions of multipath devices.
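For illustration, kpartx can list or add the partition mappings of a multipath map (mpath0 is an example name):
# kpartx -l /dev/mapper/mpath0    # list the mappings that would be created
# kpartx -a /dev/mapper/mpath0    # create the partition mappings (mpath0p1, ...)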
Copy the example configuration file multipath.conf into /etc:
# cp /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults /etc/multipath.conf
Modify the file /etc/multipath.conf:
[root@BKPSRV1 etc]# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated

# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
#       devnode "*"
        devnode "^cciss!c[0-9]d[0-9]*"
}

## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable mulitpathing on these devies, uncomment the
## following lines.
#blacklist_exceptions {
#       device {
#               vendor  "IBM"
#               product "S/390.*"
#       }
#}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}

devices {
        device {
                vendor                "NETAPP "
                product               "LUN"
                path_grouping_policy  group_by_prio
                getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout          "/opt/netapp/santools/mpath_prio_ontap /dev/n"
                features              "1 queue_if_no_path"
                path_checker          readsector0
                failback              immediate
        }
}
Step #3: Enabling multipathing
Enable multipathing at boot with the following commands:
# chkconfig --add multipathd
# chkconfig multipathd on
Or
# mpathconf --enable
To start the daemon immediately:
# modprobe multipath
# modprobe dm-multipath
# modprobe dm-round-robin
# /etc/init.d/multipathd start
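On systemd-based releases (RHEL 7+, recent SUSE), a minimal equivalent is:
# systemctl enable multipathd
# systemctl start multipathd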
Step #4: Verification
Checks:
# lsmod|grep multipath
multipath              10961  0
# lsmod|grep dm
dm_round_robin          4929  1
dm_multipath           21841  2 dm_round_robin
dm_snapshot            19073  0
dm_zero                 3649  0
dm_mirror              32465  0
dm_mod                 68097  16 dm_multipath,dm_snapshot,dm_zero,dm_mirror
# ps -ef|grep multipath
root      3069     1  0 15:00 ?        00:00:00 /sbin/multipathd
Check multipath config:
# /sbin/mpathconf
multipath is disabled
find_multipaths is disabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is chkconfiged off
Enable with user_friendly_names and find_multipaths
# /sbin/mpathconf --enable --find_multipaths y --user_friendly_names y --with_chkconfig y --with_multipathd y
[root@BKPSRV1 etc]# multipath -ll
mpath1 (360a9800050334942614a4f6377436a51) dm-10 NETAPP,LUN
[size=145G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdf 8:80  [active][ready]
\_ round-robin 0 [prio=2][enabled]
 \_ 0:0:1:1 sdd 8:48  [active][ready]
 \_ 1:0:1:1 sdh 8:112 [active][ready]
mpath0 (360a98000503349424b5a4f6376687347) dm-9 NETAPP,LUN
[size=1.0T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
 \_ 0:0:1:0 sdc 8:32  [active][ready]
 \_ 1:0:1:0 sdg 8:96  [active][ready]
\_ round-robin 0 [prio=2][enabled]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sde 8:64  [active][ready]

# dmsetup ls --target=multipath
mpath7  (253, 3)
mpath6  (253, 2)

# dmsetup table
mpath7: 0 35692800 multipath 0 0 1 1 round-robin 0 2 1 8:16 100 8:48 100
mpath6: 0 5760 multipath 0 0 1 1 round-robin 0 2 1 8:0 100 8:32 100
VolGroup00-LogVol01: 0 4063232 linear 104:2 66650496
VolGroup00-LogVol00: 0 66650112 linear 104:2 384
Important: a multipathed disk appears several times under /dev: /dev/dm-N, /dev/mapper/mpathN and /dev/mpath/mpathN. Always create partitions or volume groups on /dev/mapper/mpathN, which is the only persistent device.
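For example, to put the mpath1 LUN shown above under LVM (the volume group name datavg is only illustrative):
# pvcreate /dev/mapper/mpath1
# vgcreate datavg /dev/mapper/mpath1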
To suppress a multipath disk or all:
dmsetup remove <mpathxx>
dmsetup remove_all
# systemctl reload multipathd
For Red Hat or Fedora:
[root@BKPSRV1 etc]# /sbin/mpathconf --find_multipaths y
[root@BKPSRV1 etc]# multipath -v3
Nov 17 11:39:18 | loading /lib64/multipath/libcheckdirectio.so checker
Nov 17 11:39:18 | loading /lib64/multipath/libprioconst.so prioritizer
Nov 17 11:39:18 | sda: not found in pathvec
Nov 17 11:39:18 | sda: mask = 0x3f
Nov 17 11:39:18 | sda: dev_t = 8:0
Nov 17 11:39:18 | sda: size = 104857600
Nov 17 11:39:18 | sda: vendor = AIX
Nov 17 11:39:18 | sda: product = VDASD
Nov 17 11:39:18 | sda: rev = 0001
Nov 17 11:39:18 | sda: h:b:t:l = 0:0:1:0
Nov 17 11:39:18 | sda: path state = running
Nov 17 11:39:18 | sda: 51200 cyl, 64 heads, 32 sectors/track, start at 0
Nov 17 11:39:18 | sda: serial = 000e583a0000d40000000143bb7850cf.17
Nov 17 11:39:18 | sda: get_state
Nov 17 11:39:18 | sda: path checker = directio (controller setting)
Nov 17 11:39:18 | sda: checker timeout = 120000 ms (sysfs setting)
Nov 17 11:39:18 | directio: starting new request
Nov 17 11:39:18 | directio: io finished 4096/0
Nov 17 11:39:18 | sda: state = up
Nov 17 11:39:18 | sda: uid_attribute = ID_SERIAL (internal default)
Nov 17 11:39:18 | sda: uid = SAIX_VDASD_000e583a0000d40000000143bb7850cf.17 (udev)
Nov 17 11:39:18 | sda: detect_prio = 1 (config file default)
Nov 17 11:39:18 | sda: prio = const (controller setting)
Nov 17 11:39:18 | sda: prio args = (null) (controller setting)
Nov 17 11:39:18 | sda: const prio = 1
Nov 17 11:39:18 | sr0: device node name blacklisted
Nov 17 11:39:18 | dm-0: device node name blacklisted
Nov 17 11:39:18 | dm-1: device node name blacklisted
For a complete list of the default configuration values, run:
[root@BKPSRV1 etc]# multipath -t
Then adapt your configuration file (/etc/multipath.conf) and, if needed, add a blacklist for local disks that must not be multipathed:
...
blacklist {
        wwid SAIX_VDASD_000e583a0000d40000000143bb7850cf.17
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(td|hd)[a-z]"
}
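To find the WWID of a local disk to blacklist, the same scsi_id call used as getuid_callout can be run by hand (the device name below is only an example); the value returned is the uid shown in the multipath -v3 trace above:
# /lib/udev/scsi_id --whitelisted --device=/dev/sda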
Then restart multipathing:
[root@BKPSRV1 etc]# systemctl reload multipathd.service
or
[root@BKPSRV1 etc]# service multipathd reload
multipathd interactive command line:
[root@BKPSRV1 etc]# multipathd -k
multipathd> ?
multipath-tools v0.4.9 (05/33, 2016)
CLI commands reference:
 list|show paths
 list|show paths format $format
 list|show status
 list|show daemon
 list|show maps|multipaths
 list|show maps|multipaths status
 list|show maps|multipaths stats
 list|show maps|multipaths format $format
 list|show maps|multipaths topology
 list|show topology
 list|show map|multipath $map topology
 list|show config
 list|show blacklist
 list|show devices
 list|show wildcards
 add path $path
 remove|del path $path
 add map|multipath $map
 remove|del map|multipath $map
 switch|switchgroup map|multipath $map group $group
 reconfigure
 suspend map|multipath $map
 resume map|multipath $map
 resize map|multipath $map
 reset map|multipath $map
 reload map|multipath $map
 disablequeueing map|multipath $map
 restorequeueing map|multipath $map
 disablequeueing maps|multipaths
 restorequeueing maps|multipaths
 reinstate path $path
 fail path $path
 quit|exit
 shutdown
 map|multipath $map getprstatus
 map|multipath $map setprstatus
 map|multipath $map unsetprstatus
 forcequeueing daemon
 restorequeueing daemon
multipathd> show paths
hcil    dev dev_t pri dm_st chk_st dev_st  next_check
0:0:1:0 sda 8:0   1   undef ready  running orphan
multipathd> show devices
available block devices:
    sda devnode whitelisted, monitored
    sr0 devnode blacklisted, unmonitored
    dm-0 devnode blacklisted, unmonitored
    dm-1 devnode blacklisted, unmonitored
multipathd> show daemon
pid 24935 running
multipathd>
/etc/multipath.conf
vendor "IBM" product "2145" path_grouping_policy "group_by_prio" path_selector "service-time 0" # Used by Red Hat 7.x prio "alua" path_checker "tur" failback "immediate" no_path_retry 5 rr_weight uniform rr_min_io_rq "1" dev_loss_tmo 120
DM-MPIO for dev_loss_tmo
dev_loss_tmo controls how long the SCSI layer waits after a problem is detected on an FC port before removing the device from the system. If it is set to infinity, the SCSI layer can wait up to 2147483647 seconds (about 68 years). The default value is determined by the OS.
All Linux hosts should have a dev_loss_tmo setting; the value, in seconds, is how long to wait before the device/paths are pruned. A duration of 120-150 seconds is suggested, but longer values are also supported.
Be careful not to set it too low: once paths are pruned they must be rediscovered, which may require a manual rescan later. With a suitable timeout, the host should be able to re-add the paths automatically when the SVC nodes are restored.
If the timeout is too short, for example 20 seconds, the inquiry may time out before the paths are ready.
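The current value can be checked and, if needed, overridden at runtime through sysfs (the rport name below is only an example; the multipath.conf device section above already sets dev_loss_tmo to 120):
# cat /sys/class/fc_remote_ports/rport-*/dev_loss_tmo
# echo 150 > /sys/class/fc_remote_ports/rport-1:0-0/dev_loss_tmo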
# multipathd -k
multipathd> show multipaths status
name   failback  queueing paths dm-st  write_prot
mpathb immediate 18 chk   2     active rw
mpathf immediate 18 chk   2     active rw
mpathe immediate 18 chk   2     active rw
mpatha immediate 18 chk   2     active rw
mpathg -         -        2     active rw
mpathh -         -        2     active rw
mpathj -         -        2     active rw
mpathi -         -        2     active rw
Add to /etc/multipath.conf:
devices { device { vendor "HP" product "MSA 2040 SAS" path_grouping_policy group_by_prio getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n" path_selector "round-robin 0" path_checker tur features "0" hardware_handler "0" prio alua failback immediate rr_weight uniform no_path_retry 18 rr_min_io 100 rr_min_io_rq 1 } }
Now:
multipathd> show multipaths status
name   failback  queueing paths dm-st  write_prot
mpathb immediate 18 chk   2     active rw
mpatha immediate 18 chk   2     active rw
mpathe immediate 18 chk   2     active rw
mpathf immediate 18 chk   2     active rw
mpathh immediate 18 chk   2     active rw
mpathg immediate 18 chk   2     active rw
mpathj immediate 18 chk   2     active rw
mpathi immediate 18 chk   2     active rw
Installing OS with Multipath Support, example for Ubuntu
Install the multipath-tools and multipath-tools-boot packages.
At the installer prompt
install disk-detect/multipath/enable=true
If multipath devices are found, they will show up as /dev/mapper/mpath<X> during installation.
https://help.ubuntu.com/lts/serverguide/multipath-admin-and-troubleshooting.html
# dmsetup remove -f [map name]
# dmsetup remove -f 360060e80166bac0000016bac000000da
# multipath -f [LUN name]
# multipath -f 360060e80166bac0000016bac000000da
This is dramatically simplified by the use of UUIDs to identify devices as an intrinsic label. Simply install multipath-tools-boot and reboot. This will rebuild the initial ramdisk and give multipath the opportunity to build its paths before the root file system is mounted by UUID.
Whenever multipath.conf is updated, the initrd should be rebuilt as well, by executing:
update-initramfs -u -k all
The reason is that multipath.conf is copied into the ramdisk and is integral to determining the available devices for grouping via its blacklist and device sections.
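On Red Hat/CentOS the initramfs is rebuilt with dracut instead (see the links below); a minimal sketch:
# dracut -f -v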
The procedure is exactly the same as illustrated in the Ubuntu server guide section "Moving root File Systems from a Single Path to a Multipath Device" (see the link above).
http://fibrevillage.com/storage/10-device-mapper-multipath-configuration-on-linux
https://www.thegeekdiary.com/how-to-rebuild-the-initramfs-with-multipath-in-centos-rhel-6-and-7/
https://access.redhat.com/solutions/3072701
https://access.redhat.com/discussions/2158911
If LVM sees both the individual sd devices and the multipath device, it reports duplicate PVs:
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: PV xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxx on /dev/sdx was already found on /dev/sdy.
WARNING: PV xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxx on /dev/sdy was already found on /dev/sdz.
WARNING: PV xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxx on /dev/sdz was already found on /dev/mapper/mpatha.
# cat /etc/lvm/lvm.conf | grep filter
filter = [ "a|/dev/mapper/mpath*|", "a|/dev/sdb3|", "r|.*|" ]
global_filter = [ "a|/dev/mapper/mpath*|", "a|/dev/sdb3|", "r|.*|" ]
Use the following command to find the right filter, and update the file /etc/lvm/lvm.conf:
[root@redhat1:/etc/lvm/# pvs -a --config 'devices { filter = [ "a|.*|" ] }' --noheadings \
    -opv_name,fmt,vg_name | awk 'BEGIN { f = ""; } \
    NF == 3 { n = "\42a|"$1"|\42, "; f = f n; } END \
    { print "Suggested filter line for /etc/lvm/lvm.conf:\n \
    filter = [ "f"\"r|.*|\" ]" }'
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdd3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdf3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdh3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdj3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdl3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdn3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU on /dev/sdp3 was already found on /dev/sdb3.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
WARNING: PV iLyQd5-vSRJ-k84X-Jpx8-Z3dM-k97W-xFV1kU prefers device /dev/sdb3 because device is used by LV.
Suggested filter line for /etc/lvm/lvm.conf:
  filter = [ "a|/dev/mapper/mpatha|", "a|/dev/sdb3|", "a|/dev/sdd3|", "a|/dev/sdf3|", "a|/dev/sdh3|", "a|/dev/sdj3|", "a|/dev/sdl3|", "a|/dev/sdn3|", "a|/dev/sdp3|", "r|.*|" ]
[root@redhat1:/etc/lvm/# vi /etc/lvm/lvm.conf
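After updating the filter, refresh the LVM cache and verify that the duplicates are gone, as the warnings above suggest:
# pvscan --cache
# pvs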
Unable to remove the multipath device after unmapping the LUN from the server.
Attempting to flush a multipath map with “multipath -f” or “multipath -F” results in “map in use”:
# multipath -f mpath7
mpath7: map in use
Resolution
Locate any subsystem or process holding the multipath device open. See the diagnostic steps below for possible tools and techniques. For any subsystem or process holding the multipath device open, stop the process, or issue commands to release the multipath device.
Some examples of possible holders of a multipath device and the commands to release it:
A filesystem exists on the multipath device and is currently mounted. Unmount the filesystem and, if it exists in /etc/fstab, remove it.
One or more partition mappings still exist on the multipath device. Use "kpartx -d" on the multipath device to remove the partition mappings.
The multipath device was used by LVM and still has device-mapper state in the kernel. Use "lvchange -an" to deactivate any logical volumes associated with the multipath device. A list of logical volumes associated with the multipath device may be found by examining the output of "lvs -o +devices". If "lvchange -an" fails, the logical volume is only partially removed, or there are blocked processes with I/O outstanding on the device, use "dmsetup remove -f" followed by "dmsetup clear" on the multipath device. See the dmsetup man page for a full explanation of these commands.
Once all holders of the device have been removed, the multipath device should be flushed with "multipath -f".
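For example, a typical release sequence for the mpath7 map shown above (the mount point and volume group names are only illustrative):
# umount /mnt/data                # unmount any filesystem on the map
# kpartx -d /dev/mapper/mpath7    # remove partition mappings
# lvchange -an datavg             # deactivate LVs using the device
# multipath -f mpath7             # flush the now-unused map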
Root Cause
The multipath device was held open by at least one process or subsystem.
Check with:
Use 'lsof' to attempt to find any process holding the device open.
Check 'dmsetup' output for any device-mapper maps that depend on the multipath device.
Check if there is a device node for mpath7 under /dev/mapper/mpath7.
Check the output of multipath -v4 -ll.
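Possible command forms for these checks, using the mpath7 example above:
# lsof /dev/mapper/mpath7
# dmsetup ls --tree
# ls -l /dev/mapper/mpath7
# multipath -v4 -ll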
Path offline
[root@lnxa087 scripts]# /usr/sbin/multipath -ll
mpatha (36005076801818664680000000000062d) dm-0 IBM ,2145
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 1:0:2:0 sdb 8:16  failed faulty offline
| |- 2:0:2:0 sdh 8:112 failed faulty offline
| |- 3:0:2:0 sde 8:64  failed faulty offline
| `- 4:0:2:0 sdg 8:96  failed faulty offline
`-+- policy='service-time 0' prio=10 status=active
  |- 1:0:0:0 sda 8:0   active ready running
  |- 2:0:0:0 sdc 8:32  active ready running
  |- 3:0:0:0 sdd 8:48  active ready running
  `- 4:0:0:0 sdf 8:80  active ready running
[root@lnxa087 scripts]# pvs
  /dev/sdb: open failed: No such device or address
  /dev/sde: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdh: open failed: No such device or address
  PV                  VG     Fmt  Attr PSize  PFree
  /dev/mapper/mpatha3 centos lvm2 a--  38.99g    0
First try to rescan:
[root@lnxa087 scripts]# for i in $(ls /sys/class/scsi_host/host*/scan); do echo "- - -" > $i; done
If that does not succeed, check the adapter status (requires the sysfsutils package):
[root@lnxa087 etc]# systool -c fc_host -v
Class = "fc_host"

  Class Device = "host1"
  Class Device path = "/sys/devices/vio/30000004/host1/fc_host/host1"
    dev_loss_tmo        = "300"
    fabric_name         = "0xc0507608249e00ac"
    issue_lip           = <store method only>
    maxframe_size       = "2048 bytes"
    node_name           = "0xc0507608249e00ac"
    port_id             = "0xbf1b13"
    port_name           = "0xc0507608249e00ac"
    port_state          = "Online"
    port_type           = "NPIV VPORT"
    speed               = "8 Gbit"
    supported_classes   = "Class 2, Class 3"
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              =

    Device = "host1"
    Device path = "/sys/devices/vio/30000004/host1"
      uevent              = "DEVTYPE=scsi_host"
...
If the adapters are OK, then remove the failed paths:
[root@lnxa087 etc]# rescan-scsi-bus.sh -r
Rescan
[root@lnxa087 etc]# sg_map -x
/dev/sg1  1 0 0 0  0  /dev/sda
/dev/sg3  2 0 0 0  0  /dev/sdc
/dev/sg4  3 0 0 0  0  /dev/sdd
/dev/sg6  4 0 0 0  0  /dev/sdf
[root@lnxa087 scripts]# for i in $(ls /sys/class/scsi_host/host*/scan); do echo "- - -" > $i; done
Check again:
[root@lnxa087 etc]# sg_map -x
/dev/sg0  1 0 2 0  0  /dev/sdb
/dev/sg1  1 0 0 0  0  /dev/sda
/dev/sg2  2 0 2 0  0  /dev/sde
/dev/sg3  2 0 0 0  0  /dev/sdc
/dev/sg4  3 0 0 0  0  /dev/sdd
/dev/sg5  3 0 2 0  0  /dev/sdg
/dev/sg6  4 0 0 0  0  /dev/sdf
/dev/sg7  4 0 2 0  0  /dev/sdh
[root@lnxa087 etc]# /usr/sbin/multipath -ll
mpatha (36005076801818664680000000000062d) dm-0 IBM ,2145
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:2:0 sdb 8:16  active ready running
| |- 2:0:2:0 sde 8:64  active ready running
| |- 3:0:2:0 sdg 8:96  active ready running
| `- 4:0:2:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:0 sda 8:0   active ready running
  |- 2:0:0:0 sdc 8:32  active ready running
  |- 3:0:0:0 sdd 8:48  active ready running
  `- 4:0:0:0 sdf 8:80  active ready running
Missing PV after path offline
[root@linux01]/usr/bin # vgs
  WARNING: Device for PV nNtH6F-Ppre-ROWM-DH8e-vrHe-8f08-PJHMjO not found or rejected by a filter.
  WARNING: Device for PV yCgOKm-oDZO-lSns-11fT-K11G-1025-xEHww2 not found or rejected by a filter.
  WARNING: Device for PV nNtH6F-Ppre-ROWM-DH8e-vrHe-8f08-PJHMjO not found or rejected by a filter.
  Couldn't find device with uuid nNtH6F-Ppre-ROWM-DH8e-vrHe-8f08-PJHMjO.
Clean up the LVM device cache using:
[root@linux01]/usr/bin # pvscan --cache
[root@linux01]/usr/bin # pvscan