Multipath

What is multipath?

In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique whereby there is more than one physical path between the CPU in a computer system and its mass storage devices, through the buses, controllers, switches, and bridge devices connecting them.

Further reading (multipath): http://en.wikipedia.org/wiki/Multipath_I/O

What problem does multipath software solve?
When we have multiple paths to a disk, our operating system views each path as a separate device. For example, if we have 2 paths to a Logical Unit (LUN), Linux will see /dev/sdb and /dev/sdc. We need the operating system to treat these multiple paths as one logical device. We do this with multipath software.

The multipath software will detect multiple paths to the LUN and map these paths to one device which we use like a normal disk.
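
For example, with 2 paths to one LUN the kernel still creates /dev/sdb and /dev/sdc, but the multipath software layers a single device-mapper device over them. A hypothetical sketch (device and mapper names vary by configuration):

ls -l /dev/mapper/
# mpath0 -> ../dm-6    <- one logical device backed by sdb and sdc
# Partition, mkfs, and mount /dev/mapper/mpath0, not sdb or sdc.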

Multipath Architecture Example

The industry standard for a fabric-switched environment is 4 paths to a LUN. We describe this fault-tolerant architecture in the example below:

lhost999: an example server with two HBA ports (A & B)

  • HBA A -> Fabric Switch A
  • HBA B -> Fabric Switch B

SAN: an example SAN with two Controllers, each with two HBA ports

  • Controller 1 -> Fabric Switch A, Fabric Switch B
  • Controller 2 -> Fabric Switch A, Fabric Switch B

This architecture supplies lhost999 with 4 paths to the disk.

We could lose Controller 2 and Fabric Switch A and still have a path between the host and the LUN:

lhost999 -> Fabric Switch B -> Controller 1 -> SAN -> LUN

Multipathing software not only maintains a logical device for the multiple paths, but also gracefully switches between paths during outages.
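
How paths are grouped during normal operation and failover is configurable. A hypothetical /etc/multipath.conf fragment (the value shown is illustrative; consult your array vendor's recommendations):

defaults {
    # multibus: round-robin I/O across all healthy paths
    # failover: use one path at a time, switch only on path failure
    path_grouping_policy multibus
}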

Multipath Architecture Definitions

HBA

In computer hardware, a host controller, host adapter, or host bus adapter connects a host system (the computer) to other network and storage devices.

Further reading (HBA): https://en.wikipedia.org/wiki/Host_adapter

Fabric Switch

Switched fabric, switching fabric, or just fabric, is a network topology where network nodes connect with each other via one or more network switches (particularly via crossbar switches, hence the name).

Further reading (Fabric networking): http://en.wikipedia.org/wiki/Switched_fabric

Install Multipath Software

The following sub-sections will document how to install multipathing software on various operating systems:

Ubuntu

The following procedure will install multipathing software on an Ubuntu 12.04 LTS server:

Install Software:

apt-get install -y multipath-tools
apt-get install -y multipath-tools-boot

Enable the multipathd service if needed.
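
On Ubuntu 12.04 the multipathd daemon is started by the multipath-tools init script. A sketch, assuming the default package layout:

sudo service multipath-tools start
sudo update-rc.d multipath-tools defaults    # start at boot, if not already enabled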

Red Hat

The following procedure will install multipathing software on a Red Hat Enterprise Linux 5 server:

Install Software:

yum install device-mapper-multipath

Enable the multipathd service if needed.
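
On RHEL 5 the service is multipathd, and the stock /etc/multipath.conf blacklists all devices. A sketch, assuming you have already commented out the catch-all blacklist (the blacklist { devnode "*" } stanza) in /etc/multipath.conf:

chkconfig multipathd on
service multipathd start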

Ask for and verify multiple paths

  1. After you Install Multipath Software, ask the SAN admins to present multiple paths to the server or blade profile. They may ask for the server's HBA WWNs (see the sketch after this list).

  2. Wait for SAN admins to respond in the affirmative.

  3. Scan for SAN-presented devices or reboot the host.

  4. Verify 4 paths to the device:

    multipath -ll
    multipath -v2
    
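To answer the WWN question in step 1, the port WWNs of the FC HBA ports can be read from sysfs (one WWN per fc_host; see "Determine scsi_host or fc_host IDs" below):

cat /sys/class/fc_host/host*/port_name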

Admin Commands for SAN device

This section documents a list of useful SAN device administration commands.

List the HBAs

lspci | grep HBA
06:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
06:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
2a:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
2a:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
sudo grep -i qlogic /var/log/dmesg
QLogic Fibre Channel HBA Driver
 QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k
  QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x
 QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k
  QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x
 QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k
  QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x
 QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k
  QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x

View the paths and health

multipath -ll
multipath -v2
mpath0 (350002ac0343408b4) dm-6 3PARdata,VV
[size=100G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:0 sdb 8:16  [active][undef]
 \_ 5:0:0:0 sdc 8:32  [active][undef]

As you can see, this server only has 2 paths to the LUN.

Determine a LUN identifier

ls -hal /dev/disk/by-id/
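
The WWID reported by multipath -ll (350002ac0343408b4 in the example above) shows up in the by-id names, so the logical device can be matched to the LUN. A hypothetical sample:

ls -hal /dev/disk/by-id/ | grep 350002ac0343408b4
# scsi-350002ac0343408b4 -> ../../dm-6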

Determine scsi_host or fc_host IDs

These IDs are used when scanning the bus for devices:

ls /sys/class/scsi_host/ | grep host
ls /sys/class/fc_host/ | grep host
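
Once the fc_host IDs are known, the same sysfs tree also reports link state, which is worth checking before a rescan (host IDs here are examples; substitute the IDs found above):

cat /sys/class/fc_host/host3/port_state
cat /sys/class/fc_host/host4/port_state
# "Online" indicates a healthy fabric link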

Scan for presented devices

Any of the following commands may be used to scan storage interconnects (a loop covering every scsi_host at once is sketched after this list):

  1. Scan or rescan for presented LUNs without rebooting the host:

    echo '- - -' > /sys/class/scsi_host/host0/scan
    echo '- - -' > /sys/class/scsi_host/host1/scan
    
  2. Perform a Loop Initialization Protocol (LIP) which scans the interconnect and causes the SCSI layer to be updated to reflect the devices currently on the bus.

    echo "1" | sudo tee /sys/class/fc_host/host3/issue_lip
    echo "1" | sudo tee /sys/class/fc_host/host4/issue_lip
    
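For convenience, the per-host scan from item 1 can be wrapped in a loop that covers every scsi_host; a sketch using sudo tee so the write runs with privilege:

for h in /sys/class/scsi_host/host*/scan; do
    echo '- - -' | sudo tee "$h" > /dev/null
done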

Issue with PXE Booting Debian

On the UCS blade servers, only one path to the boot LUN should be presented to a host when PXE booting. This only becomes an issue when rebuilding an existing system that had multiple paths configured. To resolve, ask the SAN admins to present only one path to the boot LUN.

Further reading (Debian Installer): https://wiki.debian.org/DebianInstaller/MultipathSupport
