{"node_id": "f3e5bfc8-2f95-11f1-adae-e86a64d24d78", "revisions": [{"id": "f3e6a6f9-2f95-11f1-ac1b-e86a64d24d78", "node_id": "f3e5bfc8-2f95-11f1-adae-e86a64d24d78", "user_id": "edc3f576-2f95-11f1-900f-e86a64d24d78", "author": "foxhop", "data": "\r\nMultipath\r\n##########\r\n\r\nWhat is multipath?\r\n In computer storage, multipath I/O is a fault-tolerance and performance\r\n enhancement technique whereby there is more than one physical path between\r\n the CPU in a computer system and it's mass storage devices through the\r\n buses, controllers, switches, and bridge devices connecting them.\r\n \r\n Futher reading (multipath): http://en.wikipedia.org/wiki/Multipath_I/O  \r\n\r\nWhat problem does multipath software solve?\r\n When we have multiple paths to a disk, our operating system views each\r\n path as a separate device.  For example if we have 2 paths to a Logical\r\n Unit (LUN), linux will see */dev/sdb* and */dev/sdc*.  We need the operating\r\n system to treat these multiple paths as one logical device.  We do this\r\n with multipath software.  \r\n\r\nThe multipath software will detect multiple paths to the LUN and map these\r\npaths to one device which we use like a normal disk.\r\n\r\nMultipath Architecture Example\r\n==============================\r\n\r\nThe industry standard for a fabric switched environment is 4 paths to a LUN.\r\nWe describe this fault-tolerant architecture in the example below -\r\n\r\n**lhost999**: an example server with two HBA ports (A & B)\r\n \r\n* HBA *A*\r\n \r\n - Fabric Switch A\r\n\r\n* HBA *B*\r\n\r\n - Fabric Switch B\r\n\r\n**SAN**: an example SAN with two Controllers, each with two HBA ports\r\n\r\n* Controller 1\r\n\r\n - Fabric Switch A\r\n - Fabric Switch B\r\n\r\n* Controller 2\r\n\r\n - Fabric Switch A\r\n - Fabric Switch B\r\n\r\nThis architecture supplies lhost999 with 4 paths to the disk.  
\r\n\r\nWe could lose *Controller 2* and *Fabric Switch A* and still\r\nhave a single path to the disk between the host and the LUN::\r\n\r\n  lhost999 -> Fabric Switch B -> Controller 1 -> SAN -> LUN\r\n\r\nMultipathing software not only maintains a logical device for the multiple\r\npaths, but also gracefully switches between paths during outages.\r\n\r\nMultipath Architecture Definitions\r\n==================================\r\n\r\n**HBA**\r\n In computer hardware, a host controller, host adapter, or host bus adapter\r\n connects a host system (the computer) to other network and storage devices.\r\n\r\n Further reading (HBA):\r\n https://en.wikipedia.org/wiki/Host_adapter\r\n\r\n**Fabric Switch**\r\n Switched fabric, switching fabric, or just fabric, is a network topology\r\n where network nodes connect with each other via one or more network switches\r\n (particularly via crossbar switches, hence the name).\r\n\r\n Further reading (Fabric networking):\r\n http://en.wikipedia.org/wiki/Switched_fabric\r\n\r\n\r\nInstall Multipath Software\r\n==========================\r\n\r\nThe following sub-sections document how to install multipathing software\r\non various operating systems:\r\n\r\n\r\nUbuntu\r\n------\r\n\r\nThe following procedure installs multipathing software on an\r\nUbuntu 12.04 LTS server.\r\n\r\nInstall Software::\r\n\r\n apt-get install -y multipath-tools\r\n apt-get install -y multipath-tools-boot\r\n\r\nEnable the multipathd service if needed.\r\n\r\nRedhat\r\n------\r\n\r\nThe following procedure installs multipathing software on a\r\nRed Hat Enterprise Linux 5 server.\r\n\r\nInstall Software::\r\n\r\n yum install device-mapper-multipath\r\n\r\nEnable the multipathd service if needed.\r\n\r\nAsk for and verify multiple paths\r\n---------------------------------\r\n\r\n#. After you `Install Multipath Software`_, ask the SAN admins to present\r\n   multiple paths to the server or blade profile.
They may ask for\r\n   the server's HBA WWNs.\r\n\r\n#. Wait for the SAN admins to respond in the affirmative.\r\n\r\n#. Scan for SAN presented devices or reboot the host.\r\n\r\n#. Verify 4 paths to the device::\r\n\r\n    multipath -ll\r\n    multipath -v2\r\n\r\nAdmin Commands for SAN device\r\n=============================\r\n\r\nThis section documents a list of useful SAN device administration commands.\r\n\r\n\r\nList the HBAs\r\n-------------\r\n\r\n.. code-block:: bash\r\n\r\n lspci | grep HBA\r\n\r\n.. code-block:: bash\r\n\r\n 06:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)\r\n 06:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)\r\n 2a:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)\r\n 2a:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)\r\n\r\n.. code-block:: bash\r\n\r\n sudo grep -i qlogic /var/log/dmesg\r\n\r\n.. code-block:: bash\r\n\r\n  QLogic Fibre Channel HBA Driver\r\n   QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k\r\n    QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x\r\n   QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k\r\n    QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x\r\n   QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k\r\n    QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x\r\n   QLogic Fibre Channel HBA Driver: 8.03.01.04.05.05-k\r\n    QLogic QLE2462 - QLogic 4GB FC Dual-Port PCI-E HBA for IBM System x\r\n\r\n\r\nView the paths and health\r\n-------------------------\r\n\r\n.. code-block:: bash\r\n\r\n multipath -ll\r\n multipath -v2\r\n\r\n.. code-block:: bash\r\n\r\n mpath0 (350002ac0343408b4) dm-6 3PARdata,VV\r\n [size=100G][features=1 queue_if_no_path][hwhandler=0]\r\n \\_ round-robin 0 [prio=0][active]\r\n  \\_ 3:0:0:0 sdb 8:16  [active][undef]\r\n  \\_ 5:0:0:0 sdc 8:32  [active][undef]\r\n\r\nAs you can see, this server only has 2 paths to the LUN, not the expected 4.\r\n\r\n\r\nDetermine a LUN identifier\r\n--------------------------\r\n\r\n.. code-block:: bash\r\n\r\n ls -hal /dev/disk/by-id/\r\n\r\n\r\nDetermine scsi_host or fc_host IDs\r\n----------------------------------\r\n\r\nThese IDs are used when scanning the bus for devices:\r\n\r\n.. code-block:: bash\r\n\r\n ls /sys/class/scsi_host/ | grep host\r\n ls /sys/class/fc_host/ | grep host\r\n\r\nScan for presented devices\r\n--------------------------\r\n\r\nAny of the following commands may be used to scan storage interconnects:\r\n\r\n #. Scan or rescan for presented LUNs without rebooting the host:\r\n\r\n    .. code-block:: bash\r\n\r\n       echo '- - -' > /sys/class/scsi_host/host0/scan\r\n       echo '- - -' > /sys/class/scsi_host/host1/scan\r\n\r\n #. Perform a Loop Initialization Protocol (LIP), which scans the interconnect\r\n    and causes the SCSI layer to be updated to reflect the devices currently\r\n    on the bus.  Note that ``sudo`` does not apply to a shell redirect, so use\r\n    ``tee`` to write with elevated privileges:\r\n\r\n    .. code-block:: bash\r\n\r\n       echo \"1\" | sudo tee /sys/class/fc_host/host3/issue_lip\r\n       echo \"1\" | sudo tee /sys/class/fc_host/host4/issue_lip\r\n\r\n\r\nIssue with PXE Booting Debian\r\n=============================\r\n\r\nOn the UCS blade servers, only one path to the boot LUN should be presented\r\nto a host when PXE booting.  This only becomes an issue when rebuilding an\r\nexisting system that had multiple paths configured.  To resolve, contact the\r\nSAN admins and ask them to present only one path to the boot LUN.\r\n\r\n Further reading (Debian Installer):\r\n  https://wiki.debian.org/DebianInstaller/MultipathSupport", "source_format": "rst", "revision_number": 1, "created": 1396966479000}], "count": 1}