

Showing posts from July 5, 2015

Asymmetric Logical Unit Access (ALUA)

ALUA “A storage controller manages the flow of data between the server and the LUN, assigning two paths, in case one of the paths becomes unavailable.” An active controller is available at all times; a passive controller sits idle until the active controller becomes unavailable. A dictionary definition of asymmetric is “not identical on both sides of a central line”, so Asymmetric Logical Unit Access (ALUA) suggests unequal paths between the server and the LUN. ALUA is implemented on active/active controllers. There are two types of active/active controllers:
- Asymmetric Active/Active
- Symmetric Active/Active
In an Asymmetric Active/Active storage controller architecture (also known as ALUA-compliant devices), there is a path to the LUN via either controller and both controllers are defined as “active”; however, only one of the paths is defined as the optimal (direct) path. This controller is also referred to as the preferred controller. IO requests arriving…
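The optimized/non-optimized distinction above can be sketched as a small model. This is illustrative Python, not VMware code; the path states follow the standard ALUA terms active/optimized (AO) and active/non-optimized (ANO), and the path names are made up:

```python
# Illustrative model of ALUA path preference -- not VMware code.
# Paths through the owning (preferred) controller are active/optimized (AO);
# paths through the peer controller are active/non-optimized (ANO) and are
# used only when no optimized path is available.

def pick_alua_path(paths):
    """paths: list of (name, state) where state is 'AO', 'ANO', or 'down'."""
    optimized = [name for name, state in paths if state == "AO"]
    non_optimized = [name for name, state in paths if state == "ANO"]
    if optimized:
        return optimized[0]
    if non_optimized:
        return non_optimized[0]   # degraded: indirect path via peer controller
    raise RuntimeError("no available path")

paths = [("via-controller-A", "AO"), ("via-controller-B", "ANO")]
print(pick_alua_path(paths))                          # via-controller-A
print(pick_alua_path([("via-controller-B", "ANO")]))  # via-controller-B
```

The second call shows the degraded case: with the preferred controller down, I/O still flows, but only over the unequal (non-optimized) path.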

VMDirectPath I/O

VMDirectPath I/O VMDirectPath I/O allows guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct connection frees up CPU cycles and improves performance for VMware ESXi hosts that utilize high-speed I/O devices, such as 10 GbE devices. A single VM can connect to up to four VMDirectPath PCI/PCIe devices. The disadvantages of using VMDirectPath I/O on a VM include:
- Unavailability or restriction of vSphere features such as vMotion and DRS
- The adapter can no longer be used by any other virtual machine on the ESXi host
A known exception to this is when ESXi is running on Cisco Unified Computing Systems (UCS) through Cisco Virtual Machine Fabric Extender (VM-FEX) distributed switches. DirectPath I/O allows virtual machine access to physical PCI functions on platforms with an I/O Memory Management Unit (IOMMU). The following features are unavailable for virtual machines configured with DirectPath:
- Hot adding and removing of virtual devices…

Claim Rule

Claim Rules Multiple Multipathing Plugins (MPPs) cannot manage the same storage device. As such, claim rules allow you to designate which MPP is assigned to which storage device. Each claim rule identifies the following parameters:
- Vendor/model strings
- Transport, e.g. SATA, IDE, FC
- Adapter, target, or LUN location
- Device driver
Claim rules are defined within /etc/vmware/esx.conf on each ESX/ESXi host and can be managed via the vSphere CLI. Multipathing policies (Fixed, MRU, RR) can be changed within vSphere; however, any claim rule changes are conducted at the command line. "The PSA uses claim rules to determine which multipathing module should claim the paths to a particular device and to manage the device. esxcli storage core claimrule manages claim rules. Claim rule modification commands do not operate on the VMkernel directly. Instead they operate on the configuration file by adding and removing rules." Claim rule commands…
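The matching logic can be sketched as a toy model: rules are checked in ascending rule-number order and the first rule whose criteria all match wins, with a catch-all rule at number 65535 assigning everything else to NMP. This is an illustrative Python sketch, not the PSA implementation, and the vendor names and rule numbers are made-up examples:

```python
# Illustrative model of PSA claim-rule matching -- not VMware code.
# Rules are evaluated in ascending rule-number order; the first rule whose
# criteria all match the device "claims" it for an MPP.

def claim_device(device, rules):
    """Return the MPP name of the first rule whose criteria match the device."""
    for number, criteria, mpp in sorted(rules):
        if all(device.get(key) == value for key, value in criteria.items()):
            return mpp
    return None

# Hypothetical rule set: a vendor-specific third-party MPP at rule 250,
# then the catch-all rule 65535 that hands everything else to NMP.
rules = [
    (65535, {}, "NMP"),
    (250, {"vendor": "EMC", "transport": "fc"}, "PowerPath"),
]

print(claim_device({"vendor": "EMC", "transport": "fc"}, rules))     # PowerPath
print(claim_device({"vendor": "NETAPP", "transport": "fc"}, rules))  # NMP
```

The catch-all at the highest rule number mirrors how a default always exists: a device that no vendor-specific rule claims still ends up managed by NMP.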

Fixed PSP

Fixed – VMW_PSP_FIXED With the Fixed (VMW_PSP_FIXED) path selection policy, the host always uses the preferred path to the LUN when that path is available. If the host cannot access the LUN through the preferred path, it tries one of the alternative paths. The host automatically returns to the previously defined preferred path as soon as it becomes available again. A preferred path is a setting that NMP honors for devices claimed by the VMW_PSP_FIXED path selection policy. The first path discovered and claimed by the PSP is set as the preferred path. This is the default policy for LUNs presented from an active/active storage array. Fixed:
- The default policy used with a SAN that is set to active/active
- Uses the designated preferred path whenever available
- If the preferred path fails, another path is used until the preferred path is restored
- Once the preferred path is restored, the data moves back onto the preferred path
If you want the host to use…
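The fail-over-and-fail-back behavior described above can be sketched in a few lines. This is an illustrative Python model, not VMware code; the path names are made-up runtime names:

```python
# Illustrative sketch of Fixed (VMW_PSP_FIXED) behavior -- not VMware code.
# The preferred path is used whenever it is up; otherwise any working
# alternative is used, and I/O fails back once the preferred path returns.

def select_path_fixed(preferred, paths, state):
    """paths: list of path names; state: dict mapping path -> True if up."""
    if state.get(preferred):
        return preferred
    for path in paths:
        if state.get(path):
            return path
    raise RuntimeError("all paths down")

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
state = {p: True for p in paths}
print(select_path_fixed(paths[0], paths, state))   # preferred path in use

state[paths[0]] = False                            # preferred path fails
print(select_path_fixed(paths[0], paths, state))   # alternative path takes over

state[paths[0]] = True                             # preferred path restored
print(select_path_fixed(paths[0], paths, state))   # I/O fails back automatically
```

The automatic fail-back in the last step is the key difference from MRU, which stays on whatever path it is currently using.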

Most Recently Used (MRU) PSP

Most Recently Used (MRU) – VMW_PSP_MRU “The VMW_PSP_MRU policy selects the first working path, discovered at system boot time. If this path becomes unavailable, the ESXi/ESX host switches to an alternative path and continues to use the new path while it is available. This is the default policy for LUNs presented from an Active/Passive array. ESXi/ESX does not return to the previous path if, or when, it returns; it remains on the working path until it, for any reason, fails.” "If the active path fails, then an alternative path will take over, becoming active. When the original path comes back online, it will now be the alternative path." MRU:
- The ESXi host selects the path that it most recently used
- This is the default used with a SAN that is set to active/passive
- With this policy, a path is chosen and continues to be used as long as it does not fail
- If it fails, another path is used, and that path continues to be used as long as it does not fail, even if the original path becomes available again
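The sticky, no-fail-back behavior can be sketched as a small model. This is illustrative Python, not VMware code; the path names are made-up runtime names:

```python
# Illustrative sketch of MRU (VMW_PSP_MRU) behavior -- not VMware code.
# The policy sticks with the current working path; when it fails it moves
# to an alternative and does NOT fail back when the old path recovers.

class MostRecentlyUsed:
    def __init__(self, paths):
        self.paths = paths
        self.current = paths[0]   # first working path discovered at boot

    def select(self, state):
        """state: dict mapping path -> True if up."""
        if not state.get(self.current):   # current path failed: pick another
            self.current = next(p for p in self.paths if state.get(p))
        return self.current

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
mru = MostRecentlyUsed(paths)
state = {p: True for p in paths}

print(mru.select(state))        # first path in use
state[paths[0]] = False         # active path fails
print(mru.select(state))        # alternative takes over
state[paths[0]] = True          # original path comes back online...
print(mru.select(state))        # ...but MRU stays on the current path
```

Contrast the last step with the Fixed policy: MRU keeps the current path, so the recovered original path becomes the standby alternative.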

Round Robin PSP

Round Robin – VMW_PSP_RR The ESXi host uses an automatic path selection algorithm rotating through all active paths when connecting to active/passive arrays, or through all available paths when connecting to active/active arrays. On supported arrays multiple paths can be active simultaneously; otherwise, the default is to rotate between the paths.
"This is the only path selection policy that uses more than one path during a data transfer session. Data is divided into multiple paths, and the paths are alternated to send data. Even though data is sent on only one path at a time, this increases the size of the pipe and allows more data transfer in the same period of time."
"Round Robin rotates the path selection among all available optimized paths and enables basic load balancing across the paths and fabrics."
The Round Robin policy provides load balancing by cycling I/O requests through all active paths, sending a fixed (but configurable) number of…
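The rotation itself is simple to sketch. This is an illustrative Python model, not VMware code: here every request switches paths, whereas the real policy issues a configurable number of I/Os (historically 1000 by default) down one path before rotating. The path names are made up:

```python
# Illustrative sketch of Round Robin (VMW_PSP_RR) rotation -- not VMware code.
# I/O is cycled across all active paths; this toy version rotates on every
# request, as if the I/O-operations-per-path limit were set to 1.

from itertools import cycle

def round_robin(paths):
    """Yield the next active path for each I/O request, cycling forever."""
    return cycle(paths)

active_paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
rr = round_robin(active_paths)
print([next(rr) for _ in range(4)])
# ['vmhba1:C0:T0:L0', 'vmhba2:C0:T0:L0', 'vmhba1:C0:T0:L0', 'vmhba2:C0:T0:L0']
```

Even though each individual I/O still travels one path, spreading successive requests across paths is what "increases the size of the pipe" in the quote above.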

Path Selection Plug-In (PSP)

PSP Path Selection Plug-ins (PSPs) are sub-plug-ins of the VMware NMP and are responsible for choosing a physical path for I/O requests. “The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device." "The Path Selection Plug-in (PSP) performs the task of selecting which physical path to use for storage transport. The NMP assigns a default PSP from the claim rules based on the SATP associated with the physical device." Since multiple MPPs cannot manage the same storage device, claim rules allow you to designate which MPP is assigned to which storage device. One way to think of the PSP is as the multipathing solution you are using to load balance. There are three Path Selection Plug-in (PSP) pathing policies included in vSphere:
- Fixed (VMW_PSP_FIXED) – The host will use a fixed path that is either set as the preferred path by the administrator, or is the first path discovered by the PSP…

Storage Array Type Plug-In (SATP)

SATP Storage Array Type Plug-ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers a Storage Array Type Plug-in (SATP) for every type of array that VMware supports in the VMware Hardware Compatibility List (HCL). It also provides default SATPs that support non-specific active/active and ALUA storage arrays, and the local SATP for direct-attached devices.
Each SATP accommodates special characteristics of a certain class of storage arrays and can perform the array-specific operations required to detect path state and to activate an inactive path. As a result, the NMP module itself can work with multiple storage arrays without having to be aware of the storage device specifics. The SATP monitors the health of each physical path and can respond to error messages from the storage array to handle path failover. There are third-party SATPs that the storage vendor can provide to take advantage of unique storage pro…

Native Multipathing Plugin (NMP)

Native Multipathing Plugin (NMP) The Native Multipathing Plugin (NMP) is a generic VMkernel Multipathing Plugin (MPP) provided by default from VMware and built into ESX/ESXi. NMP is used when the storage array does not have a third-party MPP solution. What does NMP do?
- Manages physical path claiming and unclaiming
- Registers and de-registers logical storage devices
- Associates a set of physical paths with a specific logical storage device, or LUN
- Processes I/O requests to storage devices: selects an optimal physical path for the request (load balancing) and performs the actions necessary to handle failures and request retries
- Supports management tasks such as abort or reset of logical storage devices
NMP is an extensible module that manages sub-plugins: Storage Array Type Plugins (SATPs) and Path Selection Plugins (PSPs). The Storage Array Type Plugin (SATP) is responsible for han…

MPP - Multipathing Plug-in

MPP - Multipathing Plug-in The top-level plug-in in the Pluggable Storage Architecture (PSA) is the Multipathing Plug-in (MPP). The MPP can be either the built-in MPP, called the Native Multipathing Plug-in (NMP), or a third-party MPP supplied by a storage vendor; in ESXi, storage is accessed through one or the other. The process for connecting a storage array to VMware includes:
- Check the VMware Hardware Compatibility List (HCL) to determine if it is a supported array
- Use the built-in NMP to handle multipathing and load balancing, if the array is in the HCL
- Use a supported third-party MPP if there is a need for additional functionality provided by that MPP
Third-party MPP solutions such as Symantec DMP and EMC PowerPath/VE replace the behavior of the NMP, SATP, and PSP; they take control of path failover and load-balancing operations for the specified storage devices. Third-party M…