
Showing posts from 2015

View Path Selection Policy

View Path Selection Policy In Use. Via the vSphere Client: connect to the vSphere Client, select a host, click Configuration, select Storage, select a datastore, and view Datastore Details to see the path selection policy in use, for example "Fixed (VMware)". Via the vSphere Web Client: browse to Datastores in the vSphere Web Client navigator, select the datastore, click the Manage tab, and click Settings. Click Connectivity and Multipathing. If the datastore is shared, select a host to view multipathing details for its devices. Under Multipathing Details, review the multipathing policies and paths for the storage device that backs your datastore.
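The same information is available from the ESXi command line. A minimal sketch, assuming ESXi Shell or SSH access; naa.xxx is a placeholder for a real device identifier:

  # List every device claimed by the NMP; the "Path Selection Policy" field shows the PSP in use
  esxcli storage nmp device list
  # Restrict the output to a single device
  esxcli storage nmp device list --device naa.xxx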

Display Storage Devices for a Host

Display Storage Devices for a Host in the vSphere Web Client. Display all storage devices available to a host. If you use any third-party multipathing plug-ins, the storage devices available through those plug-ins also appear on the list. The Storage Devices view allows you to list the host's storage devices, analyze their information, and modify properties. Procedure: browse to the host in the vSphere Web Client navigator, click the Manage tab, and click Storage. Click Storage Devices. All storage devices available to the host are listed under Storage Devices. To view details for a specific device, select the device from the list. Use the tabs under Device Details to access additional information and modify properties for the selected device.
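The same inventory can be pulled from the ESXi command line. A minimal sketch, assuming ESXi Shell or SSH access; naa.xxx is a placeholder identifier:

  # List all storage devices visible to the host, with size, type, and display name
  esxcli storage core device list
  # Detailed view of a single device
  esxcli storage core device list --device naa.xxx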

Migrate VM with svMotion

Migrate a Virtual Machine with Storage vMotion in the vSphere Web Client. Use migration with Storage vMotion to relocate a virtual machine's configuration file and virtual disks while the virtual machine is powered on. You can change the virtual machine's execution host during a migration with Storage vMotion. Prerequisites: ensure that you are familiar with the requirements for Storage vMotion. Required privilege: Resource.Migrate powered on virtual machine. Procedure: right-click the virtual machine and select Migrate. (To locate a virtual machine, select a datacenter, folder, cluster, resource pool, host, or vApp, click the Related Objects tab, and click Virtual Machines.) Select Change datastore and click Next. Select the format for the virtual machine's disks. Select a virtual machine storage policy from the VM Storage Policy drop-down menu. Storage policies specify storage requirements for applications that run on the virtual machine.

Storage I/O Control (SIOC)

SIOC VMware vSphere Storage I/O Control (SIOC) provides I/O prioritization of virtual machines running on a cluster of ESXi hosts with access to shared storage. It extends the constructs of shares and limits, which existed for CPU and memory, to manage storage utilization. Use SIOC to configure rules and policies that specify the business priority of each virtual machine using shares and limits. When I/O congestion is detected, Storage I/O Control dynamically allocates the available I/O resources to virtual machines according to those rules and policies, improving service levels and consolidation ratios. At a basic level, SIOC monitors the end-to-end latency of a datastore. When there is congestion, SIOC reduces the latency by throttling back virtual machines that are using excessive I/O. SIOC uses the share values assigned to each virtual machine's VMDKs to prioritize access to the datastore. The purpose of SIOC is to address the noisy neighbor problem, i.e. a single virtual machine consuming a disproportionate amount of I/O and degrading the performance of other virtual machines on the same datastore.

Universally Unique Identifier (UUID)

UUID A Universally Unique Identifier (UUID) is a 16-octet (128-bit) number. In its canonical form, a UUID is represented by 32 lowercase hexadecimal digits, displayed in five hyphen-separated groups in the form 8-4-4-4-12. In general, a UUID is used to uniquely identify an object or entity on the Internet. VMware storage architecture has multiple unique identifiers. NAA & EUI (most common): Network Address Authority & Extended Unique Identifier; guaranteed to be unique to the LUN; the preferred method of identifying LUNs; generated by the storage device. MPX (local datastores): for devices that do not provide an NAA number, ESXi generates an MPX identifier; represents the local LUN or disk; takes the form mpx.vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>, e.g. mpx.vmhba33:C0:T1:L0; can be used in exactly the same way as the NAA identifier. VML: can be used interchangeably with the NAA identifier and the MPX identifier.
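These identifiers are visible on the host itself. A minimal sketch, assuming ESXi Shell or SSH access:

  # Device nodes are exposed under /vmfs/devices/disks using their naa., mpx., and vml. identifiers
  ls /vmfs/devices/disks/
  # The same identifiers appear in the storage device inventory
  esxcli storage core device list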

SCSI Reservations & Atomic Test and Set (ATS)

SCSI Reservations SCSI reservations are used to control access to a shared SCSI device such as a LUN. An initiator or host sets a reservation (lock) on a LUN in order to prevent another host from making changes to it. This is similar to the file-locking concept. A SCSI reservation conflict occurs if a host tries to access a datastore that was locked by another host. A Logical Unit Number (LUN) is an individual, unique, block-based storage device; the term LUN is often used interchangeably with disk and datastore, depending on the context. In a shared storage environment, when multiple hosts access the same Virtual Machine File System (VMFS) datastore, specific locking mechanisms are used. These locking mechanisms prevent multiple hosts from concurrently writing to the metadata and ensure that no data corruption occurs. SCSI reservation is a technique that manages disk contention by preventing I/O on an entire LUN for any ESXi host or VM other than the one holding the reservation.
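On VAAI-capable arrays, the Atomic Test and Set (ATS) primitive replaces SCSI reservations for VMFS locking. Whether a device supports ATS can be checked from the command line; a minimal sketch, with naa.xxx as a placeholder:

  # Show VAAI primitive support for a device; "ATS Status: supported" indicates hardware-assisted locking
  esxcli storage core device vaai status get --device naa.xxx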

Asymmetric Logical Unit Access (ALUA)

ALUA “A storage controller manages the flow of data between the server and the LUN, assigning two paths, in case one of the paths becomes unavailable.” An active controller is available at all times; a passive controller sits idle until the active controller becomes unavailable. A dictionary definition of asymmetric is “not identical on both sides of a central line”. Asymmetric Logical Unit Access (ALUA) therefore suggests unequal paths between the server and the LUN. ALUA is implemented on active/active controllers. There are two types of active/active controllers: asymmetric active/active and symmetric active/active. In an asymmetric active/active storage controller architecture (also known as ALUA-compliant devices), there is a path to the LUN via either controller and both controllers are defined as “active”; however, only one of the paths is defined as an optimal (direct) path. This controller is also referred to as the preferred controller. I/O requests arriving on the non-optimal path must be passed across to the owning (preferred) controller, which adds latency.
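The ALUA state of each path is visible from the ESXi command line. A minimal sketch, with naa.xxx as a placeholder identifier:

  # Show which SATP claims the device; ALUA-compliant arrays are typically claimed by VMW_SATP_ALUA
  esxcli storage nmp device list --device naa.xxx
  # Show each path and its group state (active vs. active unoptimized)
  esxcli storage nmp path list --device naa.xxx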

VMDirectPath I/O

VMDirectPath I/O allows guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct connection frees up CPU cycles and improves performance for VMware ESXi hosts that utilize high-speed I/O devices, such as 10 GbE adapters. A single VM can connect to up to four VMDirectPath PCI/PCIe devices. The disadvantages of using VMDirectPath I/O on a VM include the unavailability or restriction of vSphere features such as vMotion and DRS, and the fact that the adapter can no longer be used by any other virtual machine on the ESXi host. A known exception is when ESXi is running on Cisco Unified Computing Systems (UCS) through Cisco Virtual Machine Fabric Extender (VM-FEX) distributed switches. DirectPath I/O allows virtual machine access to physical PCI functions on platforms with an I/O Memory Management Unit. The following features are unavailable for virtual machines configured with DirectPath: hot adding and removing of virtual devices, among others.
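Identifying a candidate device for passthrough can be done from the ESXi command line; enabling passthrough itself is done in the vSphere Client and requires a host reboot. A minimal sketch:

  # List the physical PCI devices on the host, including address, vendor, and device name
  esxcli hardware pci list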

Claim Rule

Claim Rules Multiple Multipathing Plug-ins (MPPs) cannot manage the same storage device, so claim rules allow you to designate which MPP is assigned to which storage device. Each claim rule identifies paths by the following parameters: vendor/model strings; transport, e.g. SATA, IDE, FC; adapter, target, or LUN location; device driver. Claim rules are defined within /etc/vmware/esx.conf on each ESX/ESXi host and can be managed via the vSphere CLI. Multipath policies (Fixed, MRU, RR) can be changed within vSphere, but any claim rule changes are made at the command line. "The PSA uses claim rules to determine which multipathing module should claim the paths to a particular device and to manage the device. esxcli storage core claimrule manages claim rules. Claim rule modification commands do not operate on the VMkernel directly. Instead they operate on the configuration file by adding and removing rules." Claim rule commands are sketched below.
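A minimal sketch of the claim rule workflow via esxcli; the rule number and vendor/model strings are placeholders:

  # List the claim rules currently defined and loaded
  esxcli storage core claimrule list
  # Add a rule assigning devices from a given vendor/model to the NMP
  esxcli storage core claimrule add --rule 500 --type vendor --vendor "VendorX" --model "ModelY" --plugin NMP
  # Load the updated rules from the configuration file into the VMkernel, then apply them
  esxcli storage core claimrule load
  esxcli storage core claimrule run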

Fixed PSP

Fixed – VMW_PSP_FIXED With the Fixed (VMW_PSP_FIXED) path selection policy, the host always uses the preferred path to the LUN when that path is available. If the host cannot access the LUN through the preferred path, it tries one of the alternative paths. The host automatically returns to the previously defined preferred path as soon as it becomes available again. A preferred path is a setting that the NMP honors for devices claimed by the VMW_PSP_FIXED path selection policy. The first path discovered and claimed by the PSP is set as the preferred path. This is the default policy for LUNs presented from an active/active storage array. Fixed: the default policy used with a SAN that is set to active/active; uses the designated preferred path whenever it is available; if the preferred path fails, another path is used until the preferred path is restored; once the preferred path is restored, the data moves back onto the preferred path. If you want the host to use a particular preferred path, specify it manually.
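Setting the policy and the preferred path from the command line; a minimal sketch with placeholder device and path names:

  # Claim the device with the Fixed path selection policy
  esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_FIXED
  # Designate the preferred path for that device
  esxcli storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba2:C0:T0:L1
  # Confirm the preferred path setting
  esxcli storage nmp psp fixed deviceconfig get --device naa.xxx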

Most Recently Used (MRU) PSP

Most Recently Used (MRU) – VMW_PSP_MRU “The VMW_PSP_MRU policy selects the first working path, discovered at system boot time. If this path becomes unavailable, the ESXi/ESX host switches to an alternative path and continues to use the new path while it is available. This is the default policy for LUNs presented from an Active/Passive array. ESXi/ESX does not return to the previous path if, or when, it returns; it remains on the working path until it, for any reason, fails.” "If the active path fails, then an alternative path will take over, becoming active. When the original path comes back online, it will now be the alternative path." MRU: the ESXi host selects the path that it most recently used. This is the default used with a SAN that is set to active/passive. With this policy, a path is chosen and continues to be used so long as it does not fail. If it fails, another path is used, and that path continues to be used so long as it does not fail, even if the original path becomes available again.
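Assigning MRU to a device from the command line; a minimal sketch with a placeholder identifier:

  # Claim the device with the Most Recently Used path selection policy
  esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_MRU
  # Verify the working path currently in use
  esxcli storage nmp device list --device naa.xxx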

Round Robin PSP

Round Robin - VMW_PSP_RR The ESXi host uses an automatic path selection algorithm that rotates through all active paths when connecting to active/passive arrays, or through all available paths when connecting to active/active arrays. On supported arrays multiple paths can be active simultaneously; otherwise, the default is to rotate between the paths. "This is the only path selection policy that uses more than one path during a data transfer session. Data is divided into multiple paths, and the paths are alternated to send data. Even though data is sent on only one path at a time, this increases the size of the pipe and allows more data transfer in the same period of time." "Round Robin rotates the path selection among all available optimized paths and enables basic load balancing across the paths and fabrics." The Round Robin policy provides load balancing by cycling I/O requests through all active paths, sending a fixed (but configurable) number of I/O requests down each path before switching to the next.
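Assigning Round Robin and tuning how often it switches paths from the command line; a minimal sketch with a placeholder identifier (by default the policy switches after 1,000 I/Os per path):

  # Claim the device with the Round Robin path selection policy
  esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR
  # Switch paths after every I/O instead of the default 1,000 (a tuning some arrays recommend)
  esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxx --type iops --iops 1
  # Check the current Round Robin settings for the device
  esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxx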

Path Selection Plug-In (PSP)

PSP Path Selection Plug-ins (PSPs) are subplug-ins of the VMware NMP and are responsible for choosing a physical path for I/O requests. “The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device." "The Path Selection Plug-in (PSP) performs the task of selecting which physical path to use for storage transport. The NMP assigns a default PSP from the claim rules based on the SATP associated with the physical device." Since multiple MPPs cannot manage the same storage device, claim rules allow you to designate which MPP is assigned to which storage device. One way to think of the PSP is as the path selection algorithm used to load balance across paths. There are three Path Selection Plug-in (PSP) pathing policies included in vSphere. Fixed (VMW_PSP_FIXED): the host uses a fixed path that is either set as the preferred path by the administrator or is the first path discovered and claimed by the PSP.
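The PSPs available on a host, and the one assigned to each device, can be listed from the command line; a minimal sketch:

  # List the Path Selection Plug-ins installed on the host
  esxcli storage nmp psp list
  # Show which PSP (and SATP) each NMP-claimed device is using
  esxcli storage nmp device list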

Storage Array Type Plug-In (SATP)

SATP Storage Array Type Plug-ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers a Storage Array Type Plug-in (SATP) for every type of array that VMware supports in the VMware Hardware Compatibility List (HCL). It also provides default SATPs that support non-specific active/active and ALUA storage arrays, and a local SATP for direct-attached devices. Each SATP accommodates the special characteristics of a certain class of storage arrays and can perform the array-specific operations required to detect path state and to activate an inactive path. As a result, the NMP module itself can work with multiple storage arrays without having to be aware of the storage device specifics. The SATP monitors the health of each physical path and can respond to error messages from the storage array to handle path failover. There are also third-party SATPs that a storage vendor can provide to take advantage of unique properties of its arrays.
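SATPs and their default PSPs can be inspected and changed from the command line; a minimal sketch (the example SATP/PSP pairing is illustrative, not a recommendation for any particular array):

  # List the SATPs installed on the host and the default PSP associated with each
  esxcli storage nmp satp list
  # List the claim rules that map arrays to SATPs
  esxcli storage nmp satp rule list
  # Change the default PSP for an SATP (example: Round Robin for ALUA-claimed arrays)
  esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR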

Native Multipathing Plugin (NMP)

Native Multipathing Plugin (NMP) is a generic VMkernel Multipathing Plugin (MPP) provided by default from VMware and built into ESX/ESXi. NMP is used when the storage array does not have a third-party MPP solution. What does NMP do? It manages physical path claiming and unclaiming; registers and de-registers logical storage devices; associates a set of physical paths with a specific logical storage device, or LUN; processes I/O requests to storage devices, selecting an optimal physical path for the request (load balancing) and performing the actions necessary to handle failures and request retries; and supports management tasks such as abort or reset of logical storage devices. NMP is an extensible module that manages two types of subplugins: Storage Array Type Plugins (SATPs) and Path Selection Plugins (PSPs). Storage Array Type Plugins (SATPs) are responsible for handling array-specific operations such as path failover.
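Which devices the NMP has claimed, and the subplugins it selected for each, can be confirmed from the command line; a minimal sketch:

  # Devices claimed by the NMP, showing the SATP and PSP selected for each
  esxcli storage nmp device list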

MPP - Multipathing Plug-in

The top-level plug-in in the Pluggable Storage Architecture (PSA) is the Multipathing Plug-in (MPP). The MPP can be either the built-in MPP, which is called the Native Multipathing Plug-in (NMP), or a third-party MPP supplied by a storage vendor. In ESXi, storage is accessed through the VMware built-in MPP (the NMP) or a third-party MPP; VMware's Native Multipathing Plug-in is itself an MPP. The process for connecting a storage array to VMware includes: check the VMware Hardware Compatibility List (HCL) to determine whether it is a supported array; use the built-in NMP to handle multipathing and load balancing if it is in the HCL; use a supported third-party MPP if there is a need for additional functionality provided by that MPP. Third-party MPP solutions such as Symantec DMP and EMC PowerPath/VE replace the behavior of the NMP, SATP, and PSP, taking control of the path failover and load-balancing operations for the storage devices they claim. Third-party MPPs run in parallel with the NMP, each claiming the devices specified by its claim rules.
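The multipathing plug-ins registered with the PSA, the NMP plus any third-party MPPs, can be listed from the command line; a minimal sketch:

  # List the plug-ins loaded into the Pluggable Storage Architecture; the plugin class column identifies multipathing (MP) plug-ins
  esxcli storage core plugin list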

Pluggable Storage Architecture (PSA)

PSA - (Storage API - Multipathing) With a SAN, to improve availability, the administrator can create multiple, redundant paths between hosts and storage targets or LUNs. However, multiple, redundant paths can introduce confusion if not properly managed. Multipathing software was created to minimize that confusion: it takes control of all I/O requests, chooses the best path for data transmission, and manages path failover if an active path becomes unavailable. The multipathing software in ESXi is known collectively as the Pluggable Storage Architecture (PSA). The PSA is used to manage storage multipathing in ESXi. It is an open, modular framework that coordinates the simultaneous operation of multipathing plug-ins (MPPs) created by VMware and/or third-party software developers. The vSphere Pluggable Storage Architecture (PSA) framework is a special VMkernel layer in vSphere that manages storage multipathing; it coordinates the NMP and any third-party MPPs installed on the host.
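The paths the PSA manages can be listed from the command line; a minimal sketch with a placeholder identifier:

  # List every storage path known to the host, with adapter, target, and device information
  esxcli storage core path list
  # List only the paths to a specific device
  esxcli storage core path list --device naa.xxx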

LUN Masking

LUN Masking “If you only implement SAN zoning, a host could gain access to every LUN that is available on any storage array in the zone.” Beyond zoning, LUN masking allows the administrator to further lock down access to the storage unit. LUN masking is done on the storage controller, and it hides specific LUNs from specific servers. LUN masking defines relationships between LUNs and individual servers and is used to further limit which LUNs are presented to a host. "Zoning is controlling which HBAs can see which array service processors through the switch fabric. LUN masking is controlling what the service processors tell the host with regard to the LUNs that they can provide. In other words, the storage administrator can configure the service processor to lie about which LUNs are accessible.” "LUN masking is the ability of a host or an array to intentionally ignore WWNs that it can actively see (in other words, that are zoned to it)."
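ESXi can also mask paths on the host side using the PSA's MASK_PATH plug-in; a minimal sketch, with placeholder rule number, path location, and device identifier:

  # Add a claim rule that hands a specific LUN location to the MASK_PATH plug-in
  esxcli storage core claimrule add --rule 200 --type location --adapter vmhba2 --channel 0 --target 1 --lun 20 --plugin MASK_PATH
  # Load the new rule, unclaim the affected device, then apply the loaded rules
  esxcli storage core claimrule load
  esxcli storage core claiming reclaim --device naa.xxx
  esxcli storage core claimrule run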

Zoning

Zoning Zoning is a logical separation of traffic between hosts and resources. A SAN zone is similar to an Ethernet VLAN: it creates a logical, exclusive path between nodes on the SAN. The SAN makes storage available to servers in the form of LUNs, and a LUN is potentially accessible by every server on the SAN. In almost every case, having a LUN accessible by multiple servers can create problems such as data corruption as multiple servers contend for the same disk resources. To minimize such issues, zoning and/or LUN masking can be employed to isolate and protect SAN storage devices. Zoning and LUN masking allow the administrator to dedicate storage devices on the SAN to specific servers. A SAN is populated by nodes; nodes can be either servers or storage devices. Servers are typically referred to as initiators, and storage devices are typically the targets. Zoning creates a relationship between initiator and target nodes on the SAN. With zoning, you create relationships that control which initiators can communicate with which targets.

Disk Shares

Disk shares "Proportional share" method. If multiple VMs access the same VMFS datastore and the same logical unit number (LUN), there may be contention as they try to access the same virtual disk resource at the same time. Under certain conditions, the administrator may need to prioritize disk access for specific virtual machines; this can be done using disk shares. If you want to give priority to specific VMs when there is access contention, you can do so using disk shares. Using disk shares, the administrator can ensure that the more important virtual machines get preference over less important virtual machines for I/O bandwidth allocation. “Shares is a value that represents the relative metric for controlling disk bandwidth to all virtual machines. The values are compared to the sum of all shares of all virtual machines on the server.” “Disk shares are relevant only within a given host. The shares assigned to virtual machines on one host have no effect on virtual machines on other hosts.”

Virtual Machine Storage Policies

VM Storage Policies Virtual machine storage policies enable the administrator to define storage requirements for the virtual machine and determine: which storage/datastore is provided for the virtual machine, how the virtual machine is placed within the storage, and which data services are offered to the virtual machine. Storage policies define the storage requirements for the virtual machine or, more specifically, for the applications running in the virtual machine. Applying a storage policy to a virtual machine lets you check whether the datastore meets the requirements of the VM as defined by the policy. Storage policies identify the appropriate storage to use for a given virtual machine. “In software-defined storage environments, such as Virtual SAN and Virtual Volumes, the storage policy also determines how the virtual machine storage objects are provisioned and allocated within the storage resource to guarantee the required level of service.”

Virtual Disk Alignment

Align Virtual Disks In a properly aligned storage architecture, the units of data in the various storage layers are aligned in such a way as to maximize I/O efficiency. In an unaligned storage architecture, accessing a file from the OS layer results in extra I/O operations at the other storage layers. In a properly aligned structure, Windows guest OS clusters, VMFS blocks, and SAN chunks line up so that I/O access at the guest OS layer results in the minimum amount of I/O access at the other layers: an I/O operation on Cluster 1 results in an I/O operation of one block at the VMFS layer and one chunk at the SAN layer, with no extra SAN data access required. In an unaligned structure, the units of data at the other layers are not laid out on even boundaries, so an I/O operation on a single cluster may result in many additional I/O operations at the other storage layers: an I/O access on Cluster 2 from the guest OS layer spans multiple VMFS blocks and SAN chunks.
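Partition alignment on an ESXi device can be checked from the command line; a minimal sketch with a placeholder identifier (VMFS partitions created by vSphere 5.x and later typically start at sector 2048, a 1 MB boundary):

  # Show the partition table; the second value of each partition entry is the starting sector
  partedUtil getptbl /vmfs/devices/disks/naa.xxx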

VSAN - Virtual SAN

VSAN “VMware Virtual SAN abstracts and pools server-side flash and disk into shared pools of capacity with policy-based management and application-centric data services.” Virtual SAN pools server-attached hard disk drives and flash (SSD and PCIe) devices to create a distributed shared datastore that abstracts the storage hardware and provides a Software-Defined Storage (SDS) tier for virtual machines. VSAN leverages local storage from a number of ESXi hosts in a cluster and creates a distributed shared datastore. This shared datastore can be used for VM placement and core vSphere technologies such as vMotion, DRS, VMware Site Recovery Manager, etc. “VSAN leverages the power of any solid state drives (SSDs) on the hosts in the cluster for read caching and write buffering to improve performance." In a hybrid architecture, VSAN has both flash (SSD and PCIe) and hard disk drive (HDD) devices: the flash devices are utilized as a read cache and write buffer, while the HDDs provide the persistent capacity tier.
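Basic Virtual SAN status is available from the command line on a cluster member; a minimal sketch, assuming ESXi 5.5 or later with the vsan namespace present:

  # Show whether the host is part of a Virtual SAN cluster and its membership details
  esxcli vsan cluster get
  # List the local disks (flash and HDD) claimed by Virtual SAN
  esxcli vsan storage list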

VSA - vSphere Storage Appliance

VSA "Shared Storage for Everyone". The vSphere Storage Appliance (VSA) allows local storage on ESXi hosts to be used as shared storage, enabling the use of many storage-dependent virtualization features, such as vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA), without the need for a SAN. VSA also functions as a storage cluster, providing continued availability of all the data it stores even if one node in the cluster fails. The VSA is a VM on each host in the cluster; if one host fails, the VSA fails over automatically to one of the other hosts. vSphere Storage Appliance (VSA) is VMware software that transforms existing, local server storage into shared storage that can be shared by up to three vSphere hosts. Compared side by side, the vSphere Storage Appliance provides low-cost, simple shared storage for small deployments, while VMware Virtual SAN provides scale-out distributed storage designed for virtualized and cloud environments.

Array & Virtual Disk Thin Provisioning

Array and Virtual Disk Thin Provisioning Array thin provisioning enables the creation of a datastore that is logically larger than what the array can physically support. The consequence is that there may not be enough physical space available when it is needed. Array thin provisioning is done in the storage array before, and/or independent of, the virtualization layer. It allows the organization to maximize space utilization and delay the purchase of additional capacity, minimizing CAPEX. Array thin provisioning: you can overallocate or oversubscribe the storage by allowing a server to claim more storage than has physically been set aside for it. This increases flexibility when you don't know which hosts will grow, yet you are sure there will be growth. Physical storage capacity is dedicated to each host only when data is actually written to the disk blocks. Virtual disk thin provisioning controls how much of the datastore capacity is actually committed to a virtual disk: space is allocated on demand as the guest writes data.
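Creating a thin-provisioned virtual disk from the command line; a minimal sketch with placeholder datastore, folder, and file names:

  # Create a 10 GB virtual disk that allocates datastore blocks only as data is written
  vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk
  # The flat file reports the full provisioned size; 'du' shows the blocks actually allocated
  du -h /vmfs/volumes/datastore1/testvm/testvm_1-flat.vmdk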

Thin Provisioning - Provisioning - Storage Features

Thin Provisioning Array thin provisioning allows you to create a datastore that is logically larger than what the array can actually support. In a general sense, thin provisioning of disks allows you to promise more capacity than you can currently deliver. "Space required for thin-provisioned virtual disk is allocated and zeroed on demand as the space is used. Unused space is available for use by other virtual machines." For example, if an administrator allocates 200 GB to a new virtual machine and the virtual machine uses only 40 GB, the remaining 160 GB are available for allocation to other virtual machines. As the virtual machine requires more space, vSphere provides additional blocks (if available) to it, up to the originally allocated size, 200 GB in this case. By using thin provisioning, administrators can create virtual machines with virtual disks of a size that is appropriate for the long term without having to immediately commit the total disk capacity.