June 06, 2015

vSphere Storage Terminologies - vFRC - vSphere Flash Read Cache

vSphere Flash Read Cache (vFRC)

vSphere Flash Read Cache enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource. Using vSphere Flash Read Cache, solid-state caching space can be assigned to VMs in much the same way as CPU cores, RAM, or network connectivity is assigned to VMs.

vFRC
  • Introduced in vSphere 5.5
  • Reduces latency
  • Leverages local SSDs (solid-state drives) or PCIe flash devices to create a “Flash Pool” for VMs
  • Locally caches virtual machine read I/O on an ESXi host
  • Offloads I/O from the SAN to the local SSD
  • Enables virtualization of previously difficult-to-virtualize, I/O-intensive applications
  • Requires Enterprise Plus licensing
  • Flash Pool is managed as a resource similar to CPU and storage
  • Does not cache writes; performs only “write-through” caching
vFRC components include the Flash Pool and the Flash Cache Module:
  • Flash Pool:
    • “Virtual Flash Cache”
    • A pool of SSD or PCIe flash devices on the ESXi host
    • Used by vFRC to accelerate I/O
    • vFRC creates a VMware proprietary file system called VFFS on the flash device
    • Read cache can be allocated to virtual machine disks
    • Read operations from the VM are directed to the flash cache
    • Common reads are stored in the Virtual Flash cache
    • Successful cache hits should accelerate I/O
  • Flash Cache Module:
    • “Virtual Flash Device”
    • Built into vSphere 5.5 and later as a loadable kernel module
    • Sits between the Virtual Machine and the storage
    • Purpose is to cache the data that is read by the Guest OS
vFRC uses an adaptive caching mechanism that tries to identify long-term data and keep that data in cache, since there is a higher likelihood that it will be reused.
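To see why a read cache helps, here is a minimal Python sketch (not VMware code) of the average read latency a VM would observe for a given cache hit rate; the latency figures in the example are assumptions, not measurements.

# Illustrative sketch only: average read latency seen by a VM when a
# fraction of reads is served from a local flash cache. The cache is
# write-through, so writes always go to the SAN and are unaffected.

def effective_read_latency_ms(hit_rate, flash_latency_ms, san_latency_ms):
    """Weighted average read latency for a cache with the given hit rate."""
    if not 0.0 <= hit_rate <= 1.0:
        raise ValueError("hit_rate must be between 0 and 1")
    return hit_rate * flash_latency_ms + (1.0 - hit_rate) * san_latency_ms

# Assumed example numbers: 0.2 ms for a local SSD hit, 5 ms for a SAN read.
for hit_rate in (0.0, 0.5, 0.9):
    print(f"hit rate {hit_rate:.0%}: "
          f"{effective_read_latency_ms(hit_rate, 0.2, 5.0):.2f} ms average")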

"vSphere Flash Read Cache’s main use case is to accelerate performance for read intensive operations based workloads ranging from VDI, business critical applications which have requirements for data locality and read intensive operations."

“vSphere Flash Read Cache is designed to enhance the performance of applications that have I/O patterns based on read-intensive operations.”

Requirements for vFRC
  • Ensure environment is compatible with vSphere 5.5 or higher
  • Each host configured with Enterprise Plus license
  • A minimum of one and a maximum of 32 hosts per cluster
  • Supports SATA, SAS and PCI Express storage device interfaces
  • Works with VMs on VMFS, NFS and RDM datastores
  • Virtual machine hardware version 10 or higher
  • vSphere Web Client
Limitations for vFRC
  • A maximum of eight Flash-based devices per VFFS
  • One Virtual Flash Resource (VFFS) per host
  • A maximum capacity of 32TB supported for the VFFS per host
  • A maximum of 4 TB per Flash-based device
  • Not supported on vSphere FT enabled virtual machines
  • vSphere vMotion, vSphere DRS and vSphere HA work with vFRC-enabled VMs only on hosts with available vSphere Flash Read Cache
The types of storage devices supported for vSphere Flash Read Cache are SATA, SAS, and PCI Express. Check the VMware Hardware Compatibility List (HCL) for an up-to-date list of supported Flash-based devices.

Although vSphere Flash Read Cache doesn’t need to be enabled on every host in a cluster, it is recommended to enable it on each host.

vFRC is compatible with vSphere vMotion, svMotion and XvMotion. The vSphere vMotion workflow has been modified to include two new migration settings for cache contents:
  • Always migrate the cache contents
    • Copy
    • VM migration proceeds only if all of the cache contents can be migrated to the destination host
  • Do not migrate the cache contents
    • Drop
    • Drops the write-through cache. Cache is rewarmed on the destination host
In the case of a vSphere HA event, the virtual machine will fail to power on if the VFFS on the target host is full.

"VFFS is a derivative of the VMFS file system that is optimized to group flash devices into a single pool of resources. VFFS is not accessible in the UI. VFFS is a vSphere resource and not a conventional datastore.

A vSphere hypervisor can utilize as much as 4TB of the Virtual Flash Resource in the form of a Virtual Flash Host Swap Cache for host memory–caching purposes.

Reference:

vSphere Storage Terminologies - IOPS

IOPS

Performance is represented in storage products by three statistics: throughput, latency and IOPS.

Throughput is the speed of the data transfer into or out of the storage device. It is a measure of the amount of data that can be pushed through a point in the data path in a given amount of time.
  • Expressed as bytes (kilobytes or megabytes) per second in a storage environment
  • The higher the value, the better
Throughput – a measure of the data transfer rate, or I/O throughput, measured in bytes per second or megabytes per second (MBps).

Latency is a measure of how long it takes for an IO transaction to begin from the requesting application’s viewpoint.
  • Measured in fractions of a second
  • The smaller the latency number, the better
Latency – a measure of the time taken to complete an I/O request, also known as response time. This is frequently measured in milliseconds (one thousandth of a second).

IOPS is a measure of the number of storage transactions processed through a system every second.
  • Input Output Operations per Second
  • “How many Input or Output (IO) operations can be performed by the storage device every second”
  • “How often IOs can occur”
  • A measure of how many IO transactions a disk can complete in a second
  • “How often the storage device can perform a data transfer"
  • “How quickly each drive can process IO requests”
  • Measured in Input/Output Operations per Second (IOPS)
  • Varies depending on the type of IO being done
  • The greater the number of IOPS, the better the performance
IOPS – I/Os per second – a measure of the total I/O operations (reads and writes) issued by an application server.

When comparing IOPS, take into account such things as the size of the transaction, and the type of transaction, i.e. sequential vs. random IO, etc.

To estimate per-disk IOPS, use the disk’s average rotational latency and average seek time, both of which can be obtained from the disk manufacturer.
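As a rough illustration (the numbers below are assumptions, not vendor figures), here is a quick Python sketch of that classic single-disk estimate, plus the throughput implied by a given I/O size:

# Illustrative estimate of a single spinning disk's random IOPS from the
# figures a manufacturer publishes: average seek time and average
# rotational latency (half a revolution).

def disk_iops(avg_seek_ms, avg_rotational_latency_ms):
    """Theoretical random IOPS for one disk: 1 / (seek + rotational latency)."""
    return 1000.0 / (avg_seek_ms + avg_rotational_latency_ms)

def throughput_mbps(iops, io_size_kb):
    """Throughput implied by an IOPS figure and an I/O size, in MB/s."""
    return iops * io_size_kb / 1024.0

# Example: a disk with ~4 ms average seek and ~3 ms rotational latency.
iops = disk_iops(avg_seek_ms=4.0, avg_rotational_latency_ms=3.0)
print(f"~{iops:.0f} IOPS")                          # ~143 IOPS
print(f"~{throughput_mbps(iops, 8):.1f} MB/s at 8 KB I/Os")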

Bandwidth vs. Throughput
Bandwidth is the theoretical maximum amount of data that can travel through a 'channel'. Throughput is how much data actually travels through the 'channel' taking into account various overheads. Your Internet Service Provider (ISP) might provide you with a 1 Gbps Internet connection (Bandwidth), however on a particular session, you might only get 50 Mbps download speed (Throughput).

Reference:

June 05, 2015

vSphere Storage Terminologies - NPIV

NPIV

With Fibre Channel (FC), nodes connect to each other via FC ports in order to exchange information. The following are some of the ports available on a FC network:
  • N_Port: This is a port on the Fibre Channel fabric. It is an end node port, used to connect a node to the fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage device.
    • N_Ports connect to the FC switch using an F_Port
  • F_Port: This is a port on a Fibre Channel switch that is connected to an N_Port. It is used to connect an N_Port point-to-point to a switch. This port makes use of the 24-bit port address.
    • F_Port is the port into which a server’s HBA or a storage array’s target port is connected
  • E_Port: This is an expansion (or extension) port. It connects switches. Switches connect to one another using an inter-switch link (ISL). The ports at the end of the ISL are E_Ports.
    • The connection between two E_Ports forms an Inter-Switch Link (ISL).
  • Other ports defined by the Fibre Channel protocol include the following and are less common:
    • FL_Port: fabric loop port
    • NL_Port: node loop port
    • L_Port: loop hub port
    • G_Port: generic port
    • U_Port: universal port
Once it is initialized and registered, an N_Port (an end node port) has the World-Wide Port Name (WWPN) of the physical HBA and is given an N_Port_ID. Each N_Port enables one connection to the FC fabric.

In a virtualization environment where multiple virtual machines can exist on a physical host with one or a small number of HBAs, there will be more virtual machines than N_Ports (or HBAs) to connect those virtual machines to the FC fabric.

N_Port_ID Virtualization (NPIV) enables the virtualization of WWPN and N_Port_ID. It is an ANSI T11 standard. It describes how a single FC physical HBA port can be registered on a fabric as multiple, virtual WWPNs. Using NPIV, an ESXi host for example may allow multiple virtual machines to connect to the SAN using just one or a few physical HBAs.

WWPN vs. N_Port_ID
Each FC HBA has both a world-wide node name (WWNN) and a world-wide port name (WWPN). The WWNN is a node name, i.e. it is used to identify the physical HBA on the server or the SAN switch chassis. The WWPN, which is more relevant in our discussion, uniquely identifies a port on a physical HBA. A dual-port HBA would have one WWNN and two WWPNs.

A world-wide port name (WWPN) is a unique 64-bit identifier assigned to each Fibre Channel port on a Fibre Channel device.

WWNN and WWPN are globally unique 64-bit addresses.
With the WWPN, a FC HBA port can register (Fabric Login (FLOGI)) with the switch and be given another FC address known as N_Port_ID.

The N_Port_ID is a 24-bit field used to route frames through a Fibre Channel network. The 24-bit field is composed of three bytes:
  • 1st byte (Domain ID):
    • This is the address of the FC switch or director
    • It is unique to the fabric
    • There are theoretically (256 – # reserved addresses) = 239 switches possible in a SAN fabric
  • 2nd byte (Area ID):
    • This part of the address is used to identify the individual switch ports
    • Identifies the switch port to which an N-type port (N_Port) is attached
  • 3rd byte (Port ID)
    • This address is used to identify a single FC object on the SAN fabric
    • Provides 256 addresses for identifying attached N_Ports
N_Port_ID is a 24-bit address and it is used by the FC port for communication with the SAN.
The N_Port_ID is used for routing; the WWN/WWPN is used for both device level access control in a storage controller (LUN-masking) and for switch level access control on a FC switch (zoning).
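A small Python sketch (the example address is made up) that splits a 24-bit N_Port_ID into the three one-byte fields described above:

# Illustration only: decode a 24-bit Fibre Channel address (N_Port_ID)
# into its Domain ID, Area ID and Port ID bytes.

def decode_fcid(fcid):
    """Return (domain_id, area_id, port_id) for a 24-bit N_Port_ID."""
    if not 0 <= fcid <= 0xFFFFFF:
        raise ValueError("N_Port_ID is a 24-bit value")
    domain_id = (fcid >> 16) & 0xFF   # 1st byte: the switch (domain)
    area_id = (fcid >> 8) & 0xFF      # 2nd byte: the switch port (area)
    port_id = fcid & 0xFF             # 3rd byte: the attached N_Port
    return domain_id, area_id, port_id

print(decode_fcid(0x010200))   # -> (1, 2, 0): domain 1, area 2, port 0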

With NPIV, a single physical HBA port can be assigned multiple virtual WWPNs.

In a SAN fabric, each attached N_Port has a WWPN and is given an N_Port_ID.

NPIV
  • N-Port ID Virtualization
  • An ANSI T11 standard
  • Describes how a single Fibre Channel Physical HBA port can register with a fabric using several worldwide port names (WWPNs), what might be considered Virtual WWNs
NPIV is available only for virtual machines with RDM disks; in addition, the HBAs and switches used to access the storage must be NPIV-capable.

"NPIV technology allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This ability makes the HBA port appear as multiple virtual ports, each having its own ID and virtual port name."

Reference:

vSphere Storage Terminologies - Storage Containers

Storage Containers

For management purposes, storage systems group virtual volumes (VVOL) into storage containers. Storage containers can be used to isolate or logically partition virtual machines with diverse needs and requirements.

A storage container is a logical abstraction to which Virtual Volumes are mapped. vSphere maps a storage container to a datastore (of type Virtual Volume) and provides the applicable datastore-level functionality.

Storage containers are set up on the storage side by the storage administrator, who controls their size and number. Unlike a physical LUN, a storage container can be extended in size, and storage containers do not need a file system. Another advantage of storage containers is that they allow storage capabilities to be applied on a per-VM basis instead of a per-LUN basis. As a result, the different VMs in a storage container can have different storage capabilities.

There is a limit of 256 storage containers per host.

To the vSphere administrator, the Storage Container is seen simply as another datastore. To the storage administrator, each VMware Virtual Machine object, e.g. VMDK, VM configuration file, etc. is simply a virtual volume.


There is typically a minimum of three Virtual Volumes per virtual machine:
  • Config
    • VMFS-formatted, thick-provisioned
    • 4 GB volume
    • Hosts the VMX file, logs and other miscellaneous files
  • Data
    • The equivalent of a VMDK (referred to as VMFS virtual volume)
    • Thin provisioned
  • Swap
    • The equivalent of the VSWP file
    • Thick provisioned
    • Same size as the virtual machine memory (less any memory reservation)
Additionally when a VMware snapshot is taken, one or two more virtual volume types are created:
  • Snapshot
    • One per Data VVOL with VM snapshot
    • Stores snapshot delta changes
  • Memory
    • Created if the “memory dump” option is selected when the snapshot is created
    • Same size as the virtual machine memory
Each VM snapshot adds one snapshot VVol per virtual disk and one memory VVol (if requested); therefore a VM with three virtual disks would have (1 + 1 + 3) = 5 Virtual Volumes, and snapshotting that VM would add (3 + 1) = 4 more.
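The same arithmetic, as a small Python sketch (the helper name is mine, for illustration only):

# Count the VVols for a VM: one config VVol, one swap VVol (when powered
# on), one data VVol per virtual disk, and each snapshot adds one snapshot
# VVol per disk plus (optionally) one memory VVol.

def vvol_count(num_disks, num_snapshots=0, snapshot_memory=False):
    base = 1 + 1 + num_disks                            # config + swap + data
    per_snapshot = num_disks + (1 if snapshot_memory else 0)
    return base + num_snapshots * per_snapshot

print(vvol_count(3))                                          # 5
print(vvol_count(3, num_snapshots=1, snapshot_memory=True))   # 5 + 4 = 9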

Summary of VVols:

Reference:

vSphere Storage Terminologies - VVOL

VVols

"The VVols architecture is part of the VMware VASA 2.0 specification, which defines a new architecture for VM-level storage array abstraction. VASA 2.0 includes new interfaces to query storage policies to enable VMware’s Storage Policy Based Management (SPBM) to make intelligent decisions about virtual disk placement and compliance."

"VMware Virtual Volumes (VVols) provides VM-level granularity by introducing a 1:1 mapping of VM objects to storage volumes and supports policy-based management to simplify storage management in virtualized server environments.

Prior to VVols, storage arrays integrated with vSphere at the datastore level using VMware’s Virtual Machine File System (VMFS). With VVols, there is no need for a file system, and going forward users can choose to use VMFS or VVols (or both).

VVOL:
  • Virtualizes SAN and NAS devices
  • Virtual disks are natively represented on arrays
  • Enables VM granular storage operations using array-based data services
  • Storage policy-based management enables automated consumption at scale
  • Industry-wide initiative supported by major storage vendors
  • Included with vSphere
  • Five types of VVols (objects): Config, Data, Swap, Snapshot, Memory
  • NO filesystem needed (VMFS is history)
  • Virtual machine objects are stored natively on the array
With Virtual Volumes, each VM object gets its own virtual volume, with its own policies, and the storage administrator is only responsible for creating the storage container and is freed from managing LUNs.

Since a virtual machine is made up of a number of different files (VMDKs, configuration, snapshots, etc.) and each of these data objects is stored as a separate vVol, a VM is made up of multiple vVols.

With VVOLs enabled, creation of a new VM also creates multiple new VVOLs on the Storage Container:
  • On creation – one config VVOL plus one data VVOL for each virtual disk (two VVOLs for a single-disk VM)
  • On power on – one VVOL for swap
  • Additional – snapshot VVOLs (and a memory VVOL, if memory is included) when the VM is snapshotted
With VVOLs, storage operations such as cloning and snapshots are offloaded to the storage array, which can carry them out faster and more efficiently than the vSphere layer.

VVOL functions:
  • Enables VMware to offload per VMDK-level operations to storage systems
  • Enables storage systems to provide data services to individual applications and VMs
  • Enables application profile-based provisioning, monitoring and management of VM Volumes via vSphere and VASA
  • Enables protocol-agnostic management of VM objects, regardless of the protocol front-end (FC, iSCSI, NFS) used
Virtual Volumes allows a much larger number (hundreds or thousands) of VM objects to be stored and managed on an array, and it enables detailed visibility into the storage environment.

"VVOLs let block storage vendors provide per-VM data services like snapshots and replication by essentially storing each VM, or VMDK, in its own logical volume.”

Without VVOLs, managing separate LUNs per VM or per VM data object would take considerable resources and effort and would quickly run up against the vSphere (and FC/SCSI protocol) LUN limit of 256.

Without VVOL, think of the VMDK as existing at the ESXi hypervisor layer.

With VVOL, the VMDK is in effect pushed down into the storage system. As a result, ESXi is able to offload more of the processing to the storage system as storage now has more insight into the context of what the blocks of data holding the virtual machine objects mean. Cloning, snapshots, replication, etc. are handled more efficiently and operations are more granular, being VM based as opposed to LUN based.

Customer and Provider
The diagram below shows the interaction between the customer and the provider. In step #1 the array says “this is what I can offer” (e.g. RAID types, capacity, format, etc.). In step #2 the customer says “here is what I need” (e.g. IOPS, format, etc.). The array then responds either with the storage resource requested by the customer or with “no, I cannot meet your needs.”
A vVol is a “data object” identified by a GUID. vVol is more than just a datastore, more like a storage pool with a set of capabilities. It is a Datastore + Services + Metadata.

The vendor provider acts as a server in the vSphere environment: vCenter connects to it to obtain information about the available array topology and capabilities. A vendor provider can report information about one or more storage arrays and can support connections from one or more vCenter Servers.

Reference:

vSphere Storage Terminologies - VASA

VASA

vSphere Storage APIs - Storage Awareness (VASA) is one of a family of APIs used by third-party hardware, software, and storage providers to develop components that enable storage arrays to expose the array’s capabilities, configurations, health and events to the vCenter Server.

The following are the Storage APIs in this family:
  • Storage APIs - Multipathing, also known as the Pluggable Storage Architecture (PSA)
  • Storage APIs - Array Integration, formerly known as VAAI
  • Storage APIs - Storage Awareness
  • Storage APIs - Data Protection
  • Storage APIs - Site Recovery Manager
You will find Storage APIs - Storage Awareness referred to by other names, e.g.
  • vStorage APIs for Storage Awareness, VASA
  • vSphere APIs for Storage Awareness
It enables more advanced out-of-band communication between storage arrays and the virtualization layer.

Storage APIs - Storage Awareness (VASA). This is an example of a vCenter Server-based API. It enables storage arrays to inform the vCenter Server about the array’s capabilities, configurations, health and events.

Storage array capabilities are exported to the vSphere APIs using VASA.

What is VASA?
  • vSphere Storage API - Storage Awareness  (formerly vSphere API for Storage Awareness, commonly known as VASA)
  • Introduced with vSphere 5.
  • A set of standardized VMware APIs that allow storage vendors to push storage-related information into vCenter database
  • Enables VMware vCenter Server to detect the capabilities of the storage array LUNs and their datastores.
  • Improves the visibility of Physical Storage infrastructure through vSphere client
vSphere Storage APIs - Storage Awareness requirements:
  • vCenter Server 5.0 (or later)
  • ESX/ESXi hosts version 4.0 (or later)
  • Compliant storage arrays (that support Storage APIs - Storage Awareness)
A Storage (or VASA) Provider is a software component that is offered either by vSphere or by a third party. Third-party storage providers are also known as vendor providers. The third-party storage provider is typically installed on the storage side and acts as a storage awareness service in the vSphere environment.

vSphere (or built-in) storage providers typically run on the ESXi hosts and do not require registration. For example, the storage provider that supports Virtual SAN becomes registered automatically when you enable Virtual SAN.

The Storage Provider exposes three classes of information to vCenter Server:
  • Storage Topology: Lists physical storage array elements’ information
  • Storage Capabilities: Storage capabilities and the services offered by the storage array
  • Storage State: Health status of the storage array, including alarms and events for configuration changes
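As a rough, hedged illustration (not the VASA API; all class and field names here are assumptions), the three classes of information listed above could be modelled as simple data structures:

# Illustration only: the three classes of information a storage provider
# reports to vCenter Server, as plain Python dataclasses.

from dataclasses import dataclass, field

@dataclass
class StorageTopology:
    arrays: list = field(default_factory=list)        # physical array elements

@dataclass
class StorageCapabilities:
    capabilities: dict = field(default_factory=dict)  # e.g. {"replication": True}

@dataclass
class StorageState:
    alarms: list = field(default_factory=list)        # health alarms
    events: list = field(default_factory=list)        # configuration-change events

report = (StorageTopology(arrays=["array-01"]),
          StorageCapabilities(capabilities={"dedupe": True, "raid": "RAID-5"}),
          StorageState(alarms=[], events=["capability changed"]))
print(report)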
"The VASA provider enables communication between the vSphere stack — ESXi hosts, vCenter server, and the vSphere Web Client — on one side, and the storage system on the other. The VASA provider runs on the storage side and integrates with vSphere Storage Monitoring Service (SMS) to manage all aspects of Virtual Volumes storage."

The Vendor Provider is a third-party storage provider.

The vendor provider is made available by the storage vendor and installed on the storage array or on a management device.

Use the vSphere Compatibility Guide to verify which Vendor (or VASA) Providers are supported by your chosen storage array. E.g. below is a list of HP vendor providers supported up to ESXi 5.5 U2:
Vendor Provider Benefits
  • Storage System capability, configuration, status is made visible to vCenter Server. This enables an "end-to-end" view of your infrastructure from the vCenter Server.
  • Storage Capabilities information can be used in system-defined entries for Storage Profiles
The VASA provider is supplied by VMware or the storage vendor. Storage (or VASA) providers can run anywhere, except vCenter Server. Typically, the third-party storage provider runs on either the storage array service processor or on a standalone host.

VASA Provider or Storage Provider:  A storage‐side software component that acts as a web service interface (API) for the vSphere environment.  It can either run in the array or outside the array.

"The VASA 2.0 specification describes the use of virtual volumes to provide ease of access and ease of manageability to each VM datastore. Each VMware Virtual Machine Disk (VMDK) is provisioned as a separate VVol within the storage system. A single point of access on the fabric is provisioned via a protocol endpoint from the host to the storage."

"With VASA, capabilities such as RAID level, thin or thick provisioned, device type (SSD, Fast Class, or Nearline) and replication state can now be made visible from within vCenter Server’s disk management interface. This allows vSphere administrators to select the appropriate disk for virtual machine placement based on its needs."

VASA vs. VAAI
VASA and VAAI both belong to the family of vSphere Storage APIs and rely on storage-vendor components to enhance several vSphere features and solutions. The two features work independently and can coexist:

VASA (Storage APIs - Storage Awareness) collects configuration, capability and storage health information from storage arrays. One use-case is to build Storage Profiles based on array capabilities.
VAAI (Storage APIs - Array Integration) provides for hardware acceleration of certain storage operations, reducing CPU overhead on the host.

Reference:

June 02, 2015

vSphere Storage Terminologies - VAAI

VAAI

The vSphere Storage APIs - Array Integration, formerly/commonly known as VAAI, is a feature introduced in ESXi/ESX 4.1 that provides hardware acceleration functionality.

Storage APIs - Array Integration (VAAI) enables VMware hypervisor to offload specific virtual machine and storage management operations to storage hardware. With assistance from the storage hardware, the host performs storage operations faster and consumes less CPU, memory, and storage fabric bandwidth.

"In the vSphere 5.x and later releases, the ESXi extensions (referred to as Storage APIs - Array Integration) are implemented as the T10 SCSI based commands. As a result, with the devices that support the T10 SCSI standard, ESXi host can communicate directly with storage hardware and does not require the VAAI plug-ins."

Without T10 SCSI support, ESXi reverts to using the VAAI plug-ins installed on your host. The VAAI plug-ins are vendor-specific and are provided either by VMware or a development partner.

VAAI's main purpose is to leverage array capabilities to carry out storage related tasks more efficiently:
  • Offloading tasks to reduce overhead
  • Benefit from enhanced array mechanisms 
"Atomic Test & Set (ATS) is a superior alternative to SCSI Reservations when it comes to metadata locking on VMFS."

For vSphere to take advantage of VAAI, the storage array has to support VAAI hardware acceleration. One way to check whether the storage array is supported for VAAI hardware acceleration is to check the VMware Hardware Compatibility List (HCL).

"Here's a quick rundown of the storage offloads available in vSphere 5.5:
  • Hardware-Assisted Locking
    (VMFS is a shared cluster file system that requires file locking to prevent multiple hosts writing to it at the same time. When one host makes an update to the VMFS metadata, a locking mechanism is required to maintain file system integrity and prevent another host updating the same metadata.)
    • Also called atomic test and set (ATS), this feature supports discrete VM locking without the use of LUN-level SCSI reservations.
    • “ATS enables granular locking of block storage devices beyond the basic full-LUN reservations included in SCSI from days of yore. This allows more VMs per LUN once VAAI is turned on. NFS doesn’t need ATS since locking is a non-issue and VM files aren’t shared the same way LUNs are.”
    • vSphere uses SCSI reservations when VMFS metadata needs to be updated. Hardware-assisted locking allows for disk locking per sector instead of locking the entire LUN. This offers a dramatic increase in performance when lots of metadata updates are necessary (such as when powering on many VMs at the same time).
    • A SCSI reservation locks a whole LUN/datastore and prevents other hosts from doing metadata updates of a VMFS volume on the datastore. This can introduce contention when many virtual machines are using the same datastore. “It is a limiting factor for scaling to very large VMFS volumes. ATS is a lock mechanism that must modify only a disk sector on the VMFS volume.”
    • VAAI uses a single atomic test and set operation (ATS), which is an alternative method to VMware’s SCSI-2 reservations. ATS allows a VMFS datastore to scale to more VMs per datastore, and more ESXi hosts can attach to each LUN.
    • vSphere 6.0 offers the use of ATS with a fall-back to SCSI-2 reservations under certain conditions. With the ATS+SCSI method, the first lock attempt uses ATS and then falls back to using the older SCSI-2 Reserve/Release mechanism.
    • Scalable lock management (sometimes called “atomic test and set,” or ATS) can reduce locking-related overheads, speeding up thin-disk expansion as well as many other administrative and file system-intensive tasks. This helps improve the scalability of very large deployments by speeding up provisioning operations like boot storms, expansion of thin disks, snapshots, and other tasks.
  • Hardware-Accelerated Full Copy
    (Hardware Acceleration cloning (sometimes called full copy or copy offload). Allows arrays to integrate with vSphere to transparently offload certain storage operations to the array. This integration significantly reduces CPU overhead on the host.)
    • Host-based read/write copy operations are replaced by the VAAI XCOPY (SCSI EXTENDED COPY) command
    • XCOPY enables the storage array to perform full copies of data completely within the storage array without having to communicate with the ESXi host during the reading and writing of data. This saves the ESXi host from having to perform the read and then write of data. This reduces the time needed to clone VMs or perform Storage vMotion operations.
    • Hardware-accelerated full copy allows storage arrays to make full copies of data completely internal to the array, resulting in significant reduction in storage traffic between the host and the array and reduces the time required to perform operations like cloning VMs or deploying new VMs from templates.
  • Hardware-Accelerated Block Zeroing
    (The block copy & block zeroing primitives used by VM Snapshots, Cloning operations, Storage vMotion and by virtual disks built with the eager zeroed thick option).
    • When a new virtual disk is created with VMFS as an eager-zeroed thick disk, the disk must be formatted and the blocks must be zeroed out before data can be written on them. Block zeroing removes this task from the ESXi host by moving the function down to the storage array with VAAI. This increases the speed of the block zeroing process.
    • Sometimes called Write Same (Zero), this functionality allows storage arrays to zero out large numbers of blocks to provide newly allocated storage without any previously written data. This can speed up operations like creating VMs and formatting virtual disks.
    • Block zeroing speeds up creation of eager-zeroed thick disks and can improve first-time write performance on lazy-zeroed thick disks and on thin disks.
    • Note: “Some storage arrays, on receipt of the WRITE_SAME SCSI command, will write zeroes directly to disk. Other arrays do not need to write zeroes to every location; they simply do a metadata update to write a page of all zeroes.”
  • Thin Provisioning
    (Array Thin Provisioning APIs. Help to monitor space use on thin-provisioned storage arrays to prevent out-of-space conditions, and to perform space reclamation.)
    • vSphere 5.0 added and vSphere 5.5 improves on the ability to reclaim dead space (space no longer used) via the T10 UNMAP command. vSphere also has support for providing advance warning of thin-provisioned out-of-space conditions and provides better handling for true out-of-space conditions.
    • The UNMAP primitive allows ESXi to inform the storage array that space previously occupied by a VM can be reclaimed. This allows an array to correctly report space consumption of a Thin Provisioned datastore, and allows users to correctly monitor and forecast storage requirements.
    • The SCSI UNMAP command, a VAAI primitive, enables an ESXi host to inform the storage array that space can be reclaimed that previously had been occupied by a virtual machine that has been migrated to another datastore or deleted.
    • “Using thin provision UNMAP, ESXi can allow the storage array hardware to reuse no-longer needed blocks. This enables a correlation between what the array reports as free space on a thin-provisioned datastore and what vSphere reports as free space."
"vSphere 5 provides enhanced support for the T10 standards without the need to install a plug-in, enabling vSphere to directly utilize more advanced features of the storage array."

A VAAI plug-in is necessary for storage systems that do not natively support the T10 SCSI standard.

To confirm that your hardware does support VAAI and that it is being used, follow the instructions in VMware KB article 1021976.

The Hardware Acceleration status will be one of SUPPORTED, UNSUPPORTED or UNKNOWN in the UI.

If ATS is supported, the status shows SUPPORTED; if ATS, Block Zeroing or Clone is not supported, the status is UNSUPPORTED; otherwise the status remains UNKNOWN.
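As an illustration only (not ESXi code), here is a tiny Python function that mirrors the rule just described; the function name and argument layout are assumptions:

# Mirror the status rule stated above. Each argument is True (supported),
# False (not supported) or None (not yet determined).

def hardware_acceleration_status(ats, block_zeroing, clone):
    if ats:
        return "SUPPORTED"
    if False in (ats, block_zeroing, clone):
        return "UNSUPPORTED"
    return "UNKNOWN"

print(hardware_acceleration_status(True, True, True))    # SUPPORTED
print(hardware_acceleration_status(False, True, True))   # UNSUPPORTED
print(hardware_acceleration_status(None, None, None))    # UNKNOWN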

"vSphere 5.5 also includes hardware offloads for NAS:
  • Reserve Space
    • This functionality lets you create thick-provisioned VMDKs on NFS datastores, much like what is possible on VMFS datastores.
  • Full File Clone
    • The Full File Clone functionality allows offline VMDKs to be cloned (copied) by the NAS device.
  • Lazy File Clone
    • This feature allows NAS devices to create native snapshots for the purpose of space-conservative VMDKs for virtual desktop infrastructure (VDI) environments. It's specifically targeted at emulating the Linked Clone functionality vSphere offers on VMFS datastores.
  • Extended Statistics
    • When you're leveraging the Lazy File Clone feature, this feature allows more accurate space reporting."
Reference:

vSphere Storage Terminologies - Protocol Endpoint

Protocol Endpoint

The Protocol Endpoint (PE) provides the ESXi hypervisor access and visibility to the virtual machine objects (VMDK, VMX, swap, etc.) on the VVOL. It acts as an I/O proxy, managing the traffic between the virtual machines and virtual machine objects stored on the VVOL.

Without VVOL, we are limited by how many storage objects (e.g. LUNs) can be directly accessed. The FC/SCSI protocol specification allows one byte for a LUN number. One byte is 8 bits, which gives 2^8 = 256 possible values, or in our case, 256 addresses. Therefore, with a one-byte field, the FC/SCSI specification allows 256 LUNs (LUN 0 through LUN 255).

Also, per the vSphere 6 Configuration Maximums document, vSphere 6 supports up to 256 LUNs per server. The Protocol Endpoint provides a workaround for these limitations, enabling the addressing of more than 256 virtual machine objects from the storage system.

The protocol endpoint (PE) manages or directs I/O received from the VM, enabling scaling across many virtual volumes by leveraging the multipathing of the PE (which is inherited by the VVOLs). “If using iSCSI, FC, FCoE or other SAN interface, then the PE works on a LUN basis (again not actually storing data), and if using NAS NFS, then with a mount point. Key point is that the PE gets out-of-the-way.”

A PE represents the I/O access point for a Virtual Volume. When a Virtual Volume is created, it is not immediately accessible for I/O. To access a Virtual Volume, vSphere needs to issue a “Bind” operation to the VASA Provider (VP), which creates an I/O access point for the Virtual Volume on a PE chosen by the VP.

“All paths and policies are administered by Protocol Endpoints. They are compliant with iSCSI and NFS and are intended to replace the concept of LUNs and mount points.”

Protocol Endpoint:
  • Represents the access point from the host to the storage system
  • Is an administrative LUN.
  • Shows up as LUN 256 in vSphere with a size of 512 bytes.
  • Acts as an interface to all the virtual volumes in the array.
  • Enables getting around the 256 LUN limit in vSphere.
  • Are created by storage administrators
  • Protocol Endpoints are compliant with both iSCSI and NFS
  • They are intended to replace the concept of LUNs and mount points
  • Can be mounted or discovered by multiple hosts
  • Each PE is associated with exactly one array
  • An array can be associated with multiple PEs
  • For block arrays, PEs will be a special LUN (LUN ID 256)
  • For NFS arrays, PEs are regular mountpoints.
The purpose of the protocol endpoint devices is as an I/O proxy for communication between the virtual machine and the VVOLs. Different storage transports can be used to expose protocol endpoints to ESXi. In the case of block storage a PE represents a proxy LUN, and in the case of the NFS protocol the PE is a share or mount-point.

PE is used to establish a data path from the VMs to their respective VVols on demand.  Once a VVol has been bound via the PE, the I/O goes through the PE to the VVol.

ESXi hosts do not have direct access to virtual volumes; they use Protocol Endpoints to communicate with vVols on the storage side. Consider protocol endpoints as logical I/O demultiplexers or proxies that communicate with vVols on behalf of the ESXi host. Protocol endpoints establish a data path from the virtual machines to their respective virtual volumes on demand.

Each virtual volume is bound to a protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. A storage system requires one or a very small number of protocol endpoints, depending on the array’s implementation.

A single protocol endpoint can connect hundreds or thousands of virtual volumes.

Protocol endpoints are a part of the physical storage fabric and are exported, along with associated storage containers, by the storage system through a storage provider. After you map a storage container to a virtual datastore, protocol endpoints are discovered by ESXi and become visible in the vSphere Web Client.

Workflow:
The vSphere administrator maps a storage container to a virtual datastore. Then ESXi discovers Protocol Endpoint(s) that correspond to the container and stores them in a database. The protocol endpoints then become visible in the vSphere Web Client.

A single PE can connect to hundreds or thousands of vVols.
“VM sends I/O, PE directs it to vVol.”
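A conceptual Python sketch of that flow (class and method names are assumptions, not a vSphere API): the PE holds the bindings created by the VASA provider and routes each VM I/O to the bound vVol.

# Illustration only: a protocol endpoint acting as an I/O proxy that
# routes a VM's I/O to the virtual volume it has been bound to.

class ProtocolEndpoint:
    def __init__(self):
        self._bindings = {}            # vvol_id -> backing object on the array

    def bind(self, vvol_id, backing):
        """Bind step: the VASA provider creates an access point on this PE."""
        self._bindings[vvol_id] = backing

    def submit_io(self, vvol_id, operation):
        """Direct I/O received from a VM to the bound virtual volume."""
        if vvol_id not in self._bindings:
            raise LookupError(f"vVol {vvol_id} is not bound to this PE")
        return f"{operation} -> {self._bindings[vvol_id]}"

pe = ProtocolEndpoint()
pe.bind("vvol-data-01", "array-backing-4711")
print(pe.submit_io("vvol-data-01", "READ 64KB"))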


PE is analogous to the NFS portmap

Reference:

vSphere Storage Terminologies - T10

T10

"T10 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights"). INCITS is accredited by, and operates under rules that are approved by, the American National Standards Institute (ANSI). These rules are designed to ensure that voluntary standards are developed by the consensus of industry groups."

Reference:

vSphere Storage Terminologies - API

API

An application programming interface (API) is code that allows two software programs to communicate with each other.

The API defines the correct way for a developer to write a program that requests services from an operating system (OS) or other application. APIs are implemented by function calls composed of verbs and nouns. The required syntax is described in the documentation of the application being called.

"An API is a software intermediary that makes it possible for application programs to interact with each other and share data.

An API can be made available as a set of libraries, functions, protocols or remote calls available for the programmer or API consumer. It is used when building/enhancing software. It allows the programmer to reuse instead of recreating specific application functions.

Examples of APIs include header (include) files in the C programming language, the x86 instruction set, Web services such as Facebook’s Graph API, and classes and methods in the Ruby core library.

Reference:

May 31, 2015

vSphere Storage Terminologies - iSCSI

iSCSI

  • Protocol that uses TCP to create a SAN and transports SCSI traffic over an existing TCP/IP network
  • Facilitates data transfers by carrying SCSI commands over an IP network
  • Presents SCSI targets and devices to iSCSI initiators (requesters)
  • Requires less specialized, lower-cost hardware than Fibre Channel and Fibre Channel over Ethernet
  • The host requires a standard network adapter
  • The datastore format supported on iSCSI storage is VMFS
  • The iSCSI targets use iSCSI names
  • Used mostly on local area networks (LAN)
  • Can also be used on wide area networks (WAN) with the use of tunneling protocols
Internet Small Computer System Interface (iSCSI) packages SCSI storage traffic into the TCP/IP protocol for transmission through standard TCP/IP networks instead of the specialized FC network.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.

An iSCSI SAN uses a client-server architecture and consists of two types of equipment: initiators and targets. The iSCSI initiator, known as the client, operates on the host and is a data consumer. It initiates iSCSI sessions by issuing SCSI commands and transmitting them, encapsulated in the iSCSI protocol, to a server.

The iSCSI target represents a physical storage system on the network. Targets (e.g. disk arrays or tape libraries) are data providers. The iSCSI target responds to the initiator's commands by transmitting the requested iSCSI data.

ESXi offers two types of iSCSI connections: Software iSCSI and Hardware iSCSI. Hardware iSCSI is further divided into Hardware Dependent iSCSI and Hardware Independent iSCSI. Which option you choose will depend on performance, cost and flexibility considerations. It also depends on whether you want to offload the iSCSI processing from the host (VMkernel) to the adapter and whether you want the host to handle discovery.

Software iSCSI – host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, the host needs only a standard network adapter for network connectivity.

A software iSCSI adapter is VMware code built into the VMkernel. It handles iSCSI processing and enables your host to connect to the iSCSI storage device through standard network adaptors.

"Hardware iSCSI – host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing from the hosts’ CPU. Hardware adapters can be dependent and independent.”

The Dependent Hardware iSCSI Adapter depends on the VMware networking, iSCSI configuration and management interfaces. This type of adapter presents both standard network adapter and iSCSI offload functionality. The iSCSI offload functionality depends on the host’s network configuration to obtain the IP and MAC addresses, as well as other parameters used for iSCSI sessions.

“The Independent Hardware iSCSI Adapter implements its own networking and iSCSI configuration and management interfaces. An example of an independent hardware iSCSI adapter is a card that presents either iSCSI offload functionality only or iSCSI offload functionality and standard NIC functionality. The iSCSI offload functionality has independent configuration management that assigns the IP address, MAC address, and other parameters used for the iSCSI sessions.”

The choices are:
  • Software iSCSI:
    • Uses standard NICs to connect your host to a remote iSCSI target on the IP network
    • VMkernel networking required
    • The VMkernel provides for the discovery of the LUNs as well as for the TOE.
    • The disadvantage of Software iSCSI is that CPU cycles of the ESXi host are used to manage iSCSI transactions. The VMkernel is doing all of the work.
    • Software initiators allow for options such as bidirectional Challenge Handshake Authentication Protocol (CHAP) and per-target CHAP.
  • Hardware iSCSI
    • Dependent hardware iSCSI initiator:
      • Third-party adapter that depends on VMware networking and iSCSI configuration and management interfaces
        • The specialized iSCSI HBA card provides for the TOE, however discovery of the LUN is done by the VMkernel. Dependent hardware iSCSI takes some (not all) of the work off the VMkernel and CPU of the host.
        • vSwitch VMkernel ports are required for this type of card.
    • Independent hardware iSCSI initiator:
      • Third-party adapter that offloads the iSCSI and network processing and management from your host
      • The specialized NIC card provides for the TOE as well as discovery of the LUN. This completely removes the responsibility from the VMkernel and from the processors on the host
      • vSwitch VMkernel ports are not required for this type of card
Two processes have to take place to create effective iSCSI storage:
  • Discovery: The process of the host finding the iSCSI storage and identifying the LUNs that are presented.
  • TCP offload: The process of deferring some of the management aspects of the TCP connection from the host’s CPU. The device or service that does this is referred to as the TCP Offload Engine (TOE).
A discovery session is part of the iSCSI protocol, and it returns the set of targets you can access on an iSCSI storage system.

The two types of discovery available on ESXi are dynamic and static:
  • Dynamic discovery obtains a list of accessible targets from the iSCSI storage system
  • Static discovery can only try to access one particular target by target name and address
iSCSI node names are globally unique names that do not change when Ethernet adapters or IP addresses change. There are two name formats: Extended Unique Identifier (EUI) and the iSCSI Qualified Name (IQN):
  • EUI name example: eui.02004567A425678D
  • IQN name example: iqn.1998-01.com.vmware:tm-pod04-esx01-6129571c
iSCSI Naming Convention (iSCSI Qualified Name – IQN), using the example above:
  • iqn – hard-coded string
  • 1998-01.com.vmware – the date the domain was registered, followed by the reversed domain name
  • tm-pod04-esx01-6129571c – the name itself (can be changed to a “friendly name”)
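To make the breakdown concrete, here is a small Python sketch (illustration only; the helper name is mine) that splits the example IQN above into those parts:

# Split an iSCSI Qualified Name (IQN) into prefix, registration date,
# naming authority (reversed domain) and the freely chosen unique name.

def parse_iqn(iqn):
    prefix, rest = iqn.split(".", 1)              # "iqn", "1998-01.com.vmware:..."
    if prefix != "iqn":
        raise ValueError("not an IQN-format name")
    date_and_authority, _, unique_name = rest.partition(":")
    date, _, naming_authority = date_and_authority.partition(".")
    return {"prefix": prefix,
            "date": date,                          # yyyy-mm the domain was registered
            "naming_authority": naming_authority,  # reversed domain name
            "unique_name": unique_name}            # chosen by the naming authority

print(parse_iqn("iqn.1998-01.com.vmware:tm-pod04-esx01-6129571c"))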

Reference:

vSphere Storage Terminologies - Boot From SAN (BfS)

Boot From SAN (BFS)

"Traditionally, servers are configured to install the operating system on internal direct-attached storage devices. With external booting from HBAs or RAID arrays, server-based internal boot devices can be eliminated. Booting from an external device provides high-availability features for the operating system during the boot process by configuring the HBA BIOS with redundant boot paths"


Booting from the SAN provides:
  • Redundant storage paths
  • Disaster recovery
  • Improved security
  • Minimized server maintenance
  • Reduced impact on production servers
  • Reduced backup time
References

vSphere Storage Terminologies - WWN

WWN

"All the objects (initiators, targets, and LUNs) on a Fibre Channel SAN are identified by a unique 64-bit identifier called a worldwide name (WWN). WWNs can be worldwide port names (a port on a switch) or node names (a port on an endpoint). For anyone unfamiliar with Fibre Channel, this concept is simple. It's the same technique as Media Access Control (MAC) addresses on Ethernet."

An example CNA has the following worldwide node name : worldwide port name (WWNN : WWPN) pair in the identifier column:
20:00:00:25:b5:10:00:2c 20:00:00:25:b5:a0:01:2f
"Like Ethernet MAC addresses, WWNs have a structure. The most significant two bytes are used by the vendor (the four hexadecimal characters starting on the left) and are unique to the vendor, so there is a pattern for QLogic or Emulex HBAs or array vendors. In the previous example, these are Cisco CNAs connected to an EMC VNX storage array."

WWNN

World Wide Node Name (WWNN), is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric.

"With a single port HBA you have one WWN/WWNN (World Wide Node Name), and a single WWPN, and for multi-port cards you will still have a single WWN/WWNN"

Here is an example of single port HBA cards. Notice the WWNN, e.g. 20:00:00:00:C9:9F:0F:3A and the corresponding WWPN, 10:00:00:00:C9:9F:0F:3A


Here is an example of a dual port HBA card. Notice the single WWNN, e.g. 50:01:43:80:01:3B:DA:E0 and the two corresponding WWPNs, 50:01:43:80:01:3B:DA:E8 and 50:01:43:80:01:3B:DA:EC.

WWPN

A WWPN (world wide port name) is the unique identifier for a fibre channel port. It is a World Wide Name assigned to a port in a Fibre Channel fabric. It is functionally equivalent to the MAC address in Ethernet protocol.

In a single port HBA there is one WWNN and one WWPN.
For a dual port HBA, there is one WWNN and two WWPNs.

A WWN (world wide name) is a unique identifier for the node itself.


Reference:

vSphere Storage Terminologies - Naming Convention

Naming Convention

"The naming convention that vSphere uses to identify a physical storage location that resides on a local disk or on a SAN consists of several components.

The following are the three most common naming conventions for local and SAN and a brief description of each:
  • Runtime name
    • A runtime name is created by the host and is only relative to the installed adapters at the time of creation; it might be changed if adapters are added or replaced and the host is restarted.
    • Uses the convention vmhbaN:C:T:L, where:
      • vm stands for VMkernel
      • hba is host bus adapter
      • N is a number corresponding to the host bus adapter location (starting with 0)
      • C is channel, and the first connection is always 0 in relation to vSphere. (An adapter that supports multiple connections will have different channel numbers for each connection.)
      • T is target, which is a storage adapter on the SAN or local device
      • L is logical unit number (LUN)
  • Canonical name:  The Network Address Authority (NAA) ID that is a unique identifier for the LUN. This name is guaranteed to be persistent even if adapters are added or changed and the system is rebooted
  • SCSI ID:  The unique SCSI identifier that signifies the exact disk or disks that are associated with a LUN"
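A small Python sketch (the example runtime name is made up) that splits a vmhbaN:C:T:L runtime name into its adapter, channel, target and LUN components:

# Illustration only: parse a runtime name of the form vmhbaN:C#:T#:L#.

import re

RUNTIME_NAME = re.compile(r"^vmhba(\d+):C(\d+):T(\d+):L(\d+)$")

def parse_runtime_name(name):
    match = RUNTIME_NAME.match(name)
    if not match:
        raise ValueError(f"not a vmhbaN:C:T:L runtime name: {name}")
    adapter, channel, target, lun = map(int, match.groups())
    return {"adapter": adapter, "channel": channel, "target": target, "lun": lun}

print(parse_runtime_name("vmhba33:C0:T1:L5"))
# {'adapter': 33, 'channel': 0, 'target': 1, 'lun': 5}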

vSphere Storage Terminologies - FCoE

FCoE

The Fibre Channel over Ethernet (FCoE) protocol encapsulates Fibre Channel (FC) frames into Ethernet frames. With FCoE, network (IP) and storage (SAN) traffic can be consolidated onto a single network, and the ESXi host can use an existing 10 Gbit (or faster) lossless Enhanced Ethernet network to deliver Fibre Channel traffic.

FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface.

FCoE:
  • Is a storage protocol
  • Allows FC to use high speed IEEE 802.3 networks while preserving FC protocol
  • Is part of the International Committee for Information Technology Standards T11 FC-BB-5 standard
  • Operates directly above Ethernet in the network protocol stack
  • Is not routable at the IP layer, and will not work across routed IP networks
  • Is targeted at the enterprise user
  • ESXi supports a maximum of four software FCoE adapters on one host
  • In Hardware FCoE mode uses a specialized type of adapter called a converged network adapter (CNA)
  • In Software FCoE mode, uses a supported network adapter bound to a VMkernel port to carry Fibre Channel traffic
The main application of FCoE is in data center storage area networks (SANs).

vSphere supports two categories of FCoE adapters:
  • hardware FCoE adapters, a converged network adapter (CNA)
  • software FCoE adapters that use the native FCoE stack in ESXi.
The hardware FCoE adapter category includes completely offloaded, specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionality on the same card. The main advantage is the ability to consolidate networking and storage onto the Ethernet network; you do not have to support a separate Fibre Channel fabric and Ethernet network.

Software FCoE adapter uses the native FCoE protocol stack in ESXi for the protocol processing. The software FCoE adapter is used with a NIC that offers Data Center Bridging (DCB) and I/O offload capabilities. This adapter allows you to access LUNs over FCoE without needing a dedicated HBA or third party FCoE drivers installed on the ESXi host.

In the vSphere web client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as a FCoE adapter (vmhba).

Fibre Channel over Ethernet (FCoE) is an encapsulation of Fibre Channel frames so they can be sent over Ethernet networks.
Converged Enhanced Ethernet (CEE), also known as Enhanced Ethernet, Data Center Ethernet or Data Center Bridging (DCB), eliminates the Layer 3 TCP/IP protocols in favor of native Layer 2 Ethernet.

Reference:

vSphere Storage Terminologies - Fibre Channel

Fibre Channel

Fibre Channel (FC):
  • The host requires a host bus adapter (HBA)
  • The datastore format supported on FC storage is VMFS
  • Fibre Channel targets use World Wide Names (WWNs)
  • Connection speed up to 16 Gbps in vSphere
  • Fibre Channel has a lower overhead than TCP/IP
  • An ANSI X3.230-1994 and ISO 14165-1 standard
Fibre Channel is a technology for transmitting data, used primarily in storage area networks (SANs). Fibre Channel traffic can be carried over fiber-optic, twisted-pair copper, or coaxial cable.

“Fibre Channel allows for an active intelligent interconnection scheme, called a Fabric, to connect devices. All a Fibre channel port has to do is to manage a simple point-to-point connection between itself and the Fabric.”

Fibre Channel maximums:
  • LUNs per host – 256
  • LUN size – 64 TB
  • LUN ID – 1023
  • Number of paths to a LUN – 32
  • Number of total paths on a server – 1024
  • Number of HBAs of any type – 8
  • HBA ports – 16
  • Targets per HBA – 256
Table 1 - Storage Maximums

Reference:

vSphere Storage Terminologies - HBA

HBA

A Host Bus Adapter (HBA) is an I/O adapter that provides physical connectivity between a host (e.g. ESXi host) and a storage device or network. HBAs also handle the task of processing the I/O, offloading the work from the host computer. Similar in concept to a network interface card, the HBA on the host connects to a target port on the storage layer.

In the context of the storage area network (SAN), the host-side HBA is called an initiator and the storage-side port is the target.

HBAs are commonly used to transmit Fibre Channel (FC) traffic. They are also used to connect SCSI, iSCSI and SATA devices. Converged network adapters (CNAs) combine the functionality of an FC HBA and a TCP/IP Ethernet network interface card (NIC).

There are three types of HBA you can use on an ESXi host: Ethernet (iSCSI), Fibre Channel, and Fibre Channel over Ethernet (FCoE). Note: A FCoE HBA is technically a converged network adapter (CNA) or a NIC with FCoE support (software FCoE).

In addition to the hardware adapters, software versions of the iSCSI and FCoE adapters are available. The maximum number of HBAs of any type that can be installed in an ESXi host is 8. In addition, the maximum number of HBA ports per ESXi host is 16.

An HBA manages the transfer of information between the host computer and a storage device using either the SCSI or Fibre Channel protocol.

There are four main types of storage adapters:
  •  Fibre Channel
  •  Fibre Channel over Ethernet (FCoE)
  •  iSCSI
  • Network-attached storage (NAS)
HBA and Virtualization

With virtualization, a single host can support multiple virtual machines (and guest operating systems). If that host has only one or a few HBA cards, the challenge becomes how to allow the many applications running on that host to share the limited HBA resource… cue NPIV or N_Port ID Virtualization.

NPIV is a Fibre Channel technology that enables the creation of multiple logical/virtual ports (and port IDs) from a single physical port (and port ID). With NPIV each virtual machine on a host can be assigned its own virtual port ID. It effectively virtualizes the HBA resource.

NPIV defines how multiple virtual machines can share a single physical port ID.

Reference:

vSphere Storage Terminologies - NAS

NAS

Network-attached Storage (NAS)


"Network-attached storage (NAS) is file-level data storage provided by a computer that is specialized to provide not only the data but also the file system for the data."

NAS
  • An NFS client is built into the ESXi host
  • The NFS client uses the Network File System (NFS) protocol to communicate with the NAS/NFS servers
  • The host requires a standard network adapter
  • vSphere supports NFS v3 and v4.1 over TCP on NAS
  • The datastore type on NFS storage is NFS; the NAS device provides its own file system, so VMFS is not used