
Posts

Showing posts from May 31, 2015

vSphere Storage Terminologies - vFRC - vSphere Flash Read Cache

vSphere Flash Read Cache (vFRC)

vSphere Flash Read Cache enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource. Using vSphere Flash Read Cache, solid-state caching space can be assigned to VMs in much the same way that CPU cores, RAM, or network connectivity is assigned to VMs.

vFRC:
- Introduced in vSphere 5.5
- Reduces latency
- Leverages local SSDs (solid-state drives) or PCIe flash devices to create a "Flash Pool" for VMs
- Locally caches virtual machine read I/O on an ESXi host
- Offloads I/O from the SAN to the local SSD
- Enables virtualization of previously hard-to-virtualize, I/O-intensive applications
- Requires Enterprise Plus licensing
- The Flash Pool is managed as a resource, similar to CPU and storage
- Does not cache writes; performs only "write-through" caching (see the sketch after this list)

vFRC components include the Flash Pool and the Flash Cache Module:
- Flash Pool ("Virtual Flash Cache"): a pool of SSD or PCIe flash…
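
To make the write-through behavior concrete, here is a minimal Python sketch of a write-through read cache: reads are served from local flash when possible, while writes always land on the backing SAN store. The class and its dictionaries are purely illustrative assumptions, not any VMware API.

    # Minimal write-through read-cache sketch (illustrative only, not a VMware API).
    class WriteThroughCache:
        def __init__(self, backing_store):
            self.backing_store = backing_store  # dict standing in for the SAN datastore
            self.cache = {}                     # dict standing in for the local flash pool

        def read(self, block):
            # Serve repeated reads from flash; fall through to the SAN on a miss.
            if block in self.cache:
                return self.cache[block]        # cache hit: low-latency local read
            data = self.backing_store[block]    # cache miss: read from the SAN
            self.cache[block] = data            # populate the cache for future reads
            return data

        def write(self, block, data):
            # Write-through: writes are never acknowledged from cache alone;
            # they always land on the backing store, keeping the SAN authoritative.
            self.backing_store[block] = data
            self.cache[block] = data            # keep the cached copy consistent

    san = {0: b"boot", 1: b"data"}
    vfrc = WriteThroughCache(san)
    vfrc.read(1)           # first read comes from the SAN and populates flash
    vfrc.read(1)           # second read is a cache hit
    vfrc.write(1, b"new")  # the write goes through to the SAN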

vSphere Storage Terminologies - IOPS

IOPS

Performance is represented in storage products by three statistics: throughput, latency, and IOPS.

Throughput is the speed of data transfer into or out of the storage device. It is a measure of the amount of data that can be pushed through a point in the data path in a given amount of time.
- Expressed as bytes (kilobytes or megabytes) per second in a storage environment
- The higher the value, the better

Throughput: a measure of the data transfer rate, or I/O throughput, measured in bytes per second or megabytes per second (MBps).

Latency is a measure of how long it takes for an I/O transaction to begin, from the requesting application's viewpoint.
- Measured in fractions of a second
- The smaller the latency number, the better

Latency: a measure of the time taken to complete an I/O request, also known as response time. This is frequently measured in milliseconds (one thousandth of a second).

IOPS is a measure of the number of storage transactions processed…
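
The three statistics are related: for a given average I/O size, throughput is roughly IOPS multiplied by that size, and average latency follows from the queue depth via Little's Law. A quick Python sketch with made-up numbers:

    # Relationship between IOPS, I/O size, and throughput (illustrative numbers).
    iops = 5000                 # storage transactions per second
    io_size_kb = 8              # average I/O request size in KB

    throughput_mbps = iops * io_size_kb / 1024    # MB per second
    print(f"{throughput_mbps:.1f} MB/s")          # 5000 * 8 KB = ~39.1 MB/s

    # Little's Law: outstanding I/Os = IOPS * average latency.
    outstanding = 10            # queue depth
    latency_ms = outstanding / iops * 1000
    print(f"{latency_ms:.1f} ms average latency") # 10 / 5000 = 2.0 ms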

vSphere Storage Terminologies - NPIV

NPIV

With Fibre Channel (FC), nodes connect to each other via FC ports in order to exchange information. The following are some of the ports available on an FC network:

- N_Port: a port on the Fibre Channel fabric. It is an end-node port, used to connect a node to the fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage device. N_Ports connect to the FC switch using an F_Port.
- F_Port: a port on a Fibre Channel switch that is connected to an N_Port. It is used to connect an N_Port point-to-point to a switch, and it makes use of the 24-bit port address. The F_Port is the port into which a server's HBA or a storage array's target port is connected.
- E_Port: an expansion (or extension) port, used to connect switches. Switches connect to one another using an inter-switch link (ISL); the ports at either end of the ISL are E_Ports.

Other ports de…
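
The 24-bit port address mentioned above is conventionally split into three one-byte fields: Domain ID (the switch), Area ID, and Port ID. A short Python sketch decoding an example FCID (the address value itself is made up):

    # Decode a 24-bit Fibre Channel port address (FCID) into its three
    # one-byte fields. The example value is made up.
    fcid = 0x010A2C

    domain = (fcid >> 16) & 0xFF   # identifies the switch in the fabric
    area   = (fcid >> 8) & 0xFF    # identifies a group of ports on that switch
    port   = fcid & 0xFF           # identifies the individual N_Port

    print(f"FCID {fcid:06x}: domain={domain}, area={area}, port={port}")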

vSphere Storage Terminologies - Storage Containers

Storage Containers

For management purposes, storage systems group virtual volumes (VVols) into storage containers. Storage containers can be used to isolate or logically partition virtual machines with diverse needs and requirements.

"Storage container is logical abstraction to which Virtual Volumes are mapped. vSphere will map storage container to datastore (of type Virtual Volume) and provide applicable datastore level functionality."

Storage containers are set up on the storage side by the storage administrator, who controls their size and number. Unlike a physical LUN, a storage container can be extended in size, and storage containers do not need a file system. Another advantage of storage containers is that they allow storage capabilities to be applied on a per-VM basis instead of on a per-LUN basis. As a result, different VMs in a storage container can have different storage capabilities. There is a limit of 256 storage containers per…

vSphere Storage Terminologies - VVOL

VVols "The VVols architecture is part of the VMware VASA 2.0 specification, which defines a new architecture for VM-level storage array abstraction. VASA 2.0 includes new interfaces to query storage policies to enable VMware’s Storage Policy Based Management (SPBM) to make intelligent decisions about virtual disk placement and compliance." " VMware Virtual Volumes (VVols) provides VM-level granularity by introducing a 1:1 mapping of VM objects to storage volumes  and supports policy-based management to simplify storage management in virtualized server environments. ” Prior to VVols, storage arrays integrated with vSphere at the datastore level using VMware’s Virtual Machine File System (VMFS). With Vvols, there is no need for a file system and going forward, users can choose to use VMFS or VVols (or both). VVOL: Virtualizes SAN and NAS devices Virtual disks are natively represented on arrays Enables VM granular storage operations

vSphere Storage Terminologies - VASA

VASA

vSphere Storage APIs - Storage Awareness (VASA) is one of a family of APIs used by third-party hardware, software, and storage providers to develop components that enable storage arrays to expose their capabilities, configuration, health, and events to vCenter Server. The following are the Storage APIs in this family:

- Storage APIs - Multipathing, also known as the Pluggable Storage Architecture (PSA)
- Storage APIs - Array Integration, formerly known as VAAI
- Storage APIs - Storage Awareness
- Storage APIs - Data Protection
- Storage APIs - Site Recovery Manager

You will find Storage APIs - Storage Awareness referred to by other names, e.g. vStorage APIs for Storage Awareness, VASA, or vSphere APIs for Storage Awareness. It enables more advanced, out-of-band communication between storage arrays and the virtualization layer. Storage APIs - Storage Awareness (VASA) is an example of a vCenter Server-based API. It enables storage…

vSphere Storage Terminologies - VAAI

VAAI

The vSphere Storage APIs - Array Integration, formerly/commonly known as VAAI, are a feature introduced in ESXi/ESX 4.1 that provides hardware acceleration functionality. Storage APIs - Array Integration (VAAI) enables the VMware hypervisor to offload specific virtual machine and storage management operations to storage hardware. With assistance from the storage hardware, the host performs storage operations faster and consumes less CPU, memory, and storage fabric bandwidth.

"In the vSphere 5.x and later releases, the ESXi extensions (referred to as Storage APIs - Array Integration) are implemented as T10 SCSI-based commands. As a result, with devices that support the T10 SCSI standard, the ESXi host can communicate directly with storage hardware and does not require the VAAI plug-ins."

Without T10 SCSI support, ESXi reverts to using the VAAI plug-ins installed on your host. The VAAI plug-ins are vendor-specific and are provided either by VMware or…
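
Per-device hardware-acceleration status can be checked on an ESXi host with esxcli. A minimal Python wrapper is sketched below; it assumes it runs where esxcli is on the PATH (i.e., the ESXi shell) with Python 3.7+, and the naa.* device identifier is a placeholder.

    # Sketch: query VAAI status for one device using esxcli.
    # Assumes esxcli is available locally; the device ID is a placeholder.
    import subprocess

    device = "naa.600508b1001c577e11dd042b6a3c19f5"  # placeholder device ID
    result = subprocess.run(
        ["esxcli", "storage", "core", "device", "vaai", "status", "get",
         "-d", device],
        capture_output=True, text=True, check=True)
    print(result.stdout)   # reports ATS / Clone / Zero / Delete support status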

vSphere Storage Terminologies - Protocol Endpoint

Protocol Endpoint

The Protocol Endpoint (PE) provides the ESXi hypervisor access and visibility to the virtual machine objects (VMDK, VMX, swap, etc.) on the VVol. It acts as an I/O proxy, managing the traffic between the virtual machines and the virtual machine objects stored on the VVol. Without VVols, we are limited by how many storage objects (e.g. LUNs) can be directly accessed. The FC/SCSI protocol specification allows one byte for a LUN number. One byte is equal to 8 bits, written as 2^8, which represents 256 units of information, or in our case, 256 addresses. Therefore, with a one-byte header field, the FC/SCSI specification allows 256 LUNs (LUN 0 through LUN 255). Also, per the vSphere 6 Configuration Maximums document, vSphere 6 supports up to 256 LUNs per server. The Protocol Endpoint provides a workaround for these limitations, enabling the addressing of more than 256 virtual machine objects from the storage system. The protocol endpoint (PE) manages or directs…
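
The addressing arithmetic is easy to sanity-check in Python:

    # One byte = 8 bits, so a one-byte LUN field can address 2**8 values.
    bits = 8
    addresses = 2 ** bits
    print(addresses)                             # 256
    print(f"LUN 0 through LUN {addresses - 1}")  # LUN 0 through LUN 255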

vSphere Storage Terminologies - T10

T10 "T10 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights"). INCITS is accredited by, and operates under rules that are approved by, the American National Standards Institute (ANSI). These rules are designed to ensure that voluntary standards are developed by the consensus of industry groups." Reference: About SCSI — T10 & Specifications

vSphere Storage Terminologies - API

API

An application programming interface (API) is code that allows two software programs to communicate with each other. The API defines the correct way for a developer to write a program that requests services from an operating system (OS) or other application. APIs are implemented by function calls composed of verbs and nouns. The required syntax is described in the documentation of the application being called.

"An API is a software intermediary that makes it possible for application programs to interact with each other and share data."

An API can be made available as a set of libraries, functions, protocols, or remote calls available to the programmer or API consumer. It is used when building or enhancing software; it allows the programmer to reuse specific application functions instead of recreating them. Examples of APIs include header (include) files in the C Programming Language, the x86 instruction set, web services such as Facebook's Graph API, and classes and method…
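
As a toy illustration of the "library as API" idea, the sketch below defines a minimal API (a function with a documented contract) and a consumer that reuses it rather than reimplementing it. All names here are hypothetical:

    # A toy API: a documented function contract that callers can rely on.
    def resize_disk(disk_id: str, new_size_gb: int) -> dict:
        """Request a disk resize; returns a status dictionary.

        The name, parameters, and return shape are the 'contract' that the
        API documentation would promise to consumers.
        """
        return {"disk": disk_id, "size_gb": new_size_gb, "status": "ok"}

    # A consumer reuses the API instead of reimplementing the operation.
    print(resize_disk("disk-001", 40))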

vSphere Storage Terminologies - iSCSI

iSCSI

iSCSI:
- Is a protocol that uses TCP to create a SAN and transports SCSI traffic over an existing TCP/IP network
- Facilitates data transfers by carrying SCSI commands over an IP network
- Presents SCSI targets and devices to iSCSI initiators (requesters)
- Requires fewer and lower-cost hardware components than Fibre Channel and Fibre Channel over Ethernet
- The host requires only a standard network adapter
- The datastore format supported on iSCSI storage is VMFS
- iSCSI targets use iSCSI names
- Is used mostly on local area networks (LANs)
- Can also be used on wide area networks (WANs) with the use of tunneling protocols

Internet Small Computer System Interface (iSCSI) packages SCSI storage traffic into the TCP/IP protocol for transmission through standard TCP/IP networks instead of the specialized FC network.

"iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows the SCSI command to be sent end-to-end over l…
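
Because iSCSI rides on TCP, basic reachability of a target portal can be tested with an ordinary socket on the standard iSCSI port, 3260. A sketch (the target address below is a placeholder):

    # Sketch: check TCP reachability of an iSCSI target portal.
    # 3260 is the registered iSCSI port; the target IP is a placeholder.
    import socket

    target = ("192.0.2.10", 3260)   # placeholder portal address
    try:
        with socket.create_connection(target, timeout=3):
            print("iSCSI portal reachable over TCP")
    except OSError as exc:
        print(f"Cannot reach portal: {exc}")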

vSphere Storage Terminologies - Boot From SAN (BfS)

Boot From SAN (BfS)

"Traditionally, servers are configured to install the operating system on internal direct-attached storage devices. With external booting from HBAs or RAID arrays, server-based internal boot devices can be eliminated. Booting from an external device provides high-availability features for the operating system during the boot process by configuring the HBA BIOS with redundant boot paths."

Booting from the SAN provides:
- Redundant storage paths
- Disaster recovery
- Improved security
- Minimized server maintenance
- Reduced impact on production servers
- Reduced backup time

References: HP Boot from SAN Configuration Guide

vSphere Storage Terminologies - WWN

WWN "All the objects (initiators, targets, and LUNs) on a Fibre Channel SAN are identified by a unique 64-bit identifier called a worldwide name (WWN). WWNs can be worldwide port names (a port on a switch) or node names (a port on an endpoint). For anyone unfamiliar with Fibre Channel, this concept is simple. It's the same technique as Media Access Control (MAC) addresses on Ethernet." An example CNA has the following worldwide node name: worldwide port name (WWnN: WWpN) in the identifier column: 20:00:00:25:b5:10:00:2c 20:00:00:25:b5:a0:01:2f "Like Ethernet MAC addresses, WWNs have a structure. The most significant two bytes are used by the vendor (the four hexadecimal characters starting on the left) and are unique to the vendor, so there is a pattern for QLogic or Emulex HBAs or array vendors. In the previous example, these are Cisco CNAs connected to an EMC VNX storage array." WWNN World Wide Node Name (WWNN), is a World Wide Name as

vSphere Storage Terminologies - Naming Convention

Naming Convention "The naming convention that vSphere uses to identify a physical storage location that resides on a local disk or on a SAN consists of several components. The following are the three most common naming conventions for local and SAN and a brief description of each: Runtime name A runtime name is created by the host and is only relative to the installed adapters at the time of creation; it might be changed if adapters are added or replaced and the host is restarted. Uses the convention vm hba N : C : T : L , where: vm stands for VMkernel hba is host bus adapter N is a number corresponding to the host bus adapter location (starting with 0) C is channel, and the first connection is always 0 in relation to vSphere. (An adapter that supports multiple connections will have different channel numbers for each connection.) T is target, which is a storage adapter on the SAN or local device L is logical unit number(LUN) Canonical name : 

vSphere Storage Terminologies - FCoE

FCoE

The Fibre Channel over Ethernet (FCoE) protocol encapsulates Fibre Channel (FC) frames into Ethernet frames. With FCoE, network (IP) and storage (SAN) traffic can be consolidated onto a single network, and the ESXi host can use an existing 10 Gbit (or higher) lossless Enhanced Ethernet network to deliver Fibre Channel traffic.

"FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface."

FCoE:
- Is a storage protocol
- Allows FC to use high-speed IEEE 802.3 networks while preserving the FC protocol
- Is part of the International Committee for Information Technology Standards T11 FC-BB-5 standard
- Operates directly above Ethernet in the network protocol stack
- Is not routable at the IP layer, and will not work across routed IP networks
- Is targeted at the enterprise user
- ESXi supports a maximum of four software FCoE adapters on one host

In…
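
Because FCoE sits directly above Ethernet, an FCoE frame is an ordinary Ethernet frame carrying the FCoE EtherType (0x8906). The sketch below packs just such an Ethernet header; the MAC addresses are placeholders, and a real frame would carry an FCoE header, the encapsulated FC frame, and a trailer after this header.

    # Pack a bare Ethernet header with the FCoE EtherType (0x8906).
    # MACs are placeholders; the FCoE payload would follow the header.
    import struct

    dst_mac = bytes.fromhex("0efc00010203")   # placeholder destination MAC
    src_mac = bytes.fromhex("0efc000a0b0c")   # placeholder source MAC
    ETH_TYPE_FCOE = 0x8906                    # FCoE EtherType per FC-BB-5

    header = struct.pack("!6s6sH", dst_mac, src_mac, ETH_TYPE_FCOE)
    print(header.hex())   # 14-byte Ethernet header; payload would follow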

vSphere Storage Terminologies - Fibre Channel

Fibre Channel

Fibre Channel (FC):
- The host requires a host bus adapter (HBA)
- The datastore format supported on FC storage is VMFS
- Fibre Channel targets use World Wide Names (WWNs)
- Connection speeds of up to 16 Gbps in vSphere
- Fibre Channel has lower overhead than TCP/IP
- An ANSI X3.230-1994 and ISO 14165-1 standard

Fibre Channel is a technology for transmitting data, used primarily on storage area networks (SANs). Fibre Channel traffic can be carried on fiber-optic, twisted-pair copper, or coaxial cable.

"Fibre Channel allows for an active intelligent interconnection scheme, called a Fabric, to connect devices. All a Fibre Channel port has to do is to manage a simple point-to-point connection between itself and the Fabric."

Fibre Channel maximums:
- LUNs per host: 256
- LUN size: 64 TB
- LUN ID: 1023
- Number of paths to a LUN: 32
- Number of total paths on a server: 1024
- Number…

vSphere Storage Terminologies - HBA

HBA

A Host Bus Adapter (HBA) is an I/O adapter that provides physical connectivity between a host (e.g. an ESXi host) and a storage device or network. HBAs also handle the task of processing the I/O, offloading that work from the host computer. Similar in concept to a network interface card, the HBA on the host connects to a target port on the storage layer. In the context of a storage area network (SAN), the host-side HBA is called the initiator and the storage-side port is the target. HBAs are commonly used to transmit Fibre Channel (FC) traffic. They are also used to connect SCSI, iSCSI, and SATA devices. Converged network adapters (CNAs) combine the functionality of an FC HBA and a TCP/IP Ethernet network interface card (NIC). There are three types of HBA you can use on an ESXi host: Ethernet (iSCSI), Fibre Channel, and Fibre Channel over Ethernet (FCoE). Note: an FCoE HBA is technically a converged network adapter (CNA) or a NIC with FCoE support (software FCoE). In addition…
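
To see which HBAs a host presents, pyVmomi exposes them under the host's storage device info. The sketch below is a hedged illustration; the hostname and credentials are placeholders, and property details may differ slightly across vSphere versions.

    # Sketch: list an ESXi host's HBAs via pyVmomi (placeholders for host/creds).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
    si = SmartConnect(host="esxi.example.com", user="root",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            for hba in host.config.storageDevice.hostBusAdapter:
                # Subclasses cover Fibre Channel, iSCSI, and block HBAs.
                print(host.name, hba.device, type(hba).__name__, hba.model)
        view.Destroy()
    finally:
        Disconnect(si)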

vSphere Storage Terminologies - NAS

NAS

Network-attached Storage (NAS)

"Network-attached storage (NAS) is file-level data storage provided by a computer that is specialized to provide not only the data but also the file system for the data."

NAS:
- An NFS client is built into the ESXi host
- The NFS client uses the Network File System (NFS) protocol to communicate with NAS/NFS servers
- The host requires a standard network adapter
- vSphere supports NFS v3 and v4.1 over TCP on NAS
- The datastore format on NFS storage is NFS (the array provides the file system, so VMFS is not used)
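
A hedged sketch of attaching an NFS export as a datastore through pyVmomi; every hostname, path, and credential below is a placeholder, and spec fields may vary by vSphere version.

    # Sketch: mount an NFS export as a datastore via pyVmomi (placeholders only).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
    si = SmartConnect(host="esxi.example.com", user="root",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]                  # first (or only) host in the view
        view.Destroy()

        spec = vim.host.NasVolume.Specification(
            remoteHost="nfs.example.com",    # NFS server (placeholder)
            remotePath="/export/vmstore",    # exported path (placeholder)
            localPath="nfs-datastore-01",    # datastore name seen by vSphere
            accessMode="readWrite")
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
    finally:
        Disconnect(si)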