

Showing posts from May 24, 2015

vSphere Storage Terminologies - RAID

RAID - Redundant Array of Independent Disks

"In spite of the technological wonder of hard disks, they do fail—and fail predictably. RAID schemes address this by leveraging multiple disks together and using copies of data to support I/O until the drive can be replaced and the RAID protection can be rebuilt. Each RAID configuration tends to have different performance characteristics and different capacity overhead impact."

The goal of RAID is to increase disk performance, disk redundancy, or both. "The performance increase is a function of striping: data is spread across multiple disks to allow reads and writes to use all the disks' I/O queues simultaneously."

"It really is a technological miracle that magnetic disks work at all. What a disk does all day long is analogous to a pilot flying a 747 at 600 miles per hour 6 inches off the ground and reading pages in a book while doing it!"

RAID-0 (Striping with no parity) …
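As a rough illustration of the capacity overhead trade-offs mentioned above, here is a minimal sketch (plain Python; the function and formulas are a generic illustration, not taken from the quoted source) that computes usable capacity for a few common RAID levels:

```python
# Hedged sketch: usable capacity for common RAID levels,
# given n identical disks of disk_size (same units in, same units out).
def raid_usable_capacity(level: int, n: int, disk_size: float) -> float:
    if level == 0:                 # striping, no redundancy
        return n * disk_size
    if level == 1:                 # mirroring (n is typically 2)
        return disk_size
    if level == 5:                 # striping + single distributed parity
        if n < 3:
            raise ValueError("RAID-5 needs at least 3 disks")
        return (n - 1) * disk_size
    if level == 6:                 # striping + double distributed parity
        if n < 4:
            raise ValueError("RAID-6 needs at least 4 disks")
        return (n - 2) * disk_size
    raise ValueError(f"unsupported RAID level: {level}")

# Example: eight 2 TB disks
for lvl in (0, 5, 6):
    print(f"RAID-{lvl}: {raid_usable_capacity(lvl, 8, 2.0):.0f} TB usable")
```

RAID-0 gives all 16 TB but no redundancy; RAID-5 and RAID-6 trade one and two disks' worth of capacity, respectively, for parity protection.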

vSphere Storage Terminologies - Identifiers

Identifiers

The following are definitions for some LUN identifiers and their conventions:

naa.<NAA>:<Partition>
eui.<EUI>:<Partition>

NAA or EUI
NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN. The NAA or EUI identifier is the preferred method of identifying LUNs, and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.

The <Partition> represents the partition number on the LUN or disk. If the <Partition> is specified as 0, it identifies the entire disk instead of only one partition. This identifier is generally used for operations with utilities such as vmkfstools.

Example: naa.6090a038f0cd4e5bdaa8248e6856d4fe:3 = Partition 3 of LUN naa.6090a038f0cd4e5bdaa8248e6856d4fe.

MPX
mpx.vm…
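To make the convention concrete, a small sketch that splits an identifier into its scheme, device ID, and partition; the helper name is hypothetical, not a vSphere utility:

```python
# Hedged sketch: split a LUN identifier like
# "naa.6090a038f0cd4e5bdaa8248e6856d4fe:3" into its parts.
def parse_lun_identifier(ident: str) -> dict:
    scheme, rest = ident.split(".", 1)           # "naa" or "eui"
    if ":" in rest:
        device_id, partition = rest.rsplit(":", 1)
        partition = int(partition)
    else:
        device_id, partition = rest, 0           # no suffix: the entire disk
    return {"scheme": scheme, "device_id": device_id, "partition": partition}

print(parse_lun_identifier("naa.6090a038f0cd4e5bdaa8248e6856d4fe:3"))
# {'scheme': 'naa', 'device_id': '6090a038f0cd4e5bdaa8248e6856d4fe', 'partition': 3}
```

Note that partition 0 maps to the whole disk, matching the convention described above.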

vSphere Storage Terminologies - LUN

LUN – Logical Unit Number – A single block storage allocation presented to a server. When a host scans the SAN device and finds a block device resource (LUN/disk), it assigns it a unique identifier, the logical unit number. The term disk is often used interchangeably with LUN. From the perspective of an ESX host, a LUN is a single unique raw storage block device or disk. "Though not technically correct, the term LUN is often also used to refer to the logical disk itself."

In a SAN, storage is allocated in manageable chunks, typically at the logical unit (LUN) level. These "logical units" are then presented to servers as disk volumes. A logical unit number (LUN) is a number used to identify a logical unit, which is a device addressed by the SCSI protocol or by a Storage Area Network protocol that encapsulates SCSI, such as Fibre Channel or iSCSI.

"To provide a practical example, a typical multi-disk drive has multiple physical SCSI ports, each with o…

vSphere Storage Terminologies - Local vs. Shared Storage

Local vs. shared storage

"An ESXi host can have one or more storage options actively configured, including the following:"
Local SAS/SATA/SCSI storage
Fibre Channel
Fibre Channel over Ethernet (FCoE)
iSCSI using software and hardware initiators
NAS (specifically, NFS)
InfiniBand

Many advanced vSphere features, such as vMotion, high availability (HA), the Distributed Resource Scheduler (DRS), and fault tolerance (FT), require shared storage. Local storage has limited use in a vSphere environment.

With vSphere 5.0, VMware introduced the vSphere Storage Appliance (VSA). VSA provides a way to take local storage and present it to ESXi hosts as a shared NFS mount. This is implemented through the installation of a virtual appliance called the vSphere Storage Appliance. VSA provides failover capabilities for VMs without requiring shared SAN storage. There are some limitations, however: it can be configured with only two or three hosts, and there are strict rules around the…

vSphere Storage Terminologies - RDM

RDM - Raw Device Mapping (RDM)

Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem (Fibre Channel, iSCSI, or Fibre Channel over Ethernet). An RDM LUN does not come with a file system, e.g. VMFS. However, it can be formatted with any file system, such as NTFS for Windows virtual machines.

"Consider the RDM a symbolic link from a VMFS volume to a raw volume." A mapping file is located on a VMFS datastore and points to the raw LUN/volume. The mapping file acts as a proxy for the physical device (raw LUN) and contains metadata used for managing and redirecting access to the raw LUN. A virtual machine reads the mapping file, obtains the location of the raw LUN, then sends its read and write requests directly to the raw LUN, bypassing the hypervisor. The mapping makes volumes appear as files in a VMFS volume.

RDM configuration consists of:
Mapping file
Is a proxy or symbolic link
Reside…
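For readers scripting against vCenter, a hedged sketch of building an RDM-backed virtual disk spec with pyVmomi; it assumes pyVmomi is available, the device path and controller key are placeholders, and the actual attach via a reconfigure task is omitted:

```python
# Hedged sketch (pyVmomi): construct a virtual disk spec backed by an RDM.
# Device path and controller key are placeholders, not real values.
from pyVmomi import vim

def make_rdm_disk_spec(device_path: str, controller_key: int, unit_number: int):
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = device_path            # e.g. "/vmfs/devices/disks/naa.6090..."
    backing.compatibilityMode = "physicalMode"  # or "virtualMode"
    backing.fileName = ""                       # mapping file created in the VM folder
    backing.diskMode = "independent_persistent"

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number
    disk.key = -1                               # negative key: let vCenter assign one

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec
```

The key point mirrored from the text: the backing names both the raw device (deviceName) and the mapping file (fileName) that acts as its proxy on a VMFS volume.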

vSphere Storage Terminologies - Virtual Disk Modes

Virtual Disk Modes

ESXi supports three virtual disk modes: independent persistent, independent nonpersistent, and dependent. An independent disk does not participate in virtual machine snapshots. That is, the disk state will be independent of the snapshot state; creating, consolidating, or reverting to snapshots will have no effect on the disk.

Independent persistent
In this mode changes are persistently written to the disk, providing the best performance.

Independent nonpersistent
In this mode disk writes are appended to a redo log. The redo log is erased when you power off the virtual machine or revert to a snapshot, causing any changes made to the disk to be discarded. When a virtual machine reads from an independent nonpersistent mode disk, ESXi first checks the redo log (by looking at a directory of disk blocks contained in the redo log) and, if the relevant blocks are listed, reads that information. Otherwise, the read goes to the base disk for the vi…
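The read/write path just described is a copy-on-write overlay; a minimal sketch of the semantics (plain Python, not ESXi's actual implementation):

```python
# Hedged sketch: redo-log (copy-on-write) semantics for a nonpersistent disk.
# Writes land in an overlay; reads prefer the overlay, else fall through to
# the base disk. Discarding the overlay models power-off or snapshot revert.
class NonpersistentDisk:
    def __init__(self, base_blocks: dict):
        self.base = base_blocks        # block number -> data on the base disk
        self.redo_log = {}             # the "directory of disk blocks"

    def write(self, block: int, data: bytes) -> None:
        self.redo_log[block] = data    # never touches the base disk

    def read(self, block: int) -> bytes:
        if block in self.redo_log:     # relevant block listed in the redo log
            return self.redo_log[block]
        return self.base.get(block, b"\x00")  # otherwise read the base disk

    def power_off(self) -> None:
        self.redo_log.clear()          # all changes discarded

disk = NonpersistentDisk({0: b"base"})
disk.write(0, b"changed")
assert disk.read(0) == b"changed"
disk.power_off()
assert disk.read(0) == b"base"
```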

vSphere Storage Terminologies - Storage Protocols

Storage Protocols

"Storage protocols are a method to get data from a host or server to a storage device."

Local Block Storage Protocols: SCSI, SAS, SATA, ATA
Network Block Storage Protocols: Fibre Channel, iSCSI, FCoE, AoE
Network File Storage Protocols: SMB/CIFS, NFS, FTP, AFP

Fibre Channel
Block storage protocol for transporting SCSI commands over FC networks
Uses FC HBAs
Good performance, low latency, and high reliability
Costly, complex, specialized equipment required

FCoE - FC over traditional Ethernet components at 10GbE

iSCSI
Block storage, uses traditional Ethernet network components
Uses initiators (hardware/software) to send SCSI commands to targets
Software initiators use traditional NICs (higher host CPU overhead)
Hardware initiators use special NICs with TOEs (TCP offload engines)
Reduced cost and complexity, no special training needed
May not scale as far as…

vSphere Storage Terminologies - Datastore Cluster

Datastore Cluster

"A datastore cluster is a collection of datastores aggregated into a single unit of management and consumption." Storage DRS works on the datastore cluster to manage storage resources in a manner similar to how vSphere DRS manages compute resources within a cluster. Using Storage DRS, capacity and I/O latency are balanced across the datastores in the datastore cluster. Storage DRS also automatically evacuates virtual machines from a datastore when it is placed in storage maintenance mode.

"A grouping of multiple datastores into a single, flexible pool of storage called a Datastore Cluster." Datastore clusters allow an administrator to dynamically add and remove datastores (array LUNs) from a datastore cluster object. Once created, the administrator selects the datastore cluster object to operate on instead of the individual datastores in the cluster. For example, to select a location for a VM's files, the administrator would choose the datastore c…

vSphere Storage Terminologies - Zeroing

Zeroing

Zeroing is the process whereby disk blocks are overwritten with zeroes to ensure that no prior data is leaked into the new VMDK that is allocated with these blocks. Zeroing in the ESXi file system (VMFS) can happen at the time a virtual disk is created (create-time) or on the first write to a VMFS block (run-time).

Reference: Performance Study of VMware vStorage Thin Provisioning
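The create-time and run-time cases correspond to eager-zeroed and lazy-zeroed thick disks respectively; a toy sketch of where the zeroing cost is paid (an illustration, not VMFS internals):

```python
# Hedged sketch: create-time vs. run-time zeroing of a disk's blocks.
class ThickDisk:
    def __init__(self, num_blocks: int, eager: bool):
        # eager: every block zeroed at creation (all zeroing I/O paid up front)
        # lazy: blocks left unallocated (None) until first touched
        self.blocks = [b"\x00" if eager else None for _ in range(num_blocks)]

    def write(self, block: int, data: bytes) -> None:
        if self.blocks[block] is None:
            self.blocks[block] = b"\x00"  # run-time zeroing: extra first-write cost
        self.blocks[block] = data

eager = ThickDisk(4, eager=True)   # create-time zeroing already done
lazy = ThickDisk(4, eager=False)   # zeroing deferred until first write
lazy.write(2, b"data")             # this write pays the zeroing penalty
```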

vSphere Storage Terminologies - VMDK

VMDK

Virtual Machine Disk (VMDK) - A VMware vSphere virtual disk is labeled the VMDK file. The VMDK file encapsulates the contents of an operating system filesystem, e.g. the C: drive of a Microsoft Windows OS or the root file system (/) on a Linux/UNIX system. The VMDK file is stored on a VMFS or NFS datastore or a virtual volume.

"Virtual disk files are stored on dedicated storage space on a variety of physical storage systems, including internal and external devices of a host, or networked storage, dedicated to the specific tasks of storing and protecting data."

The VMware virtual machine disk has the .vmdk file name extension.

Virtual Disk Formats:
VMware vSphere – Virtual Machine Disk – VMDK
Citrix XenServer – Virtual Hard Disk – VHD
Microsoft Hyper-V – Virtual Hard Disk – VHD
RedHat KVM – supports raw images, qcow2, VMDK, and others †
Raw – raw image (.img, .raw, etc.) ‡

† KVM inherits disk format support from QEMU
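On VMFS, a virtual disk is typically a small text descriptor (<vm_name>.vmdk) plus a -flat.vmdk data file. Below is a hedged sketch that parses a descriptor's settings and extent lines; the sample descriptor is illustrative and trimmed to commonly seen fields:

```python
# Hedged sketch: parse a VMDK descriptor's key=value settings and extent lines.
import re

SAMPLE_DESCRIPTOR = """\
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 20971520 VMFS "myvm-flat.vmdk"
"""

def parse_descriptor(text: str):
    settings, extents = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r'(RW|RDONLY|NOACCESS)\s+(\d+)\s+(\w+)\s+"([^"]+)"', line)
        if m:
            extents.append({"access": m.group(1), "sectors": int(m.group(2)),
                            "type": m.group(3), "file": m.group(4)})
        elif "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip().strip('"')
    return settings, extents

settings, extents = parse_descriptor(SAMPLE_DESCRIPTOR)
print(settings["createType"])   # vmfs
print(extents[0]["file"])       # myvm-flat.vmdk
```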

vSphere Storage Terminologies - NAS

NAS - Network Attached Storage

Network-attached storage (NAS) is file-level data storage provided by a computer specialized to provide data and the file system for the data.

NAS:
An NFS client is built into the ESXi host
The NFS client uses the Network File System (NFS) protocol version 3 to communicate with NAS/NFS servers
The host requires a standard network adapter
vSphere supports NFS v3 over TCP on NAS
NFS datastores are not formatted with VMFS; the NAS device's own file system holds the files
ESXi supports either NFS v3 or NFS v4.1
ESXi does not impose any limits on the NFS datastore size

Note: From ESXi 6 onward, NFS v3 and NFS v4.1 shares/datastores can coexist on the same host. However, each datastore can only be mounted as either v3 or v4.1, not both, as they use different locking mechanisms: proprietary client-side cooperative locking vs. server-side locking, respectively. An NFS v4.1 datastore interoperates with vSphere features such as vMotion, DRS (Distributed Resource Scheduler), HA (high av…
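A hedged sketch of mounting an NFS export as a datastore through the vSphere API with pyVmomi; it assumes `host` is an already retrieved vim.HostSystem, and the server, path, and name values are placeholders:

```python
# Hedged sketch (pyVmomi): mount an NFS export as a datastore on an ESXi host.
from pyVmomi import vim

def mount_nfs_datastore(host, remote_host: str, remote_path: str, name: str):
    spec = vim.host.NasVolume.Specification()
    spec.remoteHost = remote_host    # e.g. "nfs01.example.com" (placeholder)
    spec.remotePath = remote_path    # e.g. "/exports/vmstore" (placeholder)
    spec.localPath = name            # datastore name as seen by vSphere
    spec.accessMode = "readWrite"
    spec.type = "NFS"                # NFS v3; "NFS41" selects NFS v4.1
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```

Note how the version choice (v3 vs. v4.1) is made per datastore at mount time, consistent with the coexistence rule above.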

vSphere Storage Terminologies - VMFS

VMFS - Virtual Machine File System

Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI are block-based storage protocols. To enable file-level control, VMware created a "clustered" file system it called the Virtual Machine File System (VMFS).

VMFS, the VMware clustered file system, allows read/write access to storage resources by several ESXi host servers simultaneously. It is optimized for clustered virtual environments and the storage of large files. The structure of VMFS makes it possible to store VM files in a single folder, simplifying VM administration.

At vSphere 6.0, VMFS has a limit of 64 concurrent hosts accessing the same file system. Each host can connect to 256 individual VMFS volumes.

A datastore is a logical container that holds virtual machine files and other files necessary for virtual machine operations. A datastore can be VMFS-based, NFS-based, or a virtual volume.

VMFS
Virtual machine file system (VMFS)
Exclusive to V…

vSphere Storage Terminologies - Datastore

Datastores are logical containers, analogous to file systems. They hold virtual machine objects such as virtual disk files, snapshot files, and other files necessary for virtual machine operation. They can exist on a variety of physical storage types and are accessed over different storage adapters (SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet). Datastores hide the specifics of each storage device and provide a uniform model for storing virtual machine files.

Ref: http://www.vmware.com/files/pdf/vmfs_resig.pdf

A datastore can be of the following types: VMFS, NFS, Virtual SAN, and Virtual Volume (VVOL). A Virtual SAN datastore "leverages storage resources from a number of ESXi hosts, which are part of a Virtual SAN cluster. The Virtual SAN datastore is used for virtual machine placement, and supports VMware features that require shared storage, such as HA, vMotion, and DRS." A…

vSphere Storage Terminologies - Virtual Disk

A virtual machine consists of several files that are stored on a storage device. The key files are the configuration file (<vm_name>.vmx), virtual disk file (<vm_name>-flat.vmdk), virtual disk descriptor file (<vm_name>.vmdk), NVRAM setting file (<vm_name>.nvram), and log files (vmware.log).

You define virtual machine settings using any of the following:
vSphere Web Client
local or remote command-line interfaces (e.g. PowerCLI, vCLI, ESXi Shell)
vSphere Web Services SDK – facilitates development of client applications that leverage the vSphere API

A virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up as easily as any other file. You can configure virtual machines with multiple virtual disks. A virtua…
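As a small companion to the file list above, a sketch that groups the files in a VM directory by suffix; the role mapping mirrors the list and the example path is a placeholder:

```python
# Hedged sketch: classify a VM directory's files by the roles listed above.
from pathlib import Path

ROLE_BY_SUFFIX = {
    ".vmx": "configuration file",
    ".vmdk": "virtual disk descriptor (or -flat data) file",
    ".nvram": "NVRAM setting file",
    ".log": "log file",
}

def classify_vm_files(vm_dir: str) -> dict:
    return {p.name: ROLE_BY_SUFFIX.get(p.suffix, "other")
            for p in Path(vm_dir).iterdir() if p.is_file()}

# Example (placeholder path):
# print(classify_vm_files("/vmfs/volumes/datastore1/myvm"))
```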

vSphere Storage - Hosts, datastores and protocols

Storage/SAN Lifecycle:
Configure the array/SAN for use with vSphere
Create a LUN
Present the LUN to the ESXi host
Create a VMFS (or NFS) datastore
Choose a provisioning format for virtual disks: thin, lazy-zeroed thick, or eager-zeroed thick
Create and store media and virtual machine files on the datastore

See also:
VCP5-DCV Official Certification Guide
Storage Protocol Comparison White Paper