
Showing posts from June 7, 2015

VSAN - Virtual SAN

VSAN: "VMware Virtual SAN abstracts and pools server-side flash and disk into shared pools of capacity with policy-based management and application-centric data services." Virtual SAN pools server-attached hard disk drives and flash devices (SSD and PCIe) to create a distributed shared datastore that abstracts the storage hardware and provides a Software-Defined Storage (SDS) tier for virtual machines. Because VSAN builds this datastore from the local storage of a number of ESXi hosts in a cluster, the result can be used for VM placement and for core vSphere technologies such as vMotion, DRS, and VMware Site Recovery Manager. "VSAN leverages the power of any solid state drives (SSDs) on the hosts in the cluster for read caching and write buffering to improve performance." In a hybrid architecture, VSAN uses both flash (SSD and PCIe) and hard disk drive (HDD) devices: the flash devices are utilized as a read cache and write buffer, while the HDDs provide the persistent capacity tier.
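To make the abstraction concrete, here is a minimal sketch (not from the original post) using pyvmomi, VMware's Python SDK, that walks the datastore inventory and reports each datastore's type. The vCenter hostname and credentials are placeholders, and it assumes a Virtual SAN datastore shows up with 'vsan' in the summary's type field:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder lab credentials; replace with your own.
ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk every datastore in the inventory and report its type;
# a Virtual SAN datastore is assumed to report summary.type == "vsan".
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    gb = ds.summary.capacity / (1024 ** 3)
    print(f"{ds.summary.name}: type={ds.summary.type}, capacity={gb:.0f} GB")
view.Destroy()
Disconnect(si)
```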

VSA - vSphere Storage Appliance

VSA "Shared Storage for Everyone" The vSphere Storage Appliance (VSA) allows local storage on ESXi hosts to be used as shared storage, enabling the use of many storage-dependent virtualization features, such as vMotion, distributed resource scheduling (DRS), and high availability (HA), without the need for a SAN. VSA also functions as a storage cluster, providing continued availability of all the data it stores even if one node in the cluster fails. The VSA is a VM on each host in the cluster. If one host fails, the VSA fails over automatically to one of the other hosts. vSphere Storage Appliance (VSA) is VMware software that transforms existing, local server storage into shared storage that can be shared by up to three vSphere hosts. vSphere Storage Appliance VMware Virtual SAN Description Low cost, simple shared storage for small deployments Scale-out distributed storage designed for virtualized/cloud environme

Array & Virtual Disk Thin Provisioning

Array and Virtual Disk Thin Provisioning. Array Thin Provisioning enables the creation of a datastore that is logically larger than what the array can physically support; the consequence is that there may not be enough physical space available when it is needed. Array thin provisioning is done in the storage array, before and/or independent of the virtualization layer. It allows the organization to maximize space utilization and delay the purchase of additional capacity, minimizing CAPEX.

Array Thin Provisioning:
- You can overallocate or oversubscribe the storage by allowing a server to claim more storage than has physically been set aside for it. This increases flexibility when you don't know which hosts will grow, yet you are sure there will be growth.
- Physical storage capacity is dedicated to each host only when data is actually written to the disk blocks.

Virtual Disk Thin Provisioning controls how much of the datastore's space a virtual disk actually consumes: blocks are allocated (and zeroed) on demand as the guest writes data, rather than being reserved when the disk is created.
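As a rough illustration of oversubscription (not from the original post), the pyvmomi sketch below estimates how overcommitted each datastore is from its summary fields: provisioned space is what has been promised to VMs (used space plus the not-yet-written space of thin disks), and a ratio above 100% means the datastore is promised beyond its physical capacity. Connection details are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    # Provisioned = used space + space promised to thin disks that
    # has not been written yet (summary.uncommitted may be unset).
    provisioned = (s.capacity - s.freeSpace) + (s.uncommitted or 0)
    ratio = provisioned / s.capacity
    print(f"{s.name}: provisioned {provisioned / 1024**3:.0f} GB "
          f"of {s.capacity / 1024**3:.0f} GB ({ratio:.0%} committed)")
view.Destroy()
Disconnect(si)
```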

Thin Provisioning - Provisioning - Storage Features

Thin Provisioning. Array Thin Provisioning allows you to create a datastore that is logically larger than what the array can actually support. In a general sense, thin provisioning of disks allows you to promise more than you can actually deliver. "Space required for thin-provisioned virtual disk is allocated and zeroed on demand as the space is used. Unused space is available for use by other virtual machines." For example, if an administrator allocates 200 GB to a new virtual machine and the virtual machine uses only 40 GB, the remaining 160 GB are available for allocation to other virtual machines. As the virtual machine requires more space, vSphere provides additional blocks (if available) up to the originally allocated size, 200 GB in this case. By using thin provisioning, administrators can create virtual machines with virtual disks sized for their long-term needs without having to immediately commit the total disk capacity up front.
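The 200 GB / 40 GB example above corresponds to what vSphere exposes per VM as committed versus uncommitted storage. A small pyvmomi sketch (placeholder credentials) that prints both:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    st = vm.summary.storage
    # committed   = blocks actually written to the datastore (e.g. 40 GB)
    # uncommitted = promised but not yet allocated thin space (e.g. 160 GB)
    print(f"{vm.name}: committed {st.committed / 1024**3:.1f} GB, "
          f"uncommitted {(st.uncommitted or 0) / 1024**3:.1f} GB")
view.Destroy()
Disconnect(si)
```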

Thick Provisioning - Provisioning - Storage Features

Thick Provisioning. Thick virtual disks, which have all their space allocated at creation time, are further divided into two types: eager zeroed and lazy zeroed.

Lazy Zeroed Thick (aka LazyZeroedThick): a lazy-zeroed thick disk has all of its space allocated at creation time, and this space may still contain stale data on the physical media; before a new block is first written, it has to be zeroed. The entire disk space is reserved and unavailable for use by other virtual machines. "Disk blocks are only used on the back-end (array) when they get written to inside the VM/Guest OS. Again, the Guest OS inside this VM thinks it has this maximum size from the start." The blocks and pointers are allocated in VMFS, and the blocks are allocated on the array, at creation time; however, the blocks are not zeroed or formatted on the array, which results in a fast creation time. At a later point in time, when data needs to be written to the disk, the write process must first zero each new block before the guest's data lands in it, which adds a small first-write performance penalty.
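On the vSphere API side, the three formats can be told apart from a virtual disk's backing flags: thinProvisioned marks a thin disk, eagerlyScrub marks an eager-zeroed thick disk, and a disk with neither flag set is lazy-zeroed thick. A pyvmomi sketch (placeholder credentials) that classifies every VMDK in the inventory:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:  # skip VMs whose config is not yet available
        continue
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        b = dev.backing  # FlatVer2 backing carries the provisioning flags
        if getattr(b, "thinProvisioned", False):
            fmt = "thin"
        elif getattr(b, "eagerlyScrub", False):
            fmt = "eager-zeroed thick"
        else:
            fmt = "lazy-zeroed thick"
        print(f"{vm.name} / {dev.deviceInfo.label}: {fmt}")
view.Destroy()
Disconnect(si)
```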

Virtual Disk Provisioning

Virtual Disk Provisioning. VMware vSphere virtual disks, or VMDKs (virtual machine disks), can be provisioned in three different formats: thin, lazy-zeroed thick, or eager-zeroed thick. The differences lie in how data is preallocated and whether blocks are zeroed at creation time or at run time. With the exception of products such as VMware FT, Microsoft Cluster Service, and certain appliances, which require eager-zeroed thick disks, the choice of virtual disk format is left to the administrator.

| | Thick, lazy-zeroed | Thick, eager-zeroed | Thin |
| --- | --- | --- | --- |
| Creation time | Fast | Slow (faster with VAAI) | Fast |
| Zeroing of file blocks | File block is zeroed on first write | File block is zeroed when the disk is created | File block is zeroed on write |
| Block allocation | Fully preallocated on the datastore | Fully preallocated on the datastore | File block allocated on write |

Thin provisioning is a solution where the storage provider reports more capacity than has physically been set aside, allocating real blocks only when data is actually written.
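As a worked example of choosing a format at provisioning time, here is a pyvmomi sketch (the VM name "db01" and all connection details are hypothetical) that hot-adds a 20 GB disk to an existing VM; flipping thinProvisioned and eagerlyScrub on the backing selects thin, lazy-zeroed thick, or eager-zeroed thick:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the (hypothetical) VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db01")
view.Destroy()

# Find the SCSI controller and the next free unit number
# (SCSI unit 7 is reserved for the controller itself).
controller = None
unit_number = 0
for dev in vm.config.hardware.device:
    if hasattr(dev.backing, "fileName"):  # an existing virtual disk
        unit_number = int(dev.unitNumber) + 1
        if unit_number == 7:
            unit_number += 1
    if isinstance(dev, vim.vm.device.VirtualSCSIController):
        controller = dev

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = "create"
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.capacityInKB = 20 * 1024 * 1024  # 20 GB
disk_spec.device.controllerKey = controller.key
disk_spec.device.unitNumber = unit_number
disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk_spec.device.backing.diskMode = "persistent"
disk_spec.device.backing.thinProvisioned = True  # thin format
disk_spec.device.backing.eagerlyScrub = False    # True (with thin=False) => eager-zeroed

WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec])))
Disconnect(si)
```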

Storage DRS

Storage DRS " Storage DRS (SDRS) provides smart virtual machine placement and load balancing mechanisms based on I/O and space capacity. In other words, where Storage I/O Control (SIOC), introduced at vSphere 4.1, reactively throttles hosts and virtual machines to ensure fairness, SDRS proactively makes recommendations to prevent imbalances from both a space utilization and latency perspective. More simply, SDRS does for storage what DRS does for compute resources. " Create datastore cluster via vSphere Client: Create datastore cluster via vSphere Web Client:  "SDRS offers five key featues: 1 Resource aggregation - grouping of multiple datastores, into a single, flexible pool of storage called a Datastore Cluster Initial Placement - speed up the provisioning process by automating the selection of an individual datastore Load Balancing - addresses imbalances within a datastore cluster Datastore Maintenance - when en

Storage vMotion

Storage vMotion. VMware vSphere Storage vMotion facilitates the live migration of virtual machine files from one datastore to another without service interruption: Storage vMotion is to virtual machine files what standard vMotion is to running virtual machine instances. Using Storage vMotion, a virtual machine can be migrated from one datastore to another while it is running, i.e. without downtime. A virtual machine and all of its virtual disks may be stored in a single location or in separate locations. Storage vMotion offers an attractive method for migrating virtual machine data from one set of storage to another, as the virtual machine can remain running while the data movement happens in the background, without the involvement, or even the awareness, of the virtual machine's OS. The virtual machine itself does not change hosts during a Storage vMotion migration; only the VM's virtual disks are migrated.
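In API terms, a Storage vMotion boils down to a relocate call whose spec names only a new datastore (no host change). A minimal pyvmomi sketch, with the VM name "web01" and datastore "datastore2" as hypothetical placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
vm = next(o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name == "web01")
target = next(o for o in view.view
              if isinstance(o, vim.Datastore) and o.name == "datastore2")
view.Destroy()

# Relocating with only a datastore in the spec is a Storage vMotion:
# the VM keeps running while its files are copied in the background.
spec = vim.vm.RelocateSpec(datastore=target)
WaitForTask(vm.RelocateVM_Task(spec=spec))
print(f"{vm.name} now lives on {target.name}")
Disconnect(si)
```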