INF-STO1198 – vSphere Storage Features & Enhancements

The presentation is available on VMworld 2012 Socialcast (login required).

Cormac Hogan, VMware, Inc., Snr. Technical Marketing Architect – Storage


VMFS scalability improved from VMFS-3 to VMFS-5. What happened to VMFS-4? No one knows. 🙂 For my money, the most significant improvement is full support for VAAI-based ATS locking. This overcomes the scale limitation of the aging SCSI-2 reservation mechanism (SCSI-2 was ratified in 1994). OK, that’s all very interesting, but why do we care? ATS improves linked-clone scalability from 8 to 32 concurrent host locks per file on VMFS-5, bringing linked clones on VMFS in line with NFS. Personally, I think this is still far too low, but it’s a start. Other enhancements are listed in the table below.

A VMFS-3 filesystem can be upgraded non-disruptively, i.e. with VMs running on it. However, upgraded VMFS filesystems have the limitations shown in the table below. To get the full capabilities of VMFS-5, you should create a new filesystem and migrate your VMs to it.



Linux has fsck, Windows has chkdsk. Has it bugged you that you can’t check a VMFS filesystem with out-of-box tools? Worry no more. vSphere 5.1 introduces VOMA, the vSphere On-disk Metadata Analyzer. VOMA is an ESXi 5.1 command-line tool that checks VMFS-3 and VMFS-5 filesystems for metadata errors. It is not a data recovery tool, so it’s not clear to me how metadata errors should be corrected. Perhaps we are to Storage vMotion the VMs off and recreate filesystems that have errors.
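A minimal sketch of a VOMA invocation, assuming a datastore backed by the example device below (substitute the NAA ID and partition of your own datastore; the datastore should be quiesced, with its VMs powered off or moved elsewhere, before checking):

```shell
# Analyze a VMFS datastore for metadata inconsistencies (read-only check).
# Device path and partition number are examples only.
voma -m vmfs -f check \
    -d /vmfs/devices/disks/naa.600508b1001c9c1b:1 \
    -s /tmp/voma-analysis.txt   # save results for later review
```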


There were a bunch of new VAAI primitives added in vSphere 5.0. Some of the more interesting ones for me are the thin provisioning out-of-space (OOS) alarm, VM stun, and thin provisioning UNMAP.

Let’s start with thin provisioning OOS alarm and VM stun. At the risk of heretical speech, I’ve never really liked the idea of thin provisioning at the array. My concern has been that the hypervisor is writing into the array without a proper understanding of the actual free space. And when you suddenly hit the out of space wall, all VMs on the affected datastore get suspended. This situation doesn’t exactly promote continuity of service.

Well, with the OOS alarm, the array can now surface low-space conditions to the hypervisor. These appear as advance warnings in the vSphere UI, so the event is actionable by the vSphere administrator (or orchestration) instead of coming as a complete surprise. Furthermore, with the VM stun primitive, vSphere suspends only those VMs that try to allocate more blocks from the full storage; VMs that don’t need new blocks are unaffected. While I don’t think thin provisioning is right for every situation, these new VAAI primitives make doing it on the array tenable.

The Thin Provision UNMAP primitive addresses the issue of reclaiming blocks that a thin-provisioned VM is no longer using. Reclamation is initiated by ‘vmkfstools -y’, which in turn makes the UNMAP call.
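A sketch of the reclamation workflow on ESXi 5.0/5.1, assuming a datastore named MyDatastore (an example name). The numeric argument is the percentage of free space to reclaim; vmkfstools temporarily consumes that much free space while issuing UNMAPs, so avoid very high values on busy datastores:

```shell
# Run from the root of the VMFS datastore to be reclaimed.
cd /vmfs/volumes/MyDatastore
vmkfstools -y 60   # reclaim up to 60% of the datastore's free space
```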


VMware View 5.1 introduces support for VCAI (View Composer API for Array Integration). VCAI is a technology preview (read: unsupported) that offloads View clone operations to the NAS array via vCenter 5.0+ and the VAAI NAS snapshot primitive. The benefit of this technology is reduced desktop provisioning time and reduced host resource utilization.

VASA and Profile Driven Storage

VASA (vSphere APIs for Storage Awareness) surfaces storage attributes to vSphere 5.0. As a result, Administrators can see storage device capabilities in the vSphere client and make informed decisions about VM placement on storage.

Furthermore, vSphere 5.0 introduces Profile Driven Storage, which provides an easy way to check whether a VM is running on storage with appropriate characteristics. A Storage Profile references VASA-surfaced or user-defined storage characteristics and is, in turn, associated with the VM. The administrator can then use vCenter to report which VMs are compliant or non-compliant, and vMotion the non-compliant ones. If you would like automatic handling of non-compliant VMs, vCloud Director 5.1 now supports Storage Profiles for automatic placement of vApps.

Storage I/O control and DRS

VMware introduced SIOC (Storage I/O Control) for block storage in vSphere 4.1 to guarantee allocation of I/O resources on a per-VM basis. vSphere 5.0 adds support for SIOC on NFS storage.

Storage DRS (SDRS) was introduced in vSphere 5.0 to help avoid hotspots in storage bandwidth and capacity. It addresses initial VM placement as well as ongoing balancing, supports affinity and anti-affinity rules, and works with VMFS-3, VMFS-5, and NFS filesystems.

In vSphere 5.1, SIOC is automatically enabled in stats-only mode to better inform SDRS placement decisions. vSphere 5.1 SIOC also supports automatic latency-threshold calculation instead of the default 30 ms threshold: the threshold is set to 90% of the latency observed at peak device throughput. Changing the congestion percentage or setting a fixed latency threshold is also allowed.

Storage DRS was enhanced in vCenter 5.1 for interoperability with vCloud Director: VCD will use Storage DRS for the initial placement of vApps and for ongoing management of space utilization and I/O balancing. vCenter 5.1 SDRS also gains a new Storage Correlation Detector. VASA already reports datastore correlation, but those results are augmented by the SDRS results, and anti-affinity rules can use them to influence VM/VMDK placement.

Storage vMotion

vSphere 5.0 introduced Storage vMotion of VMs with snapshots/linked clones, Storage vMotion-based Storage DRS, and a new mirror-based architecture that makes migrations faster and more reliable.

In vSphere 5.1, a Storage vMotion operation can now perform up to 4 parallel disk migrations.

Protocol Enhancements

vSphere 5.0 introduced a Software FCoE adapter that supports certain NICs with partial FCoE offload. vSphere 5.1 introduced boot from FCoE, jumbo-frame support for all iSCSI adapters, and support for 16Gb FC adapters (throttled to 8Gb).
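To take advantage of the new jumbo-frame support for iSCSI, the MTU has to be raised along the whole path. A minimal sketch for a standard vSwitch, assuming example names vSwitch1 and vmk1 (the physical switch ports and the array must also be configured for MTU 9000):

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic (example name).
esxcli network vswitch standard set -v vSwitch1 -m 9000
# Raise the MTU on the iSCSI-bound VMkernel interface (example name).
esxcli network ip interface set -i vmk1 -m 9000
```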

IO Device Management and SSD Monitoring

vSphere 5.1 introduced esxcli commands to monitor and troubleshoot I/O devices and fabrics. “The commands provide layered statistic information to narrow down issues to ESXi, HBA, fabric and Storage Port.” vSphere 5.1 also introduces SSD monitoring. The default SSD monitoring plugin provides a media wearout indicator, a temperature indicator, and a reallocated sector count.
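A sketch of these new namespaces on an ESXi 5.1 host. The device NAA ID below is an example; substitute your own local SSD:

```shell
# IODM: list FC adapter attributes and pull link-error statistics.
esxcli storage san fc list
esxcli storage san fc stats get
# SMART data (wearout, temperature, reallocated sectors) from a local SSD.
esxcli storage core device smart get -d naa.500253825000a123
```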

Space Efficient Sparse Virtual Disks

vSphere 5.1 introduced the Space Efficient Sparse Virtual Disk (SESparse) format. In vSphere 5.1, VMware View is the only VMware product that will use SESparse, where it replaces the vmfsSparse format used by redo logs and linked clones. SESparse implements an automated mechanism for reclaiming stranded space and uses a variable block-allocation size. The reclamation mechanism first calls VMware Tools to reorganize all used blocks to the beginning of the VMDK and all unused blocks to the end. It then calls the kernel to send a SCSI UNMAP (for block storage) or an NFS truncate (for file storage) to the array.


  • Is SESparse Re-Organization using VAAI offload? No, because the VM block size is more granular than the VMDK block size.
  • What about vVols? vVols are a future product direction. With vVol the array will manage and hand off VMDKs to the hypervisor, not LUNs.
