vSAN (VMware Virtual SAN)


vSAN, formerly called VMware Virtual SAN, is an example of Software-Defined Storage. VMware recognised the potential behind its server virtualisation philosophy and extended it to include data storage. vSAN is built into the VMware hypervisor; it sits directly in the I/O data path, so it can deliver better performance than a virtual appliance or an external device, without much CPU overhead. The software that manages and controls the storage is pre-installed on the hypervisor, which allows it to share out the storage resource in the same way it shares out CPU and memory.

VMware HCI combines products like vSAN and vSphere with VMware NSX, a network virtualization product, and VMware vRealize, a hybrid cloud management product, to provide a comprehensive HCI capability. HCI can be extended to the public cloud as vSAN has native services with six of the top cloud providers: Amazon, Microsoft, Google, IBM, Alibaba and Oracle.

vSAN is managed from the vSphere Web Client, and because it is located in the hypervisor it integrates with all the VMware goodies, including vMotion, HA, Distributed Resource Scheduler, VMware vCenter Site Recovery Manager and VMware vRealize Automation.

Managing vSAN does not require any specialised skillset, as it can be managed end-to-end through the familiar HTML5-based vSphere Web Client and vCenter Server instances, including the VMware Cloud Foundation stack. VMware vRealize Operations within vCenter enables you to monitor and analyse your vSAN deployment, all from vCenter.
vSAN Support Insight is a tool that helps keep vSAN running in an optimal state by providing real-time support notifications and actionable recommendations. The analytics tool can also optimise performance for certain scenarios with recommended settings.
With vSAN 7, replication objects are now visible in vSAN monitoring for customers using VMware Site Recovery Manager and vSphere Replication. The objects are labeled 'vSphere Replicas' in the 'Replication' category.

Physical Disk Management

Like any virtualisation product, VMware splits up the physical storage into logical pools of capacity that can be shared out flexibly among the hosted VMs. VMware refers to this storage virtualisation as the Virtual Data Plane. vSAN supports both deduplication and compression.

You have two choices for data storage: all-flash, or a hybrid of flash and magnetic disk. An all-flash solution still offers tiering with two levels: a high-performance, write-intensive, high-endurance caching tier for writes, and a read-intensive, durable, cost-effective flash tier for data persistence. With a hybrid SSD/disk solution, every write I/O goes to SSD first.

A vSphere host does not have to contribute storage to the vSAN cluster, but if it does, it requires a disk controller. This can be a SAS or SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must either just deliver plain RAID0 striping, or preferably be running in pass-through mode, where the disks are not in any RAID format but are just presented as JBOD (just a bunch of disks). vSAN looks after data resilience by taking copies of entire virtual disks; the number of copies is controlled by policies and can be set differently for individual VMs.
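As a rough illustration (not VMware's implementation), with a RAID-1 mirroring policy the failures-to-tolerate (FTT) setting translates into FTT+1 full replicas plus FTT witness components, so that a majority of components survives any FTT failures. A minimal Python sketch of that arithmetic, with all names assumed:

```python
def mirror_components(ftt: int) -> dict:
    """RAID-1 mirroring: FTT+1 full replicas plus FTT witness components,
    so a majority of the 2*FTT+1 components survives any FTT failures."""
    if ftt < 0:
        raise ValueError("FTT must be non-negative")
    replicas = ftt + 1
    witnesses = ftt
    return {"replicas": replicas, "witnesses": witnesses,
            "total_components": replicas + witnesses}

print(mirror_components(1))  # {'replicas': 2, 'witnesses': 1, 'total_components': 3}
```

The default policy of FTT=1 therefore places two copies of each virtual disk on different hosts, plus a small witness component to break ties.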


The vSAN architecture means that scaling is elastic and non-disruptive. Capacity and performance can be scaled together by adding a new host to the cluster (scale-out), or independently by adding new drives to existing hosts (scale-up). Ready-made servers, pre-built by third-party suppliers, can just be plugged into a vSAN cluster, allowing almost instant scale-out. You can add more SSD for performance or more hard drives for capacity.

vSAN supports stretched clusters with local protection, synchronously replicating data between two geographically separate sites. vSAN leverages distributed RAID and cache mirroring to ensure that data is never lost if a disk, host, network or rack fails. It supports vSphere availability features, such as vSphere Fault Tolerance and vSphere High Availability.
This protects from an entire site failure as well as local component failures, with no data loss and near zero downtime. vSAN 7 introduced the ability to redirect VM I/O from one site to another in the event of a capacity imbalance. Once the disks at the first site have freed up capacity, you can redirect I/O back to the original site without disruption.
vSAN 6.7 and above supports Windows Server Failover Cluster (WSFC) technology, and vSphere Replication for vSAN provides asynchronous VM replication with recovery point objectives (RPOs) as low as five minutes.

System Requirements and Limits

Always check your VMware documentation for current requirements, but at the time of writing, the requirements are:
Each hardware host must have at least a 1 Gb Ethernet or a 10 Gb Ethernet capable network adapter. 10 Gb is recommended, and is required for an all-flash architecture. Each host must have a SATA/SAS HBA or RAID controller (in pass-through or RAID0 mode), and at least one SSD and one HDD in each capacity-contributing node. The usual minimum cluster size is three hosts, as this configuration enables the cluster to meet the lowest availability requirement of tolerating at least one host, disk, or network failure. However, it is possible to install a two-node cluster in branch or remote offices.
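The per-host requirements above can be expressed as a simple checklist. This is just an illustrative sketch with assumed names, not a VMware tool:

```python
def check_host(nic_gbps: int, all_flash: bool,
               cache_ssds: int, capacity_devices: int) -> list:
    """Flag anything that falls short of the requirements quoted above."""
    errors = []
    min_nic = 10 if all_flash else 1   # all-flash requires 10 Gb networking
    if nic_gbps < min_nic:
        errors.append(f"need at least {min_nic} Gb Ethernet")
    if cache_ssds < 1:
        errors.append("need at least one SSD for the cache tier")
    if capacity_devices < 1:
        errors.append("need at least one capacity device (HDD in a hybrid setup)")
    return errors

# A hybrid host with 1 Gb networking, one cache SSD and one HDD passes:
print(check_host(1, False, 1, 1))  # []
# An all-flash host on 1 Gb networking does not:
print(check_host(1, True, 1, 2))   # ['need at least 10 Gb Ethernet']
```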
Do not mix the controller mode for vSAN and non-vSAN disks to avoid handling the disks inconsistently, which can negatively impact vSAN operation.
If the vSAN disks are in RAID mode, the non-vSAN disks must also be in RAID mode.
When you use non-vSAN disks for VMFS, use the VMFS datastore only for scratch, logging, and core dumps.
Do not run virtual machines from a disk or RAID group that shares its controller with vSAN disks or RAID groups.
The software must be VMware vCenter Server 6.0 or above, and one of: VMware vSphere 6.0, VMware vSphere with Operations Management 6.0 or VMware vCloud Suite 6.0.

vSAN supports up to 64 nodes per cluster and up to 200 virtual machines per host. The maximum virtual disk size is 62 TB. Each host can hold between 1 and 5 disk groups.
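A quick sanity-check of a planned configuration against those published limits might look like this sketch (limits as quoted above; check current VMware documentation before relying on them):

```python
# Published limits quoted in the text above (assumed current at time of writing).
MAX_NODES = 64
MAX_VMS_PER_HOST = 200
MAX_VDISK_TB = 62
MAX_DISK_GROUPS_PER_HOST = 5

def within_limits(nodes: int, vms_per_host: int,
                  vdisk_tb: float, disk_groups: int) -> bool:
    """True if a planned configuration stays inside the published limits."""
    return (1 <= nodes <= MAX_NODES
            and vms_per_host <= MAX_VMS_PER_HOST
            and vdisk_tb <= MAX_VDISK_TB
            and 1 <= disk_groups <= MAX_DISK_GROUPS_PER_HOST)

print(within_limits(64, 200, 62, 5))   # True  - right at the limits
print(within_limits(65, 200, 62, 5))   # False - one node too many
```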

On each vSphere host, a VMkernel port for Virtual SAN communication must be created. A new VMkernel virtual adapter type has been added to vSphere 5.5 for Virtual SAN.

The VMkernel port is labeled Virtual SAN traffic. This new interface is used for host intra-cluster communications as well as for read and write operations whenever a vSphere host in the cluster is the owner of a particular virtual machine, but the actual data blocks making up that virtual machine's objects are located on a remote host in the cluster.

Hosts can be compute-only, but it is not recommended to have too many dedicated compute servers, as it is best to spread the storage workload across a lot of servers.


Traditional storage delivers add-ons like snapshots and replication at the hardware level, and the storage manager has to look at the business requirements of applications and work out how to apply those requirements to the hosting hardware. Software-Defined Storage as implemented by vSAN uses the Virtual Data Plane (VDP) to handle these requirements, so the administrator works with the applications and the VDP works out how to apply the requirements to the underlying hardware. This means that all the VMware extras like compression, replication, snapshots, de-duplication, availability, migration and data mobility are available and can be configured differently for each individual VM.

The VDP also allows you to define service level policies for each VM for things like availability and performance. What this means is:
- Availability means you can specify how many host, network, disk or rack failures to tolerate in a Virtual SAN cluster when setting the storage policy for each VM. The VDP then translates this into how many copies of the VM are stored and where to meet those policies.
- Performance means that you can set policies at individual VM level that dictate what percentage of your read I/O you can expect to come from SSD.
VMware refers to this as the Policy-Driven Control Plane, and you can program the policies using public APIs, and with scripting and cloud automation tools.
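The availability policy has a direct capacity cost. As a rough back-of-envelope sketch (assumed, simplified arithmetic, not a VMware sizing tool): with RAID-1 mirroring each object is stored FTT+1 times, so usable capacity is roughly raw capacity divided by FTT+1.

```python
def usable_capacity_tb(raw_tb: float, ftt: int) -> float:
    """With a RAID-1 mirroring policy, each object is stored FTT+1 times,
    so usable capacity is roughly raw capacity divided by FTT+1
    (ignoring witnesses, metadata and recommended slack space)."""
    return raw_tb / (ftt + 1)

print(usable_capacity_tb(60.0, 1))  # 30.0 TB usable from 60 TB raw at FTT=1
```

This is why a per-VM policy matters: a test VM at FTT=0 consumes half the raw capacity of a production VM at FTT=1 for the same virtual disk size.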

Basic vSAN Setup and Configuration

vSAN looks like a plugin within vSphere, and if you are familiar with vSphere then you will find it easy to set up and configure. There is a setup wizard to help you add the hosts that have storage attached to them to the vSAN cluster. The process goes like this:

Once the hosts are added to the network, you activate vSAN and then decide how much of the storage to add to it.

You will now see a prompt asking whether you want to configure vSAN automatically or manually. If you take the automatic option, all available disks will be claimed by vSAN. Otherwise click manual, and you will have to create your disk groups manually through the disk management tab. To do this you select a host and then select the create disk group icon. You must add at least one SSD and one hard drive to a disk group, and at least three of the hosts need to have disk groups created.
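The disk group rule above can be sketched as a small validation check. This is illustrative only, with assumed names; the seven-capacity-device ceiling is VMware's documented per-group limit:

```python
def valid_disk_group(cache_ssds: int, capacity_devices: int) -> bool:
    """A disk group takes exactly one flash device for the cache tier
    and, per VMware's documented limit, one to seven capacity devices."""
    return cache_ssds == 1 and 1 <= capacity_devices <= 7

print(valid_disk_group(1, 3))  # True
print(valid_disk_group(0, 3))  # False - no cache device
print(valid_disk_group(1, 8))  # False - too many capacity devices
```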
You can automate this process with PowerCLI commands, including a set of full-featured vSAN PowerCLI cmdlets. vSAN 7 provides new SDK and API updates to enable more enterprise-class automation by supporting REST APIs.

Once you create the disk groups, the vSAN datastore will be available and will show the combined storage capacity of all the drives. At this point your vSAN is complete and ready to use.
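The "combined storage capacity" is simply the sum of the capacity-tier devices across the cluster; cache devices do not add to it. A minimal sketch, with an assumed data shape:

```python
def datastore_capacity_tb(cluster: dict) -> float:
    """Sum the capacity-tier device sizes across all hosts; cache
    devices do not contribute to datastore capacity.
    `cluster` maps host name -> list of capacity-device sizes in TB
    (an assumed shape for illustration)."""
    return sum(sum(devices) for devices in cluster.values())

hosts = {"esx01": [2.0, 2.0], "esx02": [2.0, 2.0], "esx03": [2.0, 2.0]}
print(datastore_capacity_tb(hosts))  # 12.0
```

Remember that this raw figure is before the replica overhead set by the per-VM availability policy.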

Creating a new disk group

Assuming your vSAN cluster is in manual mode, you would create a new disk group through the disk management tab: select the host, click the create disk group icon, then add at least one SSD for the cache tier and one or more devices for the capacity tier.



Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best