VMware Virtual SAN (vSAN) is a high-performance enterprise storage solution for hyper-converged infrastructure. vSAN lets you merge SSDs and conventional HDDs attached to local ESXi servers into a shared, highly resilient datastore accessible to all vSphere cluster nodes. Previously, VMware administrators had to rely on SAN, NAS or DAS to provide high availability. vSAN eliminates the need for a dedicated external shared storage array: it adds a software layer that uses the local disks of individual servers to deliver the same fault tolerance and feature set as a SAN.
vSAN support is integrated into the hypervisor kernel, which results in higher IOPS and faster data movement (most other virtual storage solutions are implemented as a separate appliance running on top of the hypervisor). In this article, we'll cover the main VMware vSAN features and show you how to deploy and configure a vSAN 6.5 cluster.
VMware vSAN Main Features
- Integrated security and high data availability with fault tolerance, asynchronous replication over long distances and stretched clusters of geographically distributed sites;
- Using distributed RAID and cache mirroring to protect data against the loss of a single drive, a server or even a whole rack;
- Minimization of storage latency by accelerating read/write operations with a built-in cache stored on local SSDs in each server;
- Software deduplication and data compression with minimal CPU and memory overhead;
- The ability to increase storage capacity and performance without any downtime by adding new servers or drives;
- Virtual machine storage policies that enable automatic storage balancing, resource allocation and QoS;
- Full integration with VMware stack, including vMotion, DRS, High Availability, Fault Tolerance, Site Recovery Manager, vRealize Automation and vRealize Operations;
- iSCSI connection support;
- Direct connection of two nodes with a crossover cable (VSAN Direct Connect);
- Full support of vSphere Integrated Containers to run containers for DevOps on vSAN;
- No need to deploy any additional components, virtual machines or management interfaces.
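The per-virtual-disk storage policies mentioned above also have host-level defaults that can be inspected directly from the ESXi shell. As a sketch (the exact output fields vary by build):

```shell
# Show the default vSAN storage policy for each object class
# (cluster, vdisk, vmnamespace, vmswap, ...), including the
# default hostFailuresToTolerate value applied to new objects
esxcli vsan policy getdefault
```

Policies assigned through vSphere (VM Storage Policies) override these defaults per virtual machine or per virtual disk.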
VMware vSAN: System Requirements
- VMware vCenter Server 6.5 and ESXi 6.5 hosts;
- At least 3 hosts in a cluster (64 maximum); a two-host configuration is possible, but it requires a separate witness host;
- Each ESXi server in a vSAN cluster must have at least one SSD (flash drive) for cache and at least one SSD/HDD for data;
- SATA/SAS HBA or RAID controller in the pass-through or RAID 0 mode;
- At least a 1 Gbps network adapter (10 Gbps recommended);
- All hosts must be connected to the vSAN network over an L2 or L3 network;
- Multicast must be enabled on physical switches processing vSAN traffic;
- Both IPv4 and IPv6 are supported;
- For compatibility information on specific hardware, see the VMware Compatibility Guide on the VMware website.
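Several of these requirements can be checked from the ESXi shell before you start. A minimal sketch (device names and output columns differ between hosts):

```shell
# Confirm the ESXi version on each host (must be 6.5 for vSAN 6.5)
vmware -vl

# Verify NIC link speed: 1 Gbps minimum, 10 Gbps recommended
esxcli network nic list

# List local storage devices and check which are detected as SSDs
# (at least one SSD per host is needed for the cache tier)
esxcli storage core device list | grep -E "Display Name|Is SSD|Size"
```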
VMware vSAN Licensing
vSAN is licensed per CPU, per virtual machine, or per concurrent user, and is delivered in three editions: Standard, Advanced and Enterprise.
- Enterprise is required to use QoS and stretched clusters;
- Advanced license is needed to support deduplication, compression and RAID 5/6;
- Standard provides basic features.
A vSAN license may be used with any vSphere 6.5 edition.
There are additional ways to save money: the Virtual SAN for ROBO package, or 25-VM packs of Standard or Advanced licenses for remote branch offices. Detailed information on vSAN licensing is available on the VMware website.
vSAN Network and Port Configuration
Before configuring vSAN, a VMkernel port for vSAN traffic must be created on each host in the cluster. In the vSphere web client, select each server on which you want to use vSAN. Go to Configure -> Networking -> VMKernel Adapters and click Add Host Networking. Make sure that VMkernel Network Adapter is selected as the connection type and click Next.
Create a new virtual switch (New standard switch) and click Next.
Using the green plus icon, add physical adapters to the switch. In production environments, it is recommended to provide additional redundancy by using several physical NICs.
Specify the VMkernel port name and its VLAN ID if necessary. Check Virtual SAN option and click Next.
Specify VMkernel network options.
If you are configuring a vSAN network in a test lab with a limited number of physical interfaces, select the Management Network and check Virtual SAN in the list of enabled services. In this configuration, vSAN traffic will share the common management network, which is not recommended for production.
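The same VMkernel configuration can be performed from the ESXi shell with esxcli. A sketch of the steps above, assuming vmnic1 as the uplink and 192.168.10.11/24 as an example vSAN address:

```shell
# Create a standard vSwitch and attach a physical uplink to it
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Create a port group and a VMkernel interface with a static IPv4 address
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vSAN
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vSAN
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Tag the VMkernel interface for vSAN traffic
esxcli vsan network ipv4 add -i vmk1
```

Repeat on every host in the cluster, adjusting the IP address per host.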
VMWare vSAN Configuration
As mentioned above, configuring vSAN does not require installing any additional software on the hypervisor: all features are built in. All an administrator has to do is configure vSAN using the vSphere web client (the HTML5 client does not yet support vSAN).
To enable vSAN, find the cluster you need in the vSphere console and go to the Configure tab. Expand the Virtual SAN section and select General. It should indicate that Virtual SAN is not enabled. Click Configure.
By default, all suitable disks are claimed for the vSAN storage automatically. To select the disks manually, change the Disk Claiming mode to Manual. Here you can also enable deduplication and compression and configure fault tolerance options.
On the Network Validation page, you will see a confirmation that each host in the cluster is connected to the vSAN network.
Check the selected settings and click Finish. Once the task completes, vSAN will combine the local disks of the servers in the selected cluster into a distributed vSAN storage. This storage is presented as a single vSAN datastore, to which you can immediately deploy virtual machines. vSAN settings can be changed later in the same section (cluster tab: Configure -> Virtual SAN).
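After the wizard finishes, the resulting configuration can be verified from the shell of any cluster host. A sketch of the checks (output details vary by build):

```shell
# Check vSAN cluster membership on this host:
# shows Enabled state, the host's role (master/backup/agent)
# and the UUIDs of the member hosts
esxcli vsan cluster get

# List the local disks this host has contributed to vSAN,
# including which device serves the cache tier
esxcli vsan storage list

# The vSAN datastore should appear among mounted filesystems
esxcli storage filesystem list
```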