VPLEX Configuration

In practical terms, configuring and managing a VPLEX involves creating virtual volumes, then mapping them to hosts. The easiest way to do this, especially if you are a beginner, is to use the GUI provided. In the screenshot below, you can see the three main entity types used by VPLEX: Hosts, the servers to which you are going to attach your storage; Physical Storage, the actual hardware at the back end; and Virtual Storage, the volumes that are presented to the hosts.
The GUI also illustrates the way the various entities are linked together just like a file system directory tree.

Storage volumes are the physical LUNs that exist on the attached physical storage arrays. They are presented to the VPLEX via the back-end ports.
Storage volumes are split up into extents.
Devices are created from one or more extents.
Virtual volumes are built from devices. A standard virtual volume is built from a simple device while a distributed virtual volume is built from a distributed device.
Distributed devices are mirrored devices that are spread across two VPLEX clusters connected together into a metro-plex.


The VPLEX Command Line Interface takes a little getting used to, as it is arranged in a hierarchical context tree, a lot like a file system directory. Some commands are universal, like cd, ll and exit, but most commands will only work if you are in the appropriate context, or if you specify the command with the fully qualified context name.

The top level context is called root and it contains eight sub-contexts. The main sub-context is clusters, as it is used to manage all the storage components: clusters, devices, extents, system volumes and virtual volumes, as well as registered initiator ports, exported target ports, and storage views.
You can use the help command to display a list of all commands available from the current context. (help -G will show these commands, excluding the global commands). ll will list out all configuration items in the current context, so use ll in the root context to list out all eight sub-contexts.
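
As a quick illustration, a first session might start like this (the exact listing varies by VPLEX release, so treat this as a sketch):

VPlexcli:/> help -G
VPlexcli:/> ll

with ll showing the sub-contexts, which include clusters, distributed-storage, engines and management-server.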

Some basic CLI commands are shown below by high level function. You will need a userid with administration authority to run most of them. This also assumes that the initial VPLEX setup and configuration is complete.

Instead of navigating to a context or typing in the full location name, a better idea is to use aliases. Some aliases are provided by the VPLEX engineering team. For example, instead of typing in

ll /clusters/cluster-1/storage-elements/storage-arrays/

You can define an alias called 'showarrays', like this

VPlexcli:/> alias -n showarrays -t 'll /clusters/cluster-1/storage-elements/storage-arrays/* -f'

Now you just need to type in 'showarrays' to see which arrays are defined. This command checks cluster-1; you would need to amend it for cluster-2, or for a different cluster name if you are not using the default names. You can define your own aliases for commands that you use frequently.
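
The same pattern works for any context. For example, assuming the default cluster names, a matching alias for the second cluster could be defined as follows ('showarrays2' is just an illustrative name):

VPlexcli:/> alias -n showarrays2 -t 'll /clusters/cluster-2/storage-elements/storage-arrays/* -f'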

EMC supplies the showarrays alias, and aliases for other commonly used commands, in a rar file. See the EMC documentation for details. If you define an alias yourself as above it will work, but it will not survive a management server reboot. To make aliases persistent over a reboot, add them to the VPlexcli-init file. If you have a clustered VPLEX Metro system, the VPlexcli-init file will need to be updated on the management server at both cluster-1 and cluster-2.

Identify free backend storage

The context for storage volumes is /clusters/cluster name/storage-elements/storage-volumes
You need to substitute a cluster name in the path above. To find out what the clusters are, use the command

ll /clusters
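
Once you know the cluster names, you can move into the storage-volumes context with cd; for example, using the default first-cluster name:

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes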

Arguably the starting point for creating and attaching storage to a host is to find out what back end storage is available. To do this, use the command below. If you want storage from more than one cluster, run the command in the correct context for each cluster.
Assuming your clusters are called cluster-1 and cluster-2, and that you are in context /clusters/cluster-1/storage-elements/storage-volumes, run the command

ll -p **/storage-volumes

and you should see a listing of the storage volumes known to that cluster.

Create Extents from your selected backend storage

The Extents context is /clusters/cluster name/storage-elements/extents
and the command to create extents is

extent create --storage-volumes <csv list of storage volumes> --size <size> --block-offset <offset> --num-extents <integer>

Most of these parameters can be left to default. 'size' will default to the maximum possible, 'block-offset' is set automatically, and 'num-extents' will default to a single extent unless you tell it otherwise.
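
Putting that together, a minimal invocation that accepts all the defaults only needs a list of storage volumes. This is a sketch; the storage volume names here are illustrative:

VPlexcli:/clusters/cluster-1/storage-elements/extents> extent create --storage-volumes Symm1172_A10,Symm1172_AC4

This creates one full-size extent on each of the named storage volumes.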

Create Devices

The context for Devices is /clusters/cluster name/devices
To create a local device from extents, do the following:

Use the ll -p **/extents/ command to display the extents created on each cluster.
Identify the extents you are going to use, then use the local-device create command to create a local device with the specified name.
The syntax for the local-device create command is

local-device create --name <name> --geometry <geometry> --extents <extents> --stripe-depth <depth>

where name is the name of the new device, which must be unique across all clusters. The RAID geometry can be 'raid-0', 'raid-1', or 'raid-c'. extents is a CSV list of path names of the extents to be added to the device. stripe-depth is required for RAID-0 devices, and is specified in 4K blocks.
for example,

VPlexcli:/clusters/cluster-1/storage-elements/extents> local-device create --name Dev01_cluster1 --geometry raid-1 --extents /clusters/cluster-1/storage-elements/extents/extent_Symm1172_A10_1,/clusters/cluster-1/storage-elements/extents/extent_Symm1172_AC4_1

If you are creating devices to convert to distributed devices, then you need to run through this process again and allocate another local device with an identical configuration on a different cluster.
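
For example, a matching device on the second cluster might be created like this (a sketch; the extent names are illustrative):

VPlexcli:/clusters/cluster-2/storage-elements/extents> local-device create --name Dev01_cluster2 --geometry raid-1 --extents /clusters/cluster-2/storage-elements/extents/extent_Symm0487_A10_1,/clusters/cluster-2/storage-elements/extents/extent_Symm0487_AC4_1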

Define Distributed Devices

The context for Distributed devices is /distributed-storage/distributed-devices
To create a distributed device from two local devices with the same capacity, first use the ll -p **/devices command to display the available local devices.

Use the ds dd create command to create a distributed device.
The syntax for the command is:

ds dd create --name <name> --devices <devices> --logging-volumes <logging-volumes> --rule-set <rule-set>

The name must be unique across the entire VPLEX configuration. devices is a CSV list of path names of the local devices to add to the distributed device; they need to have the same capacity.

logging-volumes are required for distributed devices, to keep track of any volume changes that might occur if there is a problem with cluster communication. The logs are then used to resynchronise the volumes once the cluster is fixed. If you don't pick a logging volume, VPLEX will try to select one for you; if no logging volume is available, the command will fail.
A rule-set is a rule that determines what happens if connectivity between clusters is lost: it picks which cluster will continue to process I/O. If you don't pick a rule set, then VPLEX will use the cluster that is local to the management server.

For example, this command lets the logging volumes and rule sets default

VPlexcli:/> ds dd create --name DistDev01 --devices /clusters/cluster-1/devices/Dev01_cluster1,/clusters/cluster-2/devices/Dev01_cluster2
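
If you would rather pick the logging volume and rule set yourself, the same command can name them explicitly. This sketch assumes a logging volume called log_vol_1 already exists in the cluster-1 system-volumes context, and uses a rule set that keeps cluster-1 in service after a link failure:

VPlexcli:/> ds dd create --name DistDev01 --devices /clusters/cluster-1/devices/Dev01_cluster1,/clusters/cluster-2/devices/Dev01_cluster2 --logging-volumes /clusters/cluster-1/system-volumes/log_vol_1 --rule-set cluster-1-detaches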

Define Distributed Virtual Volumes

The context for Distributed virtual volumes is /clusters/cluster name/virtual-volumes
Use the ll -p **/distributed-devices command to display a list of distributed devices on all clusters.
Use the virtual-volume create command to create a virtual volume on a specified distributed device.
The syntax for the command is:

virtual-volume create --device <device> --set-tier {1|2}

device is the pathname of the device on which to configure the virtual volume.
set-tier refers to the performance tier characteristic of the disk that you are creating, where tier 1 is high performance, like DMX, and tier 2 is lower performance, like CLARiiON.
For example

VPlexcli:/> virtual-volume create --device /distributed-storage/distributed-devices/DEV01 --set-tier 1

Export a Virtual Volume to a Host

Host attachment to virtual volumes is based on Storage Views, which are logical zones that contain front end ports, host initiator ports, and virtual volumes. The context for storage views is /clusters/cluster name/exports/storage-views. Use the ls command in that context to display a list of all storage views.
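
For example, to list the views defined on the first cluster (assuming the default cluster name):

VPlexcli:/> ls /clusters/cluster-1/exports/storage-views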


Use the export storage-view addvirtualvolume command to add the virtual volume to the storage view.
The syntax for the command is:

export storage-view addvirtualvolume --view <storage-view> --virtual-volumes <csv list> [--force]

view is simply the context path of the storage view to which you wish to add your virtual volume, and virtual-volumes is a CSV list of the virtual volumes that you are attaching to this view. You would normally just expose a distributed volume to a single host, but the --force option is available if you are going to expose a distributed volume to more than one host.
For example:

VPlexcli:/> export storage-view addvirtualvolume --view APPLUV001 --virtual-volumes DEV01

Lascon updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best