There are six mainframe suppliers of z/OS virtual tape. The three traditional suppliers are IBM with its Virtual Tape System (VTS), Oracle with its Virtual Tape Storage System (VTSS) and Dell-EMC with the disk-only DLm. The Fujitsu Eternus C8000 series is unique in that it supports both z/OS mainframes and Open Systems.
The IBM mainframe offering is the TS7700. This is a virtual tape grid system with synchronous replication and automated fail-over. The cluster currently has six or eight nodes, depending on the model. Management is an integral part of the VTS and the intelligence is outboard of the mainframe servers. This means that the location and number of virtual volumes are virtualised, which will assist with content-based access. It is possible that IBM will add tape data indexing to the tape grid in future, outboard from the mainframe.
The IBM TS7720 is a VTH solution. It uses 3 TB SAS disks in a RAID 6 format, with a total potential capacity of 6,048 TB. It also has the option to backend this data with physical tape drives.
The IBM TS7760 is a VTA solution. It uses 8 TB SAS disks in a distributed RAID format, with a total potential capacity of 2,450 TB. The TS7760 always has backend tape drives and supports between 4 and 16 physical drives, and up to 496 virtual drives accessing up to 4,000,000 virtual volumes. If the TS7760 is in an 8 node cluster, then all these numbers can be multiplied by 8 to get maximum sizes. IBM intends to add a Cloud offload facility to the 7760.
All models can connect to TS1150 or 3592 tape drives in an IBM TS3500 tape library and support 16 Gbps IBM FICON.
The Oracle / StorageTek offering is the VSM-7. The new feature that Oracle has brought in with the VSM-7 is cloud connectivity. The backend configuration can be three-tier, with a disk cache, a physical tape layer and finally a cloud layer. Alternatively, a two-tier system can be defined with just a disk cache that feeds into a cloud backend. Oracle claims that this gives you a mainframe virtual tape system with unlimited storage.
Oracle has adopted a modular approach to mainframe tape virtualisation, the four modules being: tape libraries, tape drives, virtual tape disk buffers, and software. This means that most of the logic and intelligence is still held on the mainframe. The VSM uses two Sun SPARC M7 servers for the disk cache and just comes as a 2 node cluster. It uses 8 x 16Gb FICON for tape drive connection and Ethernet IP for replication. The VSM uses triple parity RAID to protect the high capacity drives from failure. Up to 512 virtual drives can be configured.
The controlling software is z/OS host-based, and Oracle claims that this gives them enormous scalability, with up to 256 VSM systems as a single image.
The VSM supports both synchronous and asynchronous replication of virtual tape volumes within a clustered VSM environment with VSM4 and upwards. Synchronous replication means that the data must be stored in both VSM caches before the virtual tape drive completes the rewind/unload command and passes control back to the application.
Both Asynchronous and Synchronous replication modes can be used simultaneously in the same VTSS by assigning tapes to different SMS management classes.
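The behavioural difference between the two modes can be sketched in a few lines of Python. Everything here is illustrative (the class names and the queue-based replicator are hypothetical, not VSM internals); what it shows is the contract described above: in synchronous mode, rewind/unload does not complete until the remote cache also holds the data.

```python
import queue
import threading

class VirtualTapeDrive:
    """Toy model of a virtual tape drive with sync or async replication."""

    def __init__(self, mode):
        self.mode = mode                  # "SYNC" or "ASYNC", e.g. set by an SMS management class
        self.local_cache = []             # the local VSM disk cache
        self.remote_cache = []            # the partner VSM's cache
        self.async_queue = queue.Queue()
        threading.Thread(target=self._replicator, daemon=True).start()

    def write(self, block):
        # data always lands in the local cache first
        self.local_cache.append(block)

    def rewind_unload(self):
        if self.mode == "SYNC":
            # Synchronous: the command does not complete until the
            # remote cache also holds every block.
            self.remote_cache.extend(self.local_cache)
        else:
            # Asynchronous: queue the volume for later transfer and
            # return control to the application immediately.
            self.async_queue.put(list(self.local_cache))
        return "unload complete"

    def _replicator(self):
        # background transfer thread used only in ASYNC mode
        while True:
            blocks = self.async_queue.get()
            self.remote_cache.extend(blocks)
            self.async_queue.task_done()

drive = VirtualTapeDrive("SYNC")
drive.write(b"record1")
drive.rewind_unload()
assert drive.remote_cache == [b"record1"]   # remote copy exists before unload returns
```

In asynchronous mode the same `rewind_unload()` call would return before the replicator thread had copied anything, which is exactly the exposure window that synchronous mode closes.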
Dell EMC offers a disk-only virtual tape solution that has the following components:
Between 1 and 8 Virtual Tape Engines (VTEs) can be configured, depending on your performance and drive requirements. z/OS sees these VTEs as a set of IBM tape drives, and the VTE can emulate IBM 3480, 3490, and 3590 drive types without needing any application modifications. This emulation is done by mainframe virtual tape emulation software called Virtuent.
Each VTE can be configured with up to four FICON channels, so a fully configured DLm8500 with 8 VTEs can be addressed by a maximum of 32 FICON channels. A DLm single-rack configuration can have 1 or 2 VTEs.
The backend data is stored on Data Domain virtual tape devices or Cloud storage. Up to 2 Data Domain devices can be configured. The DLm also has a facility to migrate older data to external storage, such as the Cloud.
Replication to a remote site can be configured in different ways.
Standard replication periodically creates a snapshot of the local DLm storage and file systems, then asynchronously transfers the data as it stood at snapshot time to the remote DLm. Any tapes created after a snapshot are not replicated until the next snapshot runs, so there is a window of potential data loss: if a disaster strikes, data created after the last snapshot is lost. For most tape work, this is not an issue.
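The exposure window can be made concrete with a tiny sketch. The function names here (`take_snapshot`, `replicate`) are hypothetical stand-ins, not DLm commands; the point is simply that anything written after the last snapshot is absent from the remote copy.

```python
def take_snapshot(filesystem):
    """Freeze a point-in-time copy of the local file systems."""
    return list(filesystem)

def replicate(snapshot, remote):
    """Asynchronously ship the frozen snapshot to the remote DLm."""
    remote.clear()
    remote.extend(snapshot)

local = ["TAPE001", "TAPE002"]
remote = []

snap = take_snapshot(local)
local.append("TAPE003")          # written after the snapshot was taken
replicate(snap, remote)

# TAPE003 is not at the remote site yet; it would be lost if the
# primary site failed before the next snapshot cycle ran.
assert "TAPE003" not in remote
assert remote == ["TAPE001", "TAPE002"]
```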
However, for some applications, tape data is critical and must be consistent between primary and secondary sites. For this data, DLm offers Guaranteed Replication (GR). GR can be implemented in three different ways:
GR on close - Guaranteed Replication forces a replication refresh whenever the host closes a tape file.
GR on SYNC - Guaranteed Replication forces a replication refresh on every SYNC command received from the host.
Replication on RUN - Replication on RUN forces a replication refresh when the host performs a RUN (Rewind-Unload) command.
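The three GR modes above differ only in which host event triggers the forced replication refresh, which can be sketched as a simple dispatch table. The class and event names here are illustrative, not Virtuent's implementation.

```python
class GuaranteedReplication:
    """Toy dispatcher: which host events force a replication refresh."""

    TRIGGERS = {
        "GR_ON_CLOSE": {"CLOSE"},   # refresh whenever the host closes a tape file
        "GR_ON_SYNC":  {"SYNC"},    # refresh on every SYNC command
        "GR_ON_RUN":   {"RUN"},     # refresh on Rewind-Unload
    }

    def __init__(self, mode):
        self.mode = mode
        self.refreshes = 0

    def host_event(self, event):
        if event in self.TRIGGERS[self.mode]:
            self.refreshes += 1     # force a replication refresh now

gr = GuaranteedReplication("GR_ON_SYNC")
for event in ["WRITE", "SYNC", "WRITE", "SYNC", "CLOSE"]:
    gr.host_event(event)
assert gr.refreshes == 2   # only the two SYNC commands triggered a refresh
```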
Fujitsu sells the virtual tape system previously known as CentricStor, which was renamed ETERNUS CS High End in 2009 (when the joint venture ended and Fujitsu Siemens Computers became Fujitsu), and renamed again to ETERNUS CS8000 in 2013. It supports a wide range of mainframe and major Unix and Windows operating systems, as well as the major tape libraries, all in parallel within one single system if required. Furthermore, ETERNUS CS8000 combines a VTL with a NAS option to consolidate backup, archiving, compliant archiving and second-tier file storage in one appliance. It is possible to cluster two VTLs in different sites and mirror the disk cache between them, giving robust remote disaster recovery.
The Eternus comes in several models, ranging from a single VCT that supports just 2 real tape drives, through disk-only models and ViS models for NAS support, up to large multi-purpose models that can support up to 112 real tape drives with a total capacity of 96 PB.
With the CS8000 VTL, you have a number of different options for storing your tape data, depending on your requirements for availability, speed of access and cost. These options are:
While the CS8000 is marketed by Fujitsu Technology Solutions, it was developed in Europe. Its market share is limited mostly to EMEA with some presence in Japan, but has little presence in the Americas.
Optica Technologies offers the zVT family of mainframe virtual tape solutions. They fully emulate physical 3490 and/or 3590 tape drives, and can connect to the mainframe via FICON or ESCON channels. The disk-only storage repository for the virtual tapes can be internal to the zVT, or attached externally via NFS or Fibre Channel (model dependent).
The zVT-5000-iNAS is a fully integrated solution that includes NFS backend storage, providing up to 1 PB of storage repository in a single frame. The 5000-iNAS includes features such as deduplication/compression, encryption, data resiliency and replication.
The next generation zVT 5000-iNAS High Availability (HA) multi-node base configuration contains 2 zVT Virtual Tape Nodes (VTNs) and 2 zVT Intelligent Storage Nodes (ISNs), the idea being to eliminate all single points of failure. This configuration is expandable from 2 to 8 VTNs.
The two VTNs provide 4 FICON channels and support 512 virtual tape drives.
The two ISNs contain 144TB of raw capacity, or 72TB of usable NFS storage capacity. When hardware compression and deduplication are applied, this is equivalent to 384TB of effective storage capacity, assuming a 4:1 benefit. The zVT ISN storage devices employ data encryption both at rest and in flight. The data can also be stored as WORM. The storage data can be replicated between ISNs for secure data management and disaster recovery purposes.
The cluster needs a central VOLSER database, which is held on the Primary VTN; the secondary VTNs use the Primary database for all file access requests. A locking mechanism prevents more than one VTN from accessing the same tape at the same time. If the Primary VTN fails, a Secondary VTN becomes the Primary via an automated sub-second selection process.
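The per-volume locking scheme can be sketched as follows. This is a hypothetical model, not Optica's implementation: a shared VOLSER table records which node holds each tape, and a mount attempt fails while another VTN holds the lock.

```python
import threading

class VolserDatabase:
    """Toy central VOLSER database with per-volume mount locks."""

    def __init__(self):
        self._locks = {}                      # volser -> owning node
        self._guard = threading.Lock()        # protects the lock table itself

    def try_mount(self, volser, node):
        """Return True if `node` acquires the volume, False if it is busy."""
        with self._guard:
            if volser in self._locks:
                return False                  # another VTN already holds this tape
            self._locks[volser] = node
            return True

    def unmount(self, volser):
        """Release the volume so another VTN can mount it."""
        with self._guard:
            self._locks.pop(volser, None)

db = VolserDatabase()                         # conceptually held on the Primary VTN
assert db.try_mount("VOL001", "VTN-2") is True
assert db.try_mount("VOL001", "VTN-3") is False   # lock blocks concurrent access
db.unmount("VOL001")
assert db.try_mount("VOL001", "VTN-3") is True
```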
If one of the storage ISNs fails, the cluster automatically moves filesystems from the failed ISN to its failover partner ISN. Rather than use traditional RAID, the zVT distributes data across the cluster with redundancy, so that it can cope with 3 concurrent disk failures per ISN while maintaining IO functionality. Optica claims this means that disk rebuilds are faster, and the cluster has 150% better protection than traditional RAID-6.
The zVT-5000-FLEX supports attachment to any customer-provided NFS/NAS or Fibre Channel/SAN storage. This allows an existing open systems storage array to also be used to support mainframe tape, allowing existing open systems storage and replication processes to also include mainframe tape applications.
The 5000-FLEX comes standard with hardware compression and also supports deduplication, replication and encryption. The zVT 5000-FLEX can be deployed in a multi-node configuration with NFS storage for additional scalability and resiliency.
The zVT-3000i is an all-in-one mainframe virtual tape offering with 4TB of internal storage in a 2U chassis. Replication to a remote zVT is fully supported. Both the zVT-5000-iNAS and zVT-5000-FLEX support HA multi-node clustering when connected to external NFS/NAS storage.
Luminex MVTe is a disk-based mainframe virtual tape offering with highly available, high performance storage options that scale to petabytes of virtual tape capacity. The CGX virtual tape control units can emulate 3490 or 3590 devices and support both 8Gb FICON and ESCON channels. The entire solution is transparent to mainframe applications.
There are a number of options for backend storage, including Luminex MVT, HDS and NetApp disks. MVTe’s CloudTAPE can also be used as a final tier for infrequently-accessed and long-term retention data, such as archives or additional copies of tape data. CloudTAPE supports the major Cloud providers, which lets you use your existing cloud storage services for mainframe data and so take advantage of economies of scale. CloudTAPE now supports intra-cloud storage tiering, DataStream Intelligence metadata tagging, deeper Luminex Replication and Monitoring integration, and versioning support.
Data replication is provided by Luminex Synchronous Tape Matrix (STM). This provides synchronous mirrored writes to multiple storage systems, and host I/O can be serviced from any available storage system within the layer. There is no 'primary' and 'secondary' configuration; any MVTe control unit can service host I/O for any MVTe storage at any time. This allows operations to continue, without interruption, even in the event of multiple component failures across all layers of computing, connectivity and storage, including a full site failure.
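The "no primary, no secondary" idea above can be illustrated with a short sketch: every write goes synchronously to all storage systems, and a read succeeds from whichever copy is still reachable. The class and site names are hypothetical, not Luminex's STM implementation.

```python
class MirroredStorage:
    """Toy model: synchronous mirrored writes, reads from any surviving copy."""

    def __init__(self, sites):
        self.copies = {site: {} for site in sites}

    def write(self, volser, data):
        # synchronous mirrored write: every site holds the data before return
        for copy in self.copies.values():
            copy[volser] = data

    def read(self, volser, failed_sites=()):
        # any available storage system can service the host I/O
        for site, copy in self.copies.items():
            if site not in failed_sites and volser in copy:
                return copy[volser]
        raise IOError("no available copy of " + volser)

stm = MirroredStorage(["site_A", "site_B"])
stm.write("VOL100", b"payload")

# even with site_A down, the host read still succeeds from site_B
assert stm.read("VOL100", failed_sites=("site_A",)) == b"payload"
```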
A nice feature here is the ability to do non-disruptive Disaster Recovery Testing. By selecting 'DR Start' from a GUI, the MVTe at the disaster recovery site will prepare a DR environment allowing read/write activity, without affecting the original data, and all without stopping replication from the primary data center.
Just for completeness, some vendors offer a software only solution. An example is CA with its Brightstor product.
The following table compares the Oracle VSM 7, the IBM TS7760, the Fujitsu CS8400, the Dell EMC DLm 8500, the Optica zVT 5000-iNAS and the Luminex MVT. The table includes the terms used by each supplier to describe the components.
| ||Oracle VSM 7||IBM TS7760||Fujitsu CS8400||Dell EMC DLm 8500||Optica zVT 5000-iNAS||Luminex MVT 2.0|
|Maximum number of virtual drives||512 on a single VSM, up to 256 VSMs, so 131,072 maximum||256-2048 (256 per node; the 7760 can have between 1 and 8 nodes and scales linearly)||1280 per cluster||512 per VTE, maximum 3072||256-2048 (256 per zVT node x 8)||4096 per FICON Channel|
|Maximum number of virtual volumes in one virtual tape system||100,000 active in the VTSS cache, no effective limit for VTVs migrated to MVCs; Virtual Tape Volume (VTV)||4,000,000; Virtual Volumes||3,000,000; Virtual Volumes||unlimited||Over 1,500,000 virtual volumes supported in production. Unlimited support based on lab testing.||Not stated|
|Back end storage||32 drives per VSM, called Real Tape Drives (RTD), of T10000B or T10000C; tapes are called Multi-Volume Cartridges (MVC)||128 tape drives of TS1150, TS1140, TS1130 or TS1120; tapes are called "Stacked Cartridges"||112 tape drives in 10 libraries; IBM, Oracle, DLT drives supported.||Backend is 1 or 2 EMC Data Domain libraries.||NFS/NAS storage with HYDRAstor technology||Virtual tape data uses RAID6, OS data uses mirrored hard drives, Cloud storage for deep archive|
|Server Support||z/OS variants||z/OS variants||z/OS + various UNIX, LINUX and Windows||z/OS variants||z/OS variants||z/OS, z/VM, z/VSE, OS/390|
|Library Support||Oracle libraries||IBM libraries||Fujitsu, IBM, Oracle, Quantum libraries||Disk only, using Data Domain virtual tape libraries||up to 32 zVT Libraries per node and support for LTO 4,5 and 6 connectivity||No physical Tape. Different VTL Open Systems supported|
|Maximum disk cache capacity (native, not compressed or deduplicated)||825 TB native using 8TB HDDs, scaling up to 211 PB over 256 systems; Virtual Tape Subsystem Buffer (VTSB)||2.4 PB, or 8.96 PB in an 8 node cluster; Tape Volume Cache||19 TB up to 96 PB; Tape Volume Cache||2,000 TB native with Data Domain storage||8TB - 1PB native in a single frame, scaling to 11.88PB native||Support for a wide range of disk systems (FC / NAS / VTL) so depends on solution.|
|Controlling Software||HSC software runs on z/OS Host||AIX based Virtual Tape Controller software, within the VTS.||Eternus Software runs within the VTS.||EMC Virtuent software||zVT operating system (no special host software required). Integration and testing completed with z/OS tape management software and tools||Management Software runs on Gateway and external Appliance|
|Connectivity||8*16Gb FICON between host and tape drives; 4*GbE and 4*FC for replication||8-64*16Gb FICON to RTDs.||4-40*16Gb FC and FICON||32*16Gb FICON||2-16*8Gb FICON or 4-32 ESCON||2*8Gb FICON per Gateway; Backend: FC or GbE|
|Scalability||As the controlling software and database are on the host, up to 256 VSMs can be clustered together, and this appears seamless to the z/OS operating system, scaling from 1.2PB to 409PB assuming 4:1 compression. The backend tape drives are located in an STK siloplex, which has almost unlimited capacity||Two VTS systems can be defined to one TS3500 tape library.||Fujitsu states that the grid architecture makes the device extremely scalable, with the CS8200 being a scale-up system and the CS8400 a scale-out system.||Can be extended by adding disk capacity to Data Domain libraries||Up to 8 zVT nodes in a HA multi-node cluster||Given by attached disk system capacity or VTL capacity and functionality|
|Mirroring capability||Either synchronous or asynchronous||A combination of synchronous (tape rewind not complete until both virtual tape copies are written) and asynchronous, as selected by DFSMS policies.||Asynchronous replication and synchronous mirroring. The CS8400 provides comprehensive 2-site mirroring support.||DLm replication (a separately licensed feature)||Asynchronous replication for 5000-iNAS (synchronous replication supported with 5000-FLEX and customer storage)||Data replication is provided by Luminex STM|
|GDPS support||Full support, no additional scripting needed||Yes, VTS can be fully integrated into GDPS, so second site tape work would be 'frozen' if there was a problem at the primary site||No data||No, but utilises EMC's GDDR software instead||No||No|