There are six mainframe suppliers of z/OS virtual tape. The three traditional suppliers are IBM with its Virtual Tape System (VTS), Oracle with its Virtual Tape Storage System (VTSS) and Dell EMC with the disk-only DLm. The Fujitsu Eternus CS8000 series is unique in that it supports both z/OS mainframes and Open Systems.
The IBM mainframe offering is the TS7700. This comprises a virtual tape grid system with synchronous replication and automated fail-over. A cluster currently has six or eight nodes, depending on the model. Management is an integral part of the VTS and the intelligence is outboard of the mainframe servers. This means that the location and number of virtual volumes are virtualised, which will assist with content-based access. It is possible that IBM will add tape data indexing to the tape grid in future, outboard from the mainframe.
The IBM TS7720 is a VTH solution. It uses 3 TB SAS disks in a RAID 6 format, with a total potential capacity of 6,048 TB. It also has the option of backing this data onto physical tape drives.
The IBM TS7760 is a VTA solution. It uses 8 TB SAS disks in a distributed RAID format, with a total potential capacity of 2,450 TB. The TS7760 always has backend tape drives and supports between 4 and 16 physical drives, and up to 496 virtual drives accessing up to 4,000,000 virtual volumes. If the TS7760 is in an 8 node cluster, then all these numbers can be multiplied by 8 to get maximum sizes. IBM intends to add a Cloud offload facility to the 7760.
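The single-node and 8-node figures quoted above are related by a simple multiplier. A quick sketch (illustrative only, using the per-node numbers from the text) shows the scaling:

```python
# Per-node TS7760 maximums taken from the text above (illustrative only).
PER_NODE = {
    "physical_tape_drives": 16,
    "virtual_drives": 496,
    "virtual_volumes": 4_000_000,
}

def grid_maximums(per_node: dict, nodes: int = 8) -> dict:
    """Scale per-node maximums to an N-node grid, as the text describes."""
    return {key: value * nodes for key, value in per_node.items()}

print(grid_maximums(PER_NODE))
# an 8-node grid supports up to 128 physical drives and 3,968 virtual drives
```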
All models can connect to TS1150 or 3592 tape drives in an IBM TS3500 tape library and support 16 Gbps IBM FICON.
The Oracle / StorageTek offering is the VSM-7. The new feature that Oracle has brought in with the VSM-7 is cloud connectivity. The backend configuration can be three tier, with a disk cache, a physical tape layer and finally a cloud layer. An alternative is to define a two tier system with just a disk cache that feeds into a cloud backend. Oracle claims that this will give you a mainframe virtual tape system with unlimited storage.
Oracle has adopted a modular approach to mainframe tape virtualisation, the four modules being: tape libraries, tape drives, virtual tape disk buffers, and software. This means that most of the logic and intelligence is still held on the mainframe. The VSM uses two Sun SPARC M7 servers for the disk cache and just comes as a 2 node cluster. It uses 8 x 16Gb FICON for tape drive connection and Ethernet IP for replication. The VSM uses triple parity RAID to protect the high capacity drives from failure. Up to 512 virtual drives can be configured.
The controlling software is z/OS host-based, and Oracle claims that this gives them enormous scalability, with up to 256 VSM systems as a single image.
The VSM supports both synchronous and asynchronous replication of virtual tape volumes within a clustered VSM environment with VSM4 and upwards. Synchronous replication means that the data must be stored in both VSM caches before the virtual tape drive completes the rewind/unload command and passes control back to the application.
Both Asynchronous and Synchronous replication modes can be used simultaneously in the same VTSS by assigning tapes to different SMS management classes.
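The behavioural difference between the two modes can be sketched as follows. This is a minimal illustrative model, not Oracle's implementation; the names (VsmCache, rewind_unload, drain) are invented for the example:

```python
# Minimal sketch of synchronous vs asynchronous virtual tape replication.
# VsmCache, rewind_unload and drain are invented names for illustration.

class VsmCache:
    """Stands in for one VSM disk cache."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}

    def store(self, volser, data):
        self.volumes[volser] = data


def rewind_unload(volser, data, local, remote, mode, pending):
    """Model the rewind/unload command for one virtual tape volume."""
    local.store(volser, data)
    if mode == "sync":
        # Synchronous: data must be in BOTH caches before the command
        # completes and control returns to the application.
        remote.store(volser, data)
    else:
        # Asynchronous: the command completes after the local write;
        # the remote copy is made later from a pending queue.
        pending.append((volser, data))
    return "complete"


def drain(pending, remote):
    """Later replication pass for asynchronously written volumes."""
    while pending:
        volser, data = pending.pop(0)
        remote.store(volser, data)
```

In synchronous mode the remote copy exists at the moment the application regains control; in asynchronous mode it exists only after the pending queue has drained.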
Dell EMC offers a disk-only virtual tape solution that has the following components:
Between 1 and 8 Virtual Tape Engines (VTEs) can be configured, depending on your performance and drive requirements. z/OS sees these VTEs as a set of IBM tape drives, and the VTE can emulate IBM 3480, 3490, and 3590 drive types without needing any application modifications. Each VTE can be configured with up to four FICON channels, so a fully configured DLm8500 with 8 VTEs can be addressed by a maximum of 32 FICON channels.
The backend data is stored on a combination of VMAX disk devices, Data Domain virtual tape devices or Cloud storage. Up to 2 devices can be configured, so valid combinations are:
One or two Data Domains with data deduplication
One VMAX All Flash system
One VMAX All Flash system and one Data Domain library
One Data Domain library and Elastic Cloud storage (ECS)
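The combinations above can be encoded as a small validity check. This is purely illustrative; the device labels are shorthand for the example, not Dell EMC identifiers:

```python
from collections import Counter

def is_valid_backend(devices: list) -> bool:
    """Check a proposed DLm backend against the combinations in the text."""
    if len(devices) > 2:  # up to 2 backend devices can be configured
        return False
    proposed = Counter(devices)
    valid_combinations = [
        Counter({"data_domain": 1}),                 # one Data Domain
        Counter({"data_domain": 2}),                 # two Data Domains
        Counter({"vmax": 1}),                        # one VMAX All Flash
        Counter({"vmax": 1, "data_domain": 1}),      # VMAX + Data Domain
        Counter({"data_domain": 1, "ecs": 1}),       # Data Domain + ECS cloud
    ]
    return proposed in valid_combinations

print(is_valid_backend(["vmax", "data_domain"]))  # True
print(is_valid_backend(["vmax", "vmax"]))         # False
```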
Replication to a remote site depends on what type of backend storage is configured. VMAX storage utilises SRDF while Data Domain uses internal replication.
Fujitsu sells the virtual tape system previously known as CentricStor, which was renamed ETERNUS CS High End in 2009 (when the joint venture ended and Fujitsu Siemens Computers became Fujitsu) and renamed again to ETERNUS CS8000 in 2013. This supports a great range of existing mainframe and major Unix and Windows operating systems, as well as major tape libraries, all in parallel with one single system if required. Furthermore, the ETERNUS CS8000 combines VTL and a NAS option to consolidate backup, archiving, compliant archiving and second-tier file storage in one appliance. It is possible to cluster two VTLs in different sites and mirror the disk cache between them, giving robust remote disaster recovery.
The Eternus comes in several models, ranging from a single VCT that supports just 2 real tape drives, through disk-only models and ViS models for NAS support, to large multi-purpose models that can support up to 112 real tape drives with a total capacity of 3,474 TB.
Optica Technologies offers the zVT family of mainframe virtual tape solutions. These fully emulate physical 3490 and/or 3590 tape drives, and can connect to the mainframe via FICON or ESCON channels. The disk-only storage repository for the virtual tapes can be internal to the zVT, or attached externally via NFS or Fibre Channel (model dependent).
The zVT-5000-iNAS is a fully integrated solution that includes NFS backend storage, providing up to 1 PB of storage repository in a single frame. The 5000-iNAS includes advanced features such as deduplication/compression, encryption, data resiliency and replication.
The zVT-5000-FLEX supports attachment to any customer-provided NFS/NAS or Fibre Channel/SAN storage. This allows an existing open systems storage array to also be used to support mainframe tape, allowing existing open systems storage and replication processes to also include mainframe tape applications.
The zVT-3000i is an all-in-one mainframe virtual tape offering with 4 TB of internal storage in a 2U chassis. Replication to a remote zVT is fully supported. Both the zVT-5000-iNAS and zVT-5000-FLEX support HA multi-node clustering when connected to external NFS/NAS storage.
Luminex MVTe is a disk-based mainframe virtual tape offering with highly available, high performance storage options that scale to petabytes of virtual tape capacity. The CGX virtual tape control units can emulate 3490 or 3590 devices and support both 8Gb FICON and ESCON channels. The entire solution is transparent to mainframe applications.
There are a number of options for backend storage, including Luminex MVT, HDS and NetApp disks. MVTe’s CloudTAPE can also be used as a final tier for infrequently-accessed and long-term retention data, such as archives or additional copies of tape data.
Data replication is provided by Luminex Synchronous Tape Matrix (STM). This can provide synchronous mirrored writes to multiple storage systems, and also provide host I/O capabilities from any available storage system within the layer. Any MVTe control unit can service host I/O for any MVTe storage at any time, so operations can continue without interruption, even in the event of a site failure.
A nice feature here is the ability to do non-disruptive Disaster Recovery Testing. By selecting 'DR Start' from a GUI, the MVTe at the disaster recovery site will prepare a DR environment allowing read/write activity, without affecting the original data, and all without stopping replication from the primary data center.
Just for completeness, some vendors offer a software-only solution. An example is CA with its BrightStor product.
The following table compares the Oracle VSM 6, the IBM TS7760, the Fujitsu CS8400, the Dell EMC DLm 8500, the Optica zVT 5000-iNAS and the Luminex MVTe. The table includes the terms used by each supplier to describe the components.
| ||Oracle VSM 6||IBM TS7760||Fujitsu CS8400||Dell EMC DLm 8500||Optica zVT 5000-iNAS||Luminex|
|Maximum number of virtual drives||512 on a single VSM, up to 256 VSMs so 131,072 maximum||256-2048 (these numbers refer to 1 or 8 node clusters; the 7760 can have any number of nodes between 1 and 8 and so scales appropriately)||1280 per cluster||256 per VTE, so 2048 with an 8 node cluster||256 - 2048 (256 per zVT node x 8)||4096|
|Maximum number of virtual volumes in one virtual tape system||100,000 active in the VTSS cache, no effective limit for VTVs migrated to MVCs; Virtual Tape Volume (VTV)||4,000,000; Virtual Volumes||3,000,000; Virtual Volumes||unlimited||Over 1,500,000 virtual volumes supported in production. Unlimited support based on lab testing.||No data|
|Back end storage||32 drives per VSM, called Real Tape Drives (RTD), of T10000B or T10000C; Tapes are called Multi-Volume Cartridge (MVC)||128 tape drives of TS1150, TS1140, TS1130 or TS1120. Tapes are called "Stacked Cartridge"||112 tape drives in 10 Libraries; IBM, Oracle, DLT drives supported.||Backend is disk only and can be EMC VMAX, EMC Data Domain or EMC Cloud. See text above for details.||NFS/NAS storage with HYDRAstor technology||Different OEM Disk System and VTL are supported|
|Server Support||z/OS variants||z/OS variants||z/OS + various UNIX, LINUX and Windows||z/OS variants||z/OS variants||z/OS variants|
|Library Support||Oracle libraries||IBM libraries||Fujitsu, IBM, Oracle, Quantum libraries||Disk only, but can support Data Domain virtual tape libraries||up to 32 zVT Libraries per node and support for LTO 4,5 and 6 connectivity||No physical Tape. Different VTL Open Systems supported|
|Maximum disk cache capacity (native not compressed or deduplicated)||825TB native using 8TB HDDs. will scale up to 211 PB over 256 systems; Virtual Tape Subsystem Buffer (VTSB)||2,400 TB native; Tape Volume Cache||3,650 TB native; Tape Volume Cache||2,000 TB native with Data Domain storage||8TB - 1PB native in a single frame and scales to 11.88PB native||Support for a wide range of disk systems (FC / NAS / VTL) so depends on solution.|
|Controlling Software||HSC software runs on z/OS Host||AIX based Virtual Tape Controller software, within the VTS.||Eternus Software runs within the VTS.||EMC z/OS Storage Manager||zVT operating system (no special host software required). Integration and testing completed with z/OS tape management software and tools||Management Software runs on Gateway and external Appliance|
|Connectivity||8*16Gb FICON between host and tape drives. 4*GbE and 4*FC for replication||8-48*16Gb FICON to RTDs.||4-40*16Gb FC and FICON||32*16Gb FICON||2-16*8Gb FICON or 4-32 ESCON||2*8Gb FICON per Gateway; Backend: FC or GbE|
|Scalability||As the controlling software and database are in the host, up to 256 VSMs can be clustered together, and this appears seamless to the z/OS operating system, scaling from 1.2PB to 409PB assuming 4:1 compression. The backend tape drives are located in an STK siloplex, which has almost unlimited capacity||Two VTS systems can be defined to one TS3500 tape library.||Fujitsu states that the grid architecture makes the device extremely scalable, with the CS8200 being a scale-up system and the CS8400 a scale-out system.||Limited to VMAX capacity, can be extended by interfacing to Data Domain libraries||Up to 8 zVT nodes in an HA multi-node cluster||Given by attached disk system or VTL capacity and functionality|
|Mirroring capability||Either synchronous or asynchronous||A combination of synchronous (tape rewind not completed until both virtual tape copies are written) and asynchronous, as selected by DFSMS policies.||Asynchronous replication and synchronous mirroring. The CS8800 provides comprehensive 2-site mirroring support.||Uses synchronous SRDF to mirror between VMAX subsystems||Asynchronous replication for 5000-iNAS (synchronous replication supported with 5000-FLEX and customer storage)||Gateway replication|
|GDPS support||Full support, no additional scripting needed||Yes, VTS can be fully integrated into GDPS, so second site tape work would be 'frozen' if there was a problem at the primary site||No data||No, but utilises EMC's GDDR software instead||No||No|