DS8880 Architecture

Internally, the DS8880 architecture consists of host adapters that connect to the outside world, device adapters and drive enclosures for the data storage, and a large memory cache to speed up access. The whole lot needs to be connected together, and needs processing power to run it.

Central Processor and Cache unit (CPC Complex)

Processing power is supplied by Dual 6 to 48 core POWER8-based controllers, which use 4-way Simultaneous Multi-Threading and so can handle up to 64 concurrent threads. All DS8880 machines have two processor complexes running as a redundant pair, so if one complex fails, the DS8880 will continue to run on the other complex.

Cache size depends on the CPU configuration, with a 16 way processor using a significantly larger cache than a 2 way. Cache is used to speed up IO operations, and the DS8880s use 3 types of caching algorithms:
Sequential Prefetching in Adaptive Replacement (SARC), a self-learning algorithm designed to make sure that the cache is used efficiently
Adaptive Multi-stream Prefetching (AMP), which manages the prefetching of sequential data streams into the cache
Intelligent Write Caching (IWC), designed to order writes so that disk head movement is minimised
SARC, AMP and IWC all work together to optimise performance and cache usage.
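IBM does not publish the internals of these algorithms here, but the core idea behind SARC, partitioning the cache between random and sequential data and self-tuning the split, can be sketched. The class below is a toy illustration, not IBM's implementation; the class name, the one-entry-at-a-time adaptation rule and the eviction policy are all invented for the example.

```python
# Toy sketch of the SARC idea: two LRU lists (random and sequential) share
# one cache, and an adaptive target shifts capacity toward whichever list
# is producing hits. Purely illustrative - not the real IBM algorithm.
from collections import OrderedDict

class SarcLikeCache:
    """Two-list cache: 'rnd' and 'seq' LRU lists share a fixed capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.target_seq = capacity // 2        # adaptive split point
        self.lists = {"rnd": OrderedDict(), "seq": OrderedDict()}

    def access(self, block, sequential):
        """Record an access; return True on a cache hit."""
        kind = "seq" if sequential else "rnd"
        lst = self.lists[kind]
        hit = block in lst
        if hit:
            lst.move_to_end(block)             # standard LRU promotion
            # a hit in a list argues for giving that list more space
            if kind == "seq":
                self.target_seq = min(self.capacity - 1, self.target_seq + 1)
            else:
                self.target_seq = max(1, self.target_seq - 1)
        else:
            lst[block] = True
            self._evict()
        return hit

    def _evict(self):
        # evict LRU entries from whichever list exceeds its target share
        while len(self.lists["rnd"]) + len(self.lists["seq"]) > self.capacity:
            seq_over = len(self.lists["seq"]) > self.target_seq
            victim = self.lists["seq" if seq_over else "rnd"]
            victim.popitem(last=False)         # drop least recently used
```

The point of the self-tuning split is that a burst of sequential prefetch data cannot flush the random working set out of cache, and vice versa.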

Drive Enclosures

There are 2 types of drive enclosures: HPFE for flash drives, and standard enclosures for spinning disk.

HPFE Gen2 (High Performance Flash Enclosure) can hold 400GB to 7.68TB high capacity flash drives. They are installed in pairs, with up to 16 pairs in an all flash DS8888 or 8 pairs in a DS8886 hybrid or 4 pairs in a DS8884.

Standard Drive enclosures hold 24 x 2.5 inch small form factor drives. Drive options range from 300/600GB high performance 15k rpm to 6TB 3.5 inch high capacity nearline drives. A maximum of 1,536 2.5 inch, or 768 3.5 inch drives can be installed if the base unit is augmented with 4 extension units.

Physical disks are installed in groups of 16, called disk groups or enclosures. Each enclosure contains two fibre channel switches, and every disk has two fibre connections, one to each switch. These switches are then connected to two device adapters over two independent RIO-G loops. Disk enclosures are installed into the front and rear of the DS8880 frame, and two enclosures are then paired up to form four Array Sites. An Array Site contains 8 disks, 4 from a front enclosure and 4 from a rear enclosure, so an array site will also span two RIO-G loops. Array Sites are numbered S1, S2 etc. n.b. there is no S0!
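The front/rear pairing can be made concrete with a short sketch. Assuming two 16-disk enclosures, the function below carves them into four 8-disk array sites, 4 disks from each enclosure; the disk names are invented for the example, and only the 4+4 split and the S1-based numbering come from the text.

```python
# Illustrative layout: pair a front and a rear 16-disk enclosure into
# four array sites of 8 disks each (4 from front + 4 from rear).
def array_sites(front_disks, rear_disks, first_site=1):
    """Group 16 front + 16 rear disks into 4 array sites of 8 disks."""
    assert len(front_disks) == len(rear_disks) == 16
    sites = {}
    for i in range(4):
        name = f"S{first_site + i}"        # numbering starts at S1, no S0
        sites[name] = front_disks[4*i:4*i+4] + rear_disks[4*i:4*i+4]
    return sites

front = [f"F{n}" for n in range(16)]       # hypothetical disk labels
rear = [f"R{n}" for n in range(16)]
sites = array_sites(front, rear)
# every site draws disks from both enclosures, so it spans both loops
```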

Internal Connectivity

All the components are connected together with PCI Express Generation 3 I/O enclosures which use point-to-point 8Gb/s PCI Express cables. Inter-processor communication is isolated from I/O traffic, and uses RIO-G ports. Storage disks are packaged as DDMs (Disk Drive Modules) and are connected using switched FC-AL. Each disk drive is connected to two device adapters (DA) and each DA has four x 8Gb/s paths to the disk drives. One device interface from each device adapter is connected to a set of FC-AL devices so that either device adapter has access to any disk drive through two independent switched fabrics.
The DAs also act as RAID controllers, and are responsible for monitoring and rebuilding the RAID groups.

External Connectivity

RAID Ranks, Arrays and Extent pools

The disks that your servers see are virtual disks, and it takes quite a few steps to get from the physical disks to the virtual ones.
The first step is to decide if you want to use encryption or not. Encryption is all or nothing; you cannot make parts of a DS8880 encrypted and parts not. To use encryption you have to define an encryption group. Once you do that, you then format the physical disks in the enclosures up into RAID arrays. RAID options are RAID6, RAID10 or RAID5. RAID5 is not recommended by IBM, and if you really want to use it for large volumes, they will want you to sign a risk acceptance disclosure.
Once you decide on the RAID type, you then need to work out how many dynamic spares to allocate. RAID6 options for 8 disks are 5+P+Q+S or 6+P+Q, where P and Q are the parity disks and S is a spare. RAID10 options are 3+3+S+S or 4+4. Here, 3+3 means 3 data disks and 3 mirrors, so you only get 3 usable disks from 8. Some RAID arrays do not need spares because spares are global within the DA pair, so you need to work out how many spares you want, then calculate how to distribute them across the DA pair. In general, the DS8880 will want 4 spares per DA pair.
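The usable capacity of each 8-disk layout follows directly from the notation above. This small helper just does that arithmetic for the four layouts mentioned; the drive size is a parameter, and the 6TB figure in the usage note is simply the nearline size quoted earlier.

```python
# Usable-capacity arithmetic for the 8-disk RAID layouts described above.
# Parity disks, mirrors and spares all come out of the 8-disk array site.
def usable_tb(layout, drive_tb):
    """Usable capacity in TB of one 8-disk array site for a given layout."""
    usable_disks = {
        "RAID6 5+P+Q+S": 5,    # 5 data + 2 parity + 1 spare
        "RAID6 6+P+Q": 6,      # 6 data + 2 parity
        "RAID10 3+3+S+S": 3,   # 3 data + 3 mirrors + 2 spares
        "RAID10 4+4": 4,       # 4 data + 4 mirrors
    }[layout]
    return usable_disks * drive_tb
```

So with 6TB drives, an 8-disk 3+3+S+S RAID10 site yields only 18TB usable out of 48TB raw, while 6+P+Q RAID6 yields 36TB, which is why the spare and RAID decisions matter so much for capacity planning.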

Next, you define your Ranks, one for every RAID array. The Ranks are split into extents, and you have some control over how big the extents are. Rank names are allocated by the system and are called R0, R1, R2 etc.
A Large extent is 1GB for Fixed Block (FB) format, or 1113 cylinders for CKD format (the size of a 3390-1, which is about 0.98 GB).
A Small extent is 16MB for FB, or 21 cylinders for CKD (about 16MB).
Small extents use the physical capacity better, but large extents are much faster to allocate. Performance is the same for both.
These Ranks are then allocated to Extent Pools. Each extent pool must be either all large extents or all small extents, and either FB or CKD, so you have 4 different types of extent pool. Ranks should ideally be allocated to Extent Pools so that the capacity is equally shared between Server0 and Server1. Ranks, Array Sites and Arrays are independent until the Rank is placed into an Extent Pool. It is then bound to the server associated with that pool.

You need an absolute minimum of two Extent Pools per DS8K, and in theory you can have as many Extent Pools as you have Ranks. In practice, if you place more than one Rank into an Extent Pool, then you can stripe the data over the Ranks to improve performance. The IBM recommendation is to place between 4 and 8 Ranks in an Extent Pool. If you allocate all the space in an Extent Pool to volumes, then add an extra rank for more capacity, you will not get the benefit of striping, as there is only one rank to stripe across. Always try to add at least 4 ranks to a full pool, or to a new pool, so striping can be effective.
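The striping benefit is easiest to see as a sketch. Assuming a simple round-robin placement of a volume's extents over the ranks in a pool (the rank names are illustrative; the real allocator also balances free space), a multi-rank pool spreads every volume across all its ranks, while a single-rank pool cannot spread anything:

```python
# Sketch of storage-pool striping: a volume's extents are dealt out
# round-robin across the ranks in its extent pool.
from itertools import cycle

def allocate_striped(volume_extents, ranks):
    """Assign each extent of a volume to a rank, rotating through the pool."""
    rotation = cycle(ranks)
    return [next(rotation) for _ in range(volume_extents)]

layout = allocate_striped(8, ["R0", "R1", "R2", "R3"])
# each of the 4 ranks ends up serving 2 of the volume's 8 extents,
# so I/O to the volume is spread over 4 sets of disks
```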

If you want to use space efficient volumes, then you must define a repository for them within an Extent Pool. Only one repository is allowed per Extent Pool, and once defined it cannot be resized, except by deleting all the space efficient volumes, then deleting and reallocating the repository. Space Efficient volumes are a chargeable option.

Volumes, LCUs and Disk Groups

Ok, so now we have a few Extent Pools, all containing lots of large or small extents. How do you present those extents to a server? A logical volume is made up of a set of extents from one Extent Pool. You can define a volume to be almost any size you want, but internally the space is added in fixed increments. This means that for large extents you should make the total size of the logical volume a multiple of 1GB for FB, or of a 3390-1 for CKD, to prevent space wastage. Up to 64K logical volumes can be defined.
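The rounding-up behaviour, and the waste it can cause, is simple arithmetic. The helper below (written for FB sizes in MB; 1GB large extents are 1024MB here) rounds a requested size up to whole extents and reports the wasted tail:

```python
# Extent rounding for FB volumes: a volume always occupies a whole number
# of extents, so any size that is not an exact multiple wastes the tail
# of the last extent.
import math

def extents_needed(volume_mb, extent_mb):
    """Return (extent count, wasted MB) for a requested volume size."""
    extents = math.ceil(volume_mb / extent_mb)
    wasted_mb = extents * extent_mb - volume_mb
    return extents, wasted_mb
```

For example, a 100.5GB (102,912MB) request costs 101 large extents with 512MB of dead space, but fits exactly into 16MB small extents, which is the capacity-efficiency trade-off mentioned earlier.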

In a mainframe environment, all the extents in a CKD volume must be contained in one Extent Pool, but can be striped over several Ranks. A CKD volume can be any size between a 3390-1 and a 3390-A (1113 to 262,668 cylinders) provided your z/OS release supports this.
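The same rounding applies on the CKD side, in units of 1,113-cylinder large extents. The standard 3390 model sizes happen to be exact multiples, which is why the sizing advice above is phrased in 3390-1 units:

```python
# CKD large-extent arithmetic: one large extent is 1,113 cylinders
# (the size of a 3390-1), and a volume occupies whole extents.
import math

CYLS_PER_LARGE_EXTENT = 1113

def ckd_large_extents(cylinders):
    """Number of 1,113-cylinder large extents a CKD volume occupies."""
    return math.ceil(cylinders / CYLS_PER_LARGE_EXTENT)
```

A 3390-3 (3,339 cylinders) is exactly 3 extents, a 3390-9 (10,017 cylinders) exactly 9, and the 262,668-cylinder 3390-A maximum is exactly 236, so standard model sizes waste no space with large extents.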

The point behind this apparently complicated setup is that it makes the storage environment more flexible, especially for future maintenance. With an ESS, if you wanted to resize volumes, you had to empty out an entire array, delete it, then redefine the whole array. The concept of breaking a CKD volume space into fixed size extents makes it possible to delete one volume from within a range, then add it back in again as a different size. You can also convert a 3390-9 to a 3390-27 in place by simply adding more extents to it and running an ICKDSF REFORMAT ... REFVTOC

In an Open Systems environment, a logical subsystem looks like a SCSI controller with up to 256 associated LUNs. Up to 256 Logical Subsystems can be defined, with even addresses associated with Server0 and odd addresses with Server1.
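The even/odd affinity rule is trivial to express in code. This one-liner just restates the text; the function name is invented for the illustration:

```python
# Even-numbered logical subsystems belong to Server0, odd ones to Server1,
# per the addressing rule described above.
def owning_server(lss_id):
    """Return the processor complex that owns a logical subsystem (0-255)."""
    if not 0 <= lss_id <= 255:
        raise ValueError("up to 256 logical subsystems (0-255)")
    return "Server0" if lss_id % 2 == 0 else "Server1"
```

Spreading workload over both even and odd LSS numbers is therefore also spreading it over both processor complexes.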

The DS8880 supports several Open operating systems, including Windows, Linux, AIX, Sun Solaris, HP-UX, VMware and OpenVMS. Each of these operating systems has its own quirks, so the best thing is to look up the latest data for your own OS.
'Easy Tier' software is used to dynamically move data between the three storage classes, depending on how active it currently is. Data can also be placed manually.


Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best