An overview of the EMC DMX-3
A DMX-3 is a cabinet-mounted storage system that contains up to 9 cabinets: one system bay and between 1 and 8 storage bays.
The system bay contains the server, the channel and disk directors, the cache cards, power supplies and battery backup, and a KVM used to communicate with the DMX.
The storage bays contain between 120 and 240 disk drives, with associated power and battery backup units.
The channel and disk directors and the cache cards are all contained in a single card cage in the system bay. The central 8 slots are reserved for cache, or global memory, cards. The eight slots on either side of the cache cards hold director cards, numbered 1 to 8 and 9 to 16. A director is designated back end (BE) or front end (FE) depending on whether it connects to the disks or to the host channels. Director cards should be added symmetrically according to a 'rule of 17' to reduce contention and maximise bandwidth: the two slot numbers in a pair always add up to 17. For example, slots 1 and 16 are both BE type and should be added as a pair of disk director cards; slots 8 and 9 are both FE type and would be added as a pair of channel cards. Slots 1, 2, 15 and 16 are always BE; slots 3, 4, 7, 8, 9, 10, 13 and 14 are always FE; slots 5, 6, 11 and 12 can be either.
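The slot typing and 'rule of 17' pairing above can be sketched in a few lines of Python. This is an illustrative model of the slot numbering described in the text, not an EMC configuration tool:

```python
# Slot types as given in the text: BE slots connect to disks,
# FE slots connect to host channels, flexible slots can be either.
ALWAYS_BE = {1, 2, 15, 16}
ALWAYS_FE = {3, 4, 7, 8, 9, 10, 13, 14}
FLEXIBLE = {5, 6, 11, 12}

def partner(slot: int) -> int:
    """Return the symmetric partner slot: the pair always sums to 17."""
    return 17 - slot

def slot_type(slot: int) -> str:
    if slot in ALWAYS_BE:
        return "BE"
    if slot in ALWAYS_FE:
        return "FE"
    return "BE or FE"

for slot in (1, 8, 5):
    print(f"slot {slot} ({slot_type(slot)}) pairs with "
          f"slot {partner(slot)} ({slot_type(partner(slot))})")
```

Note that each slot and its rule-of-17 partner always share the same type, which is what makes adding cards in symmetric pairs possible.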
The channel cards, cache cards and disk cards are all connected together by a direct Fibre Channel matrix, making the architecture non-blocking. The DMX also has an independent communications matrix.
A disk enclosure can contain up to 15 physical disks, and every physical disk is connected to two disk directors. The disks are connected to each director through a Star/Hub FCAL loop, with each loop controlled by a Link Control Card (LCC).
The DMX supports physical disk sizes between 73GB and 500GB; these capacities reduce after formatting. Each physical disk can be configured as a number of logical volumes, that is, the volumes as seen by the attached operating system. The number of logical volumes supported depends on the RAID configuration, but is between 160 and 255 per physical disk. The overall subsystem can support between 3,800 and 64,000 logical volumes, depending on how many disk directors are installed and the RAID configuration used.
EMC quotes different usable capacities for mainframe and open systems disks of the same size; for example, a 146 GB disk has 136.62 usable GB for open systems and 144.81 for mainframe. The difference is partly down to the way a gigabyte is calculated: for open systems it is 1024*1024*1024 = 1,073,741,824 bytes, while for mainframe it is 1000*1000*1000 = 1,000,000,000 bytes. An open systems disk, formatted as FBA, therefore actually contains more usable bytes than a mainframe disk formatted as CKD, even though the mainframe disk is quoted with a higher GB capacity. As far as the operating system is concerned, open systems disks are usually formatted in 512-byte blocks, with 128 blocks to a track and 15 tracks to a cylinder. Mainframe disks can emulate any type of 3390, with 56,664 bytes per track, or 3380-K disks, with 47,476 bytes per track.
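The capacity comparison can be checked with a short calculation, using the 146 GB example figures from the paragraph above:

```python
# Open systems quotes use binary gigabytes; mainframe quotes use decimal.
GIB = 1024 ** 3  # open systems "GB" = 1,073,741,824 bytes
GB = 1000 ** 3   # mainframe "GB"    = 1,000,000,000 bytes

open_bytes = 136.62 * GIB       # 136.62 usable GB, open systems (FBA)
mainframe_bytes = 144.81 * GB   # 144.81 usable GB, mainframe (CKD)

print(f"open systems : {open_bytes:,.0f} bytes")
print(f"mainframe    : {mainframe_bytes:,.0f} bytes")
print("open systems holds more bytes:", open_bytes > mainframe_bytes)

# Open systems track geometry from the text: 512-byte blocks,
# 128 blocks per track, 15 tracks per cylinder.
open_cylinder = 512 * 128 * 15
print(f"open systems cylinder: {open_cylinder:,} bytes")
```

Despite the lower quoted figure, the open systems disk comes out ahead in raw bytes, which is exactly the point the paragraph makes.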
The DMX has four different types of IO operation:
- Read Hit - the requested data is already in cache and is transferred straight from cache to the host. This runs at electronic speeds.
- Read Miss - data requested does not exist in cache and has to be read from disk into cache before it is transferred to the host. This runs at mechanical speeds.
- Fast Write - data is transferred from host to cache and an IO-complete acknowledgement is sent from cache. The data is destaged from cache to disk later.
- Delayed Fast Write - global memory is full, so old data must be destaged from cache to disk to make room. The IO operation is queued at the host while this happens; once room is available, the host channel is reconnected to the cache and the data is transferred.
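The four IO types above can be modelled with a toy bounded cache. The class name, slot count and eviction policy here are illustrative assumptions, not DMX internals:

```python
class ToyCache:
    """Toy model of DMX global memory for classifying IO operations."""

    def __init__(self, slots: int):
        self.slots = slots
        self.data = {}  # track -> contents currently held in cache

    def read(self, track) -> str:
        if track in self.data:
            return "read hit"            # served from cache, electronic speed
        self.data[track] = f"<{track}>"  # staged from disk, mechanical speed
        return "read miss"

    def write(self, track, contents) -> str:
        if track in self.data or len(self.data) < self.slots:
            self.data[track] = contents
            return "fast write"          # IO complete acknowledged from cache
        # Cache full: destage an old track to disk first (delayed fast write).
        self.data.pop(next(iter(self.data)))
        self.data[track] = contents
        return "delayed fast write"

cache = ToyCache(slots=2)
print(cache.write("t1", "a"))  # fast write
print(cache.read("t1"))        # read hit
print(cache.read("t2"))        # read miss
print(cache.write("t3", "c"))  # delayed fast write (cache now full)
```

The key distinction the model captures is that only a full cache forces the destage-before-write path that makes Delayed Fast Write slower than Fast Write.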
The DMX handles three flavours of RAID: RAID1, RAID5 and RAID10. RAID1 is usually two mirrored volumes, but on a DMX it can be between 2 and 4 mirrored volumes.
RAID5 can be configured as either 3+1 or 7+1. Every disk in the RAID set must be held on a different drive loop, so a 7+1 configuration requires a minimum of 8 drive loops. RAID5 must be either all 3+1 or all 7+1; the sizes cannot be mixed. The RAID5 stripe size is 64KB. The RAID10 configuration depends on the client platform. With mainframe RAID10, the data is striped over 4 volumes using a 1-cylinder stripe, then mirrored to a second set of four disks. With open systems, the stripe size is 2 cylinders, and the configuration is called RAID1/0.
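As a sketch of how the 64KB stripe maps data across a 3+1 RAID5 set, the function below assumes a rotating-parity layout for illustration; the DMX's actual parity placement is not described here:

```python
STRIPE = 64 * 1024  # 64KB RAID5 stripe size from the text
DATA_DISKS = 3      # 3+1: three data disks plus one parity per stripe row
TOTAL_DISKS = DATA_DISKS + 1

def locate(offset: int):
    """Map a logical byte offset to (stripe row, data disk, parity disk)."""
    stripe_no = offset // STRIPE         # which logical 64KB stripe
    row = stripe_no // DATA_DISKS        # stripe row across the RAID set
    parity_disk = row % TOTAL_DISKS      # assumed rotating parity placement
    data_index = stripe_no % DATA_DISKS  # position among the row's data disks
    # Skip over the parity disk when counting data positions in the row.
    disk = data_index if data_index < parity_disk else data_index + 1
    return row, disk, parity_disk

for offset in (0, 64 * 1024, 192 * 1024):
    row, disk, parity = locate(offset)
    print(f"offset {offset:>7}: row {row}, data on disk {disk}, "
          f"parity on disk {parity}")
```

Rotating the parity disk per row spreads parity writes across all four spindles rather than concentrating them on one, which is the usual motivation for RAID5 over RAID4-style fixed parity.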