This page contains tips for both IBM ESS and STK SVA/V2X devices.
Shark Configuration and Maintenance Tips
If you buy an ESS from IBM, they will configure it for you, or you can configure it yourself through Storwatch, a web browser application. The configuration is very flexible, so if you configure the boxes yourself it's easy to get carried away. I'd suggest you decide up front on a standard configuration for each array and stick to it. You will need to do this even if your vendor does the configuration for you.
The main problem is that it is hard to get the configuration to work out in nice, round hex numbers. The next problem is that if you are using ESCON channels, it is easy to run out of addresses, as you are only allowed 1024 per channel. For example, suppose you have a subsystem with 16 LCUs and 8 channels to a CEC. You will probably assign 4 channels to each cluster and daisy-chain 8 LCUs on each channel. If each LCU has a single 210GB array, then it can contain 72 3390#3 volumes. If you then add 1 PAV alias per base address, you have (72+72)*8 = 1152 addresses, which is too many.
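The arithmetic above can be checked with a few lines of Python. The figures are the ones from the example, not general values:

```python
# Check the ESCON address count from the example above.
lcus_per_channel = 8     # 8 LCUs daisy-chained on each channel
volumes_per_lcu = 72     # 3390#3 volumes in a single 210GB array
aliases_per_base = 1     # 1 PAV alias per base address

addresses = lcus_per_channel * volumes_per_lcu * (1 + aliases_per_base)
print(addresses)         # 1152, over the 1024-address ESCON limit
```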
It is best if the UCB total for each subsystem divides evenly into the 256 addresses of a range, as then you can pack multiple subsystems into the same address range without wasting addresses. Reasonable subsystem end points are x'3F' and x'7F', which allow 4 and 2 subsystems per range respectively; ending at x'2F' fits 5 subsystems per range but wastes 16 addresses.
A recommended approach is:
- Make your standard disk allocation 3390#27 to minimise UCBs. There should be no performance overhead if you use PAV on model 27 disks. If you are uncomfortable with these big disks, then use model 9s. How many PAV aliases do you need? That depends on how busy your disks are, but IBM recommend between 0.75 and 1.5 PAV aliases for every mod 9 disk when using dynamic PAV.
- Add 'small' disks to use up the spare array space
- Pad the addresses out to a round hex address with PAV aliases
Configuring PAV Aliases
What is PAV, and how does it work? That is explained in the PAV section; this page just discusses how to configure PAV aliases on an ESS device.
Allocating PAVs is reasonably straightforward. The only issue is that your ESS configuration must match your HCD, or I/O gen.
There are a limited number of points at which the PAV aliases can start. The permitted PAV start points are x'00', x'3F', x'7F' and x'FF'. You can't start at a higher PAV address than that defined in the HCD, as the addresses won't match up, and so won't work. You need to start at the highest defined PAV address, or at the next PAV entry point below it.
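The entry-point rule can be sketched in a few lines of Python. This is a hypothetical helper, not IBM code; it returns the valid entry point at or below the highest alias address defined in the HCD:

```python
# Permitted PAV start points, as listed above.
ENTRY_POINTS = (0x00, 0x3F, 0x7F, 0xFF)

def pav_start(highest_hcd_alias):
    """Pick the next permitted PAV entry point at or below the
    highest PAV address defined in the HCD."""
    candidates = [p for p in ENTRY_POINTS if p <= highest_hcd_alias]
    return max(candidates)

print(hex(pav_start(0x5A)))  # highest HCD alias x'5A' -> start at x'3F'
```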
If you start your PAVs within the PAV address range, then aliases are
defined from the entry point, down to the highest real disk. If there
is not enough space for all the aliases, they then start from one above
the entry point, and count upwards. The diagram below illustrates 40
PAV aliases starting at x'3F', defined after 32 base addresses.
However, if your PAV definitions do not match your HCD, problems can happen. Say you define 32 real disks in your HCD, but you only configure 30 disks on the ESS through Storwatch, leaving the last 2 'for later'. If you then define 40 PAV aliases starting at x'3F', 32 will be assigned downwards to address x'20', but the next 2 will not stop at the PAV boundary defined in the HCD; they will be assigned to the 2 'spare' slots, down to address x'1E' in the base range.
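The placement behaviour described above can be modelled in a few lines of Python. This is a sketch of the rules as stated on this page (downwards from the entry point to one above the highest configured base disk, then upwards from one above the entry point), not IBM's actual code:

```python
def assign_pav_aliases(entry, n_aliases, n_configured_bases):
    """Model the ESS alias placement described above."""
    highest_base = n_configured_bases - 1
    # Fill downwards from the entry point to one above the highest real disk.
    down = list(range(entry, highest_base, -1))[:n_aliases]
    # Any remainder counts upwards from one above the entry point.
    up = list(range(entry + 1, entry + 1 + n_aliases - len(down)))
    return down + up

# HCD and ESS match: 32 bases, 40 aliases from x'3F'
ok = assign_pav_aliases(0x3F, 40, 32)    # aliases x'20'-x'3F' and x'40'-x'47'
# Mismatch: only 30 bases configured on the ESS
bad = assign_pav_aliases(0x3F, 40, 30)   # creeps down into x'1F' and x'1E'
```

In the mismatched case the lowest aliases land at x'1F' and x'1E', inside the range the HCD believes holds base addresses.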
PAV will appear to work fine, but suppose you try to initialise the UCB at address x'1E'. You can initialise it, as it's defined to HCD as a base address, but it is actually a PAV alias, and it could be pointing to address x'04'. You would actually initialise the x'04' disk! This is illustrated below.
A safe and simple way is to assign a range of addresses from xx00 to xxFF to each logical subsystem, and always start your PAV aliases off at xxFF. If you only allocate 32 real disks, then your ‘real’ addresses will run from xx00 to xx1F. If you also assign 32 PAV aliases, then your PAV addresses will run from xxE0 to xxFF, leaving addresses xx20 to xxDF free on that subsystem for expansion. That is a lot of spare, unassigned addresses, but it does mean that no HCD changes will be needed to add more capacity later.
Another possibility is to assign a range of addresses from xx00 to xx7F to each logical subsystem, and always start your PAV aliases off at xx7F. If you only allocate 32 real disks, then your ‘real’ addresses will run from xx00 to xx1F. If you also assign 32 PAV aliases, then your PAV addresses will run from xx60 to xx7F, leaving addresses xx20 to xx5F free on that subsystem for expansion. Not so many spare addresses, but if you want to expand the addresses to xxFF later, then you will need an HCD change and to delete and redefine the PAV aliases.
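The two layouts above can be worked out with a short sketch. Addresses are within one xx00 to xxFF logical subsystem, and the function name is illustrative:

```python
def subsystem_layout(n_bases, n_aliases, alias_top):
    """Return (base, free, alias) address ranges, inclusive, within
    one logical subsystem, with aliases ending at alias_top."""
    base = (0x00, n_bases - 1)
    alias = (alias_top - n_aliases + 1, alias_top)
    free = (base[1] + 1, alias[0] - 1)
    return base, free, alias

# Aliases end at xxFF: bases xx00-xx1F, free xx20-xxDF, aliases xxE0-xxFF
full = subsystem_layout(32, 32, 0xFF)
# Aliases end at xx7F: bases xx00-xx1F, free xx20-xx5F, aliases xx60-xx7F
half = subsystem_layout(32, 32, 0x7F)
```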
Be careful when deleting PAV aliases. If you delete an active alias, the active IO will fail and will be re-driven through the base address. You'll get an "IOS017I ALIAS DEVICE dddd is UNBOUND" error message, but the application will survive. However, some system data cannot re-drive its IO. If you remove an active alias from a JES spool volume, JES will hang, and an IPL will be needed to fix the problem. If you remove an active alias from a coupling facility dataset, it will be dropped. It's best to remove PAV aliases from system volumes with the systems down.
StorageTek SVA and V2X tips
Errors due to low MIH setting
Lots of strange errors can occur if the MIH is left at its default value of 15 seconds, which does not allow the subsystem enough time to complete a warm start after a check 0. The solution is to set the MIH to 4 minutes 10 seconds on all systems attached to SVA or V2X subsystems.
If you have SVAs, you will be using IXFP to manage them. You can get a PTF which will check the MIH values for Iceberg-defined devices. This PTF will cause IXFP to issue a scrollable message to the operator console for each Iceberg device with the MIH value set to less than 5 minutes. The message is issued at IPL time, or whenever an Iceberg device is dynamically added.
To manually display the MIH setting use the command