This section assumes that all backups go to tape. It is possible to back up to disk, but as the backup is in a proprietary FDRABR format that cannot be read by anything except FDR products, there is probably not much point. A restore from tape is almost as fast as a restore from disk.
ABR creates a separate backup dataset for every disk dumped, unlike DFDSS, which lumps all the disks dumped in one jobstep into a single file. ABR will automatically stack those files onto one tape; you do not need to build complicated backward-reference JCL to do this yourself. This self-stacking mechanism means that FDRABR is not a good virtual tape candidate. If you are dumping by storage pool, and the pool contains a lot of volumes, you may want to write to more than one tape to speed the job up. You may also want to duplex the tapes to create an offsite copy.
You can do all this with the correct set of TAPE DD statements, as shown below. The TAPE1 tape is held locally, while the TAPE11 tape goes to a remote drive, as specified in the UNIT parameters. This, of course, assumes that these esoteric unit names are defined in the IOCP and match real tape unit addresses. These two tapes are duplexed copies of the same data. TAPE2 and TAPE22 are also a duplexed pair, but will concurrently dump a different set of disks.
//TAPE1 DD DSN=FDR1,DISP=OLD,UNIT=LOCAL
//TAPE11 DD DSN=FDR11,DISP=OLD,UNIT=REMOTE
//TAPE2 DD DSN=FDR2,DISP=OLD,UNIT=LOCAL
//TAPE22 DD DSN=FDR22,DISP=OLD,UNIT=REMOTE
The DSN names must be unique to meet operating system requirements, but they are overridden by ABR as specified below.
FDRABR uses a fixed naming standard for its disk backups. It is possible to change the high-level index to something other than FDRABR when you first install the product, by changing the entry in the FDR Global Option Table, but once you start taking backups, the index is fixed. The naming standard is FDRABR.Vvvvvvv.Cngggcc, where
vvvvvv is the volume serial of the disk volume you are dumping.
n is the copy number. ABR will always create a COPY 1 and will create COPY 2 if you add the TAPExx DD names.
gggg is the generation number, 0001-9999
cc is the cycle number, 00-63
For example, FDRABR.VCICS01.C1004502 is the COPY 1 dump (C1) of disk CICS01 (VCICS01), taken as the second incremental backup (cycle 02) in the forty-fifth full backup cycle (generation 0045). This backup dataset will be catalogued, usually in your ABR catalog.
The example JCL above had
//TAPE1 DD DSN=FDR1,DISP=OLD,UNIT=LOCAL
The DSN=FDR1 is required by z/OS, but will be ignored by FDRABR and changed to the correct name for each disk dumped. The only restriction is to make sure the name you provide is unique between jobs and jobsteps, or z/OS will enqueue on it and delay your dumps. As mentioned elsewhere, you can avoid this restriction by using a tape dataset name of &&temp.
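For example, the duplexed pair from the earlier JCL could be coded with temporary names (z/OS treats an && name as a temporary dataset and does not enqueue on it across jobs):
//TAPE1  DD DSN=&&TEMP1,DISP=OLD,UNIT=LOCAL
//TAPE11 DD DSN=&&TEMP11,DISP=OLD,UNIT=REMOTE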
Tape expiry is normally controlled by your tape management system, and ABR simply supplies retention parameters to it. Catalog control is most often used when working with tape management systems, as ABR will uncatalog backup datasets when they expire. If you do interface with a TMS, you should set the 'Enable TMS' option in the Global Options table.
Without a TMS, you basically have two options for controlling backup tape retention: you can either specify that the tapes are kept for a set number of days, or you can say you want the tapes kept until they become uncatalogued. One way to set the number of days is to use the RETPD JCL parameter on each TAPExx DD statement.
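For example, retention periods could be coded on the tape DD statements like this (the 35-day and 90-day periods here are just illustrative values; pick periods to suit your own backup cycle):
//TAPE1  DD DSN=FDR1,DISP=OLD,UNIT=LOCAL,RETPD=35
//TAPE11 DD DSN=FDR11,DISP=OLD,UNIT=REMOTE,RETPD=90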
Alternatively, you can let FDRABR control the tape expiration by setting default retention periods when you create the FDRABR model dataset for each disk. You can do this independently for the COPY 1 and COPY 2 tapes. Say you set FDRABR to keep three generations: when you create a fourth generation, ABR will uncatalog the oldest generation, and all its associated incremental backups. You then tell your tape management system to keep FDRABR.V*.** files under catalog control, and it will expire the tapes when FDRABR uncatalogs them.
This sounds ideal, but be aware that there is an issue with CA-TLMS, which retains a tape based on its controlling dataset, usually the first file on the tape. Now suppose you are running dumps with TYPE=AUTO and the tape contains a mixture of full and incremental backups. These are all created on the same day, and might all be expected to expire on the same day, but in fact if the first file is an incremental backup, the tape will expire when that incremental's associated full backup expires. This can be earlier than the expiry date of other full backups on the tape, so backups will be lost.
An alternative is to let your tape management system control when the tapes expire, usually by putting a RETPD parameter either in the JCL or on the ABR DUMP statement. When the tape expires, the tape management system uncatalogs all the tape datasets and marks the tape as available for scratch. Once the tape dataset is uncatalogued, ABR considers it to have expired. You can manage the retention of the second copy datasets independently, by using RETPD2 on the ABR DUMP statement, or a different RETPD= on the TAPE11 DD statement.
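As a sketch of retention coded on the DUMP statement (the 14- and 35-day periods and the volume selected by the MOUNT statement are illustrative placeholders, not recommendations), the SYSIN for the dump step might look something like:
//SYSIN DD *
  DUMP TYPE=AUTO,RETPD=14,RETPD2=35
  MOUNT VOL=CICS01
/*
Here the COPY 1 tapes would be kept for 14 days and the COPY 2 (offsite) tapes for 35.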
There is one further problem with TLMS and FDRABR tape names. You can control specific tape names in TLMS with an entry in the Retention Master File (RMF), but this entry cannot contain wildcards. The backup naming standard is FDRABR.Vvolser.C1... or .C2..., so if you want to put an entry into the RMF to retain the tapes, or to move the copy tapes offsite, you cannot simply say FDRABR.V*.C2**. If you want to manage tapes with the RMF, you need a separate entry for every disk you are backing up.
When your tape management system scratches a tape, the data on that tape is uncatalogued, but it is still available until it is overwritten. However, once the tape is uncatalogued, FDRABR does not recognise it. Is it still possible to use it? Yes, and it can save the day, but you need to do everything manually. In your restore JCL, you need to code the full dataset name, the file number, and the correct unit and volser for the tape and backup dataset; then the restore should work.
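As a sketch of such a manual full-volume restore (the volser T00123, file number 4, and unit names are hypothetical placeholders; get the real values from your tape management system listings before the catalog entries are lost), the JCL might look something like this:
//RESTVOL  EXEC PGM=FDR,REGION=0M
//SYSPRINT DD SYSOUT=*
//DISK1    DD UNIT=SYSALLDA,VOL=SER=CICS01,DISP=OLD
//TAPE1    DD DSN=FDRABR.VCICS01.C1004502,DISP=(OLD,KEEP),
//            UNIT=TAPE,VOL=SER=T00123,LABEL=(4,SL)
//SYSIN    DD *
  RESTORE TYPE=FDR
/*
Because the dataset name, volser and file number are all coded explicitly, no catalog lookup is needed.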
FDRABR (or FDRINC as it is now called) backup tapes will almost certainly hold many tape datasets stacked on each tape volume. Depending on your backup strategy, these may be a mixture of weekly full backups, a mixture of daily incrementals, or, if you use TYPE=AUTO, a mixture of full and incremental backups for several disks in a storage pool. This lets you use your tapes very effectively, but it is a real problem if you are testing disaster recovery from tape, as you will get severe tape contention. You will be restoring each DASD volume individually, and the restore jobs will all be trying to read the same set of tapes. Of course, if you have a real disaster and have to restore from tape, the problem is even worse.
FDRDRP can resolve this issue without requiring any changes to existing backup JCL. It co-ordinates restores that use multiple tapes by restoring multiple disks in parallel instead of doing each disk sequentially. This means that the restore job should take one pass down each tape, reading the backup files in physical order and passing the data to the correct disk restore subtask as required. Restore time improvements are quoted as between 50 and 80%. The sample JCL below will restore a set of 'PROD' production volumes onto a set of pre-initialised 'DR' disaster recovery volumes, reading the COPY 2 tapes and using three tape drives.
//RESTFULL EXEC PGM=FDRDRP,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *