FDRINC backups are volume based and, as with DFDSS, both backups and restores run as batch jobs. However, unlike DFDSS, ABR automates recovery by recording the location of volume and dataset backups. You can back up the entire disk (a full-volume backup) or just those datasets which have changed since the last backup (an incremental backup). Typically you would run a full-volume backup once a week, and incremental backups on the other days. Disk-based backup and restore is examined in detail in the disk backups page.
Backups are identified by disk, then 'generation' and 'cycle' numbers. A generation corresponds to a full backup, and a cycle is an incremental backup. The first full backup is generation 1 with a corresponding cycle of 00, and subsequent full backups increment the generation number. The first incremental backup after a full is cycle 01, and subsequent incrementals increment the cycle up to 63, at which point a full backup is forced. When a full backup runs, it resets the cycle count back to zero. The backup section on the next page describes how to set ABR up to automatically generate full and incremental backups to your own cyclic requirements.
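As an illustration of how generations and cycles fit together, ABR backup files are normally named FDRABR.Vvolser.CnGGGGCC, where n is the copy number, GGGG is the generation and CC is the cycle (check the naming details in the FDR manual; the volser and generation numbers below are hypothetical). A typical week on volume PROD01 might produce:

FDRABR.VPROD01.C1001200   full backup, generation 12, cycle 00
FDRABR.VPROD01.C1001201   incremental, generation 12, cycle 01
FDRABR.VPROD01.C1001202   incremental, generation 12, cycle 02
FDRABR.VPROD01.C1001300   next full backup, generation 13, cycle 00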
You can set up and maintain FDRINC with an ISPF dialog, or with a set of batch programs. You would do this to maintain your catalog, enable disks for backup, set or change default parameters, and so on. Innovation recommends that you use the ISPF dialogs for this. I prefer the batch jobs, and examples are given below. Why do I prefer batch? Because you can use pre-defined jobs, and because you have access to more parameters.
Every disk needs an ABR initialisation dataset, or ABR model DSCB, defined on it before it can be backed up. This model dataset is just a VTOC entry; it does not use any space. You use it to set default values for each disk, including how many generations and cycles to keep. FDRINC uses it to record the latest backup generation and cycle, and also the expiration date of the latest full backup. The model dataset is normally called FDRABR.Vvolser, and is catalogued on SMS volumes and uncatalogued on non-SMS volumes. It is possible to change the FDRINC HLQ, but you must do this before you start to use FDRINC.
Some people get nervous when they see the words disk and initialisation in the same phrase, and worry that this will wipe all data off the disk. An ABR initialisation does not affect any existing data; it merely sets the volume up for ABR backups.
You create the Model DSCB with the FDRABRM utility as shown below.
//STEP1 EXEC PGM=FDRABRM,REGION=256K
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
 ABRINIT VOL=volser,GEN=3,CYCLE=6,DISABLE=ARCHIVE,
      ENABLE=OLDBACKUP,RESERVE=NO,BYPASSACS
/*
You supply the volser of the volume you are initialising in the VOL parameter.
This disk is initialised to keep three full backups and six incremental backups (GEN=3,CYCLE=6).
DISABLE=ARCHIVE means that the HSM migration facility of FDRABR is disabled.
DISABLE=SCRATCH is actually the default, and it means that datasets cannot be wiped from the disk with the SuperScratch utility. Innovation recommends that you only enable this option when you want to use SuperScratch on a volume, then disable it when you are finished. This prevents accidental deletion of lots of data.
ENABLE=OLDBACKUP means you can automate recovery from older backups, as explained on the next page.
RESERVE=NO means that the disk will not be locked out while the initialisation job runs.
BYPASSACS is only required for DFSMS-managed volumes, as it is important that the ABR initialisation dataset is placed on the disk that you ask for, and not on a different DFSMS-selected disk. If you see a message like 'FDR420 PROCESSING BYPASSED -- ALLOCATION OF THE ABR MODEL FAILED', it is probably down to a conflict with DFSMS.
You can also use the ABR ISPF dialog to initialise disks. This is the preferred Innovation method, but it can be very time consuming if you need to initialise 1000 disks.
The initialisation process involves changing the values of reserved fields in the F1 DSCB entry for every dataset on the disk. FDRABRM will check to make sure that these fields are empty first, and if they are not, the initialisation fails with an FDR420 'VTOC CONTAINS DSCBS WITH NON-ZERO RESERVED FIELDS' message. You can force the initialisation through with the 'FORCE' parameter, but first make sure that the data in bytes 103 and 104 of the F1 DSCB is not valid.
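A forced initialisation can be sketched like this. This assumes FORCE is coded as an operand on the ABRINIT statement, as the parameter name suggests; do check those reserved fields in the VTOC before you run it.

//STEP1 EXEC PGM=FDRABRM,REGION=256K
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
 ABRINIT VOL=volser,FORCE
/*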
ABR records information about backups of each disk, and also every dataset. The FDRABR.Vvolser model dataset holds current cycle and generation information for each disk, and the date of the last backup. The FDRINC catalog holds information which relates that cycle and generation to tape datasets.
Dataset backup information is held in spare bytes in the VTOC entry for each dataset. Because every on-line dataset backup is recorded in a different place, this avoids the performance issues that you can get with a single backup control file. This information basically tells you which FDRINC Generation and Cycle numbers contain backups for this dataset. FDRINC can then relate that to the ABR catalog to find a specific tape dataset.
OK, that will work fine if the dataset remains on-line, but what happens when a file is deleted? The VTOC entry will be deleted too. If you install the ABR 'scratch exit', then it intercepts the file delete request and writes the backup data from the VTOC out to a scratch record in the ABR catalog before the file is deleted. This is illustrated in the gif below.
By the way, the ABR catalog is just a standard ICF catalog. See the ICF catalog section if you want to know more about them. In fact, the ABR catalog and the Scratch catalog are logically two separate entities, and while they are usually held in the same ICF catalog, they can be in separate catalogs. The ABR catalog holds backup records and the Scratch catalog contains scratched dataset records.
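Because it is just a standard ICF catalog, defining one is an ordinary IDCAMS job. A minimal sketch follows; the catalog name, volser and allocation sizes are hypothetical examples, so check your own naming standards and the FDR installation guide before using anything like this.

//DEFCAT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  /* Catalog name, volser and size are hypothetical examples */
  DEFINE USERCATALOG -
         (NAME(FDRABR.USERCAT) -
          VOLUME(CATVOL) -
          CYLINDERS(10 5) -
          ICFCATALOG)
  /* Relate the FDRABR high level qualifier to the new catalog */
  DEFINE ALIAS (NAME(FDRABR) RELATE(FDRABR.USERCAT))
/*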
The backup records in the ABR catalog are deleted automatically when backups expire, but the scratch records are not deleted from the Scratch catalog automatically. If you have the scratch exit enabled, then you need to run a maintenance job to delete the records. If you have a large, busy site, then it's best to delete these records every day to keep on top of them, and even then it can be difficult to get the job to complete. It is possible to split the job up by dataset index, as shown below.
//STEP1 EXEC PGM=FDRABRCM,REGION=500M
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
It is also a good idea to occasionally run a purge of the backup records, to catch anything that has not been deleted automatically for any reason. The purge command will not affect any records for backups that are still current.
This backup location process relies on a dataset staying on the same disk. If a dataset is migrated and recalled using FDRINC HSM, then FDRABR sorts out the cataloguing. However, it is not a good idea to run FDRINC backups alongside DFHSM, as FDRINC then loses track of its backups.
FDRABR lets you specify global options that consistently apply to all batch jobs. This can be very useful as a way of ensuring that mistakes cannot happen. For example, if you specify in the global options that no compakts are allowed on system volumes, then anyone who tries a compakt by mistake will get an error. The global options and the protect lists are both best maintained through the ISPF dialogs. There are a lot of global options and they are fully described in section 90 of the FDR manual, so I'm just going to pick out three panels' worth here.
Setting default FDRABR dataset names
From the main FDR panel, take option 'I.4.5' and you will see the following panel.
From this panel you can change many of the default dataset names used by FDR.
ABRINDEX specifies the high level qualifier for all ABR datasets, including backups and volume initialisation datasets. The default value is FDRABR and it is best left alone, but if you do need a different high level index, you must change it here before you start taking production backups. The value must be a single index like FDRABR; you cannot specify FDR.ABR, for example. This index must also be unique to FDRABR; it cannot be used by any other application.
When a dataset is deleted from disk, its backup information is preserved by adding entries to the ABR Scratch catalog. These entries must be unique and not the same as any online dataset, so they are prefixed by a single-character high level qualifier in the Scratch catalog. The default character is a '#'. For example, if you delete SMP.PROD.TEXT from disk and the scratch exit is enabled, then an entry for #.SMP.PROD.TEXT is added to the ABR catalog. You can change that default value with the SCRINDEX parameter. The value must be a single alphabetic or national character. Incidentally, this means that everyone must have RACF access to create datasets starting with #, and must have update access to the ABR catalog.
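The exact RACF setup depends on your site standards, but a minimal sketch, assuming the default '#' SCRINDEX and a generic dataset profile, might run the commands in batch TSO like this:

//RACF EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
 ADDSD '#.**' UACC(UPDATE)
 SETROPTS GENERIC(DATASET) REFRESH
/*

UACC(UPDATE) here is just an illustrative choice to let all users write the scratch entries; you may prefer UACC(NONE) plus PERMIT statements for the userids that run deletes and backups.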
From the main FDR panel, take option 'I.6' and you will see the following panel.
The RESTORE ALLOCATION LIST is for non-SMS volumes. FDR will try to put a file back onto its original volume unless you use a NEWVOL parameter. This was a real problem for archived datasets in the pre-SMS days, as the recall was automatic, and if the original volume no longer existed the recall would fail. The Restore Allocation List lets you tell ABR to replace an original volume with a new volume so these recalls can proceed. For example, an entry mapping TSO001 to TSO013 means that all datasets which would have been recalled to TSO001 will now be directed to TSO013. Of course, in an SMS world target volsers are ignored anyway and SMS picks the target, so this is not required.
You can also define four ABR Protect Lists here, which you use to specify volumes or datasets that you never want to be processed by ABR functions. The functions are ARCHIVE, BACKUP, RESTORE and SCRATCH. The Backup protect list only prevents files from being backed up during incremental runs; it is not honoured by full backups.
To add an entry to a protect list you select the list you want from the panel above. The next screen is shown below, with an exclusion that would prevent any dataset starting CICP on volumes starting CICS from being scratched.
As the FDR modules will almost certainly be held in the system link list, you need to refresh the link list for your changes to take effect. You can do this by taking the REFRESH command on the FDR Installation Options Menu, 'I.4', which executes FDRSTART.
There are a small number of datasets that you never want moved around on a disk. These include files like the VTOC, VTOCIX and VVDS, which need to be at the start of a disk, and datasets that are open but not reserved properly. You can specify these in the COMPAKTOR Unmovable Table, and then they will never be moved, even if someone forgets to code them in a compakt job. Examples of datasets that should be excluded are JES proclibs, spool and checkpoint datasets, CICS journals, SYS1.BRODCAST, SYS1.MANx (SMF) datasets, RACF datasets and the FDR/CPK program library. You can also exclude entire volumes from a compakt.
To add entries to the unmovable table, take option 'I.5' and you will see a screen similar to the protect lists above. Within the unmovable table you specify
DSN followed by a fully qualified dataset name, or
DSG followed by an initial dataset mask.
You can either exclude volumes from a full compakt, or just from free space release. Use the syntax
DSN FDRCPK.EXCLUDE.COMPAKT.VTSO001
Exclude volume TSO001 from all compakt operations
DSG FDRCPK.EXCLUDE.COMPAKT.VTSO
Exclude all TSO volumes from all compakt operations
DSN FDRCPK.EXCLUDE.RELEASE.VTSO001
Exclude volume TSO001 from all compakt free space release operations
DSG FDRCPK.EXCLUDE.RELEASE.VTSO
Exclude all TSO volumes from all compakt free space release operations