Since the advent of RAID-protected disks, complete disk failures have become very rare; it is years since I had to recover a failed disk. If you mirror your data between two data centres, you should not even need to recover a disk after a site failure. However, customers remain a problem. With depressing regularity they delete their data, they corrupt their data, they 'lose' their data, and then they expect us to fix it.
The only way to guard against user error is to take point-in-time backups to another location, traditionally a nightly backup to tape. The problem is that data volumes keep growing while the window available for backups keeps shrinking. So how do you get a good, clean backup of several terabytes of data in just a few minutes?
The answer is to take a disk-to-disk snapshot copy of your data first, then move the copy off to tape at your leisure. This is quite easy in an Open Systems environment. You can use the disk-to-disk copy facility supported by your hardware to create a copy of a file system, then mount that copy either on another server, or on the same server under a different name, and copy the data off to tape from there. See the Open Systems link for details.
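As a minimal sketch of the Open Systems approach, the sequence below uses Linux LVM snapshots as the disk-to-disk copy facility. The volume group vg_data, logical volume lv_app, snapshot size, mount point and tape device /dev/st0 are all hypothetical names; substitute whatever your own hardware or volume manager provides.

```shell
#!/bin/sh
# Sketch only: requires root, a real LVM volume group, and a tape drive.

# 1. Create a point-in-time snapshot of the live file system.
lvcreate --snapshot --size 10G --name lv_app_snap /dev/vg_data/lv_app

# 2. Mount the snapshot under a different name (read-only is safest).
mkdir -p /mnt/app_snap
mount -o ro /dev/vg_data/lv_app_snap /mnt/app_snap

# 3. Copy the frozen image off to tape at your leisure.
tar -cf /dev/st0 -C /mnt/app_snap .

# 4. Clean up once the tape copy has been verified.
umount /mnt/app_snap
lvremove -y /dev/vg_data/lv_app_snap
```

The application only needs to be quiesced for the seconds it takes to create the snapshot in step 1; the slow copy to tape in step 3 runs against the frozen image, not the live data.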
If you hive your data off to the Cloud, you still need to make sure that your Cloud supplier has decent disaster recovery procedures and takes proper backups.
It is not this simple in a mainframe environment. Every online disk and every SMS-managed dataset in a z/OS sysplex must have a unique name, and the datasets must be catalogued. This means that if you take an image copy of a disk, the target disk will have the same volume label as the source, and the datasets on it will all have the same names as the originals. You cannot bring the target volume online in the sysplex unless you relabel it, and even if you do, the data will be unreadable because it will not be catalogued. You could vary the disk online to a different sysplex, but then if you copy the data to tape, the backup datasets will not be catalogued in the original sysplex.
It is possible, but complicated, to get round all these issues with re-cataloguing exercises. It would be much better if you could read the copied data directly from the offline disk. Two backup applications can do this at present: FDRinstant from InnovationDP and DFSMSdss from IBM. DFSMSdss has a 'DUMPCONDITIONING' parameter that allows it to read an offline, flash-copied disk, but it only works with physical full-volume backups, which is very restrictive, especially as logical dataset restores are not supported. DFSMShsm does not support offline flash-copied disks. The FDRinstant product is explained in detail in the next two pages: FDRinstant theory explains how this works, and FDRinstant Tips provides some implementation hints.
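As a sketch, a DFSMSdss full-volume copy taken with DUMPCONDITIONING might look like the JCL below. The job name, volume serials PROD01 and BKUP01, and the dataset names are hypothetical; the ADRDSSU program name and the COPY FULL and DUMPCONDITIONING keywords are real DFSMSdss syntax.

```jcl
//VOLCOPY  JOB (ACCT),'SNAP BACKUP',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL INDYNAM(PROD01) OUTDYNAM(BKUP01) -
       DUMPCONDITIONING
/*
```

With DUMPCONDITIONING, the target volume stays online under its own serial while internally retaining the source volume's identity, so a later full-volume DUMP of BKUP01 produces a tape that restores as PROD01, without the relabelling and re-cataloguing gymnastics described above.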