While it is possible to backup DB2 databases as ordinary files or disks, to do this you need to stop the databases, and so affect application availability. Image Copies work with the DB2 DBMS to ensure that all parts of a database are backed up consistently, and can be run with the database active and servicing applications.
Image copies are invoked using the DB2 COPY command. There are two kinds of image copy: FULL and INCREMENTAL. You can make full image copies of a variety of DB2 objects, including table spaces, table space partitions, data sets of non-partitioned table spaces, index spaces, and index space partitions. A typical COPY statement to backup a single tablespace is simply:
COPY TABLESPACE database.tablespace
The COPY statement will write the backup to a single output dataset, no matter how many objects are copied in that run.
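As an illustration, the output dataset is named on the SYSCOPY DD statement of the utility job. The dataset and tablespace names below are hypothetical; a sketch of the JCL and control statement might look like:

```
//SYSCOPY  DD DSN=DB2PROD.IMAGCOPY.TS001.FULL(+1),
//            DISP=(NEW,CATLG),UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN    DD *
  COPY TABLESPACE database.tablespace FULL YES SHRLEVEL REFERENCE
/*
```

FULL YES is the default and can be omitted; the COPYDDN option can be used to direct the output to a differently named DD statement.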
An incremental image copy is a copy of the pages that have changed since the last full or incremental image copy. A typical COPY statement to take an incremental copy of a single tablespace looks like:
COPY TABLESPACE database.tablespace FULL NO SHRLEVEL CHANGE
DB2 Incremental copies can be merged with a previous full copy, to make a new full copy. There are also different levels of data sharing during backup.
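The merge is done with the MERGECOPY utility. Using the same hypothetical tablespace as above, a sketch would be:

```
MERGECOPY TABLESPACE database.tablespace NEWCOPY YES
```

NEWCOPY YES combines the most recent full image copy with all subsequent incremental copies into a new full copy; NEWCOPY NO merges just the incrementals into a single incremental copy.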
SHRLEVEL REFERENCE guarantees a consistent database backup by locking out write access to the data. The data can be read alongside the copy.
SHRLEVEL CHANGE allows updates to run alongside the image copies. If a recovery is needed, then DB2 uses the image copy, plus updates from the logfiles to reconstruct the database.
The COPY statement updates the catalog table SYSIBM.SYSCOPY and the directory tables SYSIBM.SYSUTILX and SYSIBM.SYSLGRNX. When those tables are themselves being backed up, they can lock out other COPY jobs that are running at the same time. If you must copy other objects while another COPY job processes the catalog or directory, then you should specify SHRLEVEL CHANGE to prevent these lockouts.
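The copy history recorded in SYSIBM.SYSCOPY can be inspected with ordinary SQL. The database name below is hypothetical; the ICTYPE column records the copy type ('F' for full, 'I' for incremental) and DSNAME records where the copy went:

```
SELECT DBNAME, TSNAME, ICTYPE, TIMESTAMP, DSNAME
  FROM SYSIBM.SYSCOPY
 WHERE DBNAME = 'DATABASE1'
 ORDER BY TIMESTAMP DESC;
```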
It is possible to take fast backups from any disks using FDRinstant. The problem is that FDRinstant does not interface with recognised image copy utilities. Traditionally, a Tablespace recovery has been performed using the DB2 RECOVER command to automatically restore the Tablespace from the most recent Image copy backup. Updates from the DB2 Log are then applied to bring the Tablespace as up-to-date as possible. However, a variation of the RECOVER command (RECOVER LOGONLY) allows the application of just the Log records to a Tablespace that has already been restored outside of DB2 control - i.e. with FDR/ABR.
The RECOVER LOGONLY technique could be used as follows. First, restore the Tablespace from the ABR incremental backup system. Then, issue the following DB2 command to re-apply the log records.
RECOVER TABLESPACE Tablespacename LOGONLY
If a recovered Tablespace has Indexes associated with it, the DBA has the option of rebuilding the Indexes once the Tablespace recovery is complete. This process can be time-consuming. A much faster alternative is to restore them from the backups made with ABR InstantBackup, and then update them from the Log using a RECOVER command, in much the same way that Tablespaces are updated. The relevant command is:
RECOVER INDEX indexname LOGONLY
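For comparison, the slower rebuild approach mentioned above reconstructs the indexes from the table data rather than from a backup. A sketch for all indexes on a hypothetical tablespace would be:

```
REBUILD INDEX (ALL) TABLESPACE database.tablespace
```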
There are two considerations:
With these considerations taken into account, recovery of Indexes from backups taken with InstantBackup is a very quick and simple process.
If you have a second datacentre, and use PPRC (or equivalent) for remote replication of data, then you still need to take image copies. That's obvious to a Storage Manager, but it's surprising how many non-storage people think that replication does away with the need for backups. Apart from the obvious need to recover from individual database corruption, there is also a requirement to handle a rolling disaster. This happens when disks do not all fail at once, so some updates fail while others succeed. If this causes databases to become corrupt, then full recovery may be needed. PPRC and SRDF mirroring both support consistency groups, which safeguard against rolling disasters. See the article Remote Data Mirroring for details on PPRC and also GDPS, the product which helps prevent rolling disasters.
FlashCopy is an IBM product used by DSxxxx storage subsystems. All other mainframe storage disk providers have a FlashCopy equivalent that supports the FlashCopy command set. The product is described in detail in the FlashCopy section, but in brief it creates an 'instant copy' of a disk or dataset.
Older versions of DB2 could use FlashCopy, but the process was outside of DB2 control. It was necessary to run a script that issued a 'SET LOG SUSPEND', ran the FlashCopy, then issued a 'SET LOG RESUME'. Even though the FlashCopy was very fast, this process was not transparent to applications and did not replace image copies. It was mainly used for DR testing or environment cloning.
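The script amounted to little more than a pair of DB2 commands wrapped around the copy step. The subsystem name DB2A and the copy step itself are illustrative:

```
-DB2A SET LOG SUSPEND      /* suspend logging and externalise the log buffers */
  ...run the volume-level FlashCopy here, e.g. via DFSMSdss...
-DB2A SET LOG RESUME       /* resume normal logging */
```

While the log is suspended, updating applications will stall, which is why the window had to be kept as short as possible.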
DB2 v10 can control the creation of image copies using the dataset-level FlashCopy facility. These image copies are allocated by DFSMSdss as VSAM datasets and are always catalogued. A dataset is created for each partition or object that is backed up, and some utilities can also create sequential image copies using an existing FlashCopy image as input. For FlashCopy image copies, if the object consists of multiple data sets and all are copied in one run, there is a FlashCopy image copy data set for each data set.
If you run the image copy with options SHRLEVEL CHANGE and FLASHCOPY CONSISTENT, then once the FlashCopy image copy is created the utility checks the logs for changes to the copied data that were uncommitted at the time that the image copy was created. Any uncommitted data that is identified in the logs is backed out of the image copy before the utility terminates.
For this reason, a FLASHCOPY CONSISTENT image copy uses more system resources and takes longer than a simple FLASHCOPY YES image copy.
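As a sketch, the control statement for a consistent FlashCopy image copy of the hypothetical tablespace used earlier would be:

```
COPY TABLESPACE database.tablespace FLASHCOPY CONSISTENT SHRLEVEL CHANGE
```

FLASHCOPY CONSISTENT is only valid with SHRLEVEL CHANGE; with SHRLEVEL REFERENCE there are no in-flight updates to back out, so FLASHCOPY YES is sufficient.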
Dataset-level FlashCopies don't always work as expected, so it is worth knowing the restrictions before your DBA comes round asking questions. The types of things that can cause problems are:
No FlashCopy V2 disks available
The source dataset must not be a target in an existing FlashCopy relationship
The target dataset must not be a source in an existing FlashCopy relationship
The source dataset can exist in multiple FlashCopy relationships, but no more than 12
Target dataset attributes like CISIZE, CASIZE, physical record size and physical blocksize must match the source dataset, except that the CASIZE can be different if the source dataset is less than one cylinder
Both source and target dataset must be fully contained on the same physical control unit
Both datasets must be SMS managed
If any of these conditions is not met, then the image copy will still run, but it will not use FlashCopy and so will take longer than expected.
IBM's General Parallel File System (GPFS) is a high-performance clustered file system that runs on AIX, Linux and Windows Server clusters. One of its many features is the ability to take snapshots of file systems and interface those snapshots to TSM for backups. DB2 Purescale uses Tivoli Storage FlashCopy Manager to create a second DB2 pureScale instance in an independent GPFS Backup Cluster, then use this backup cluster to send a snapshot of a DB2 pureScale database to TSM. The Production Cluster hosts the original file system and the production applications, which run apart from the backups and are not affected by them.
To successfully back up and restore a DB2 database in a DB2 pureScale environment, the data and log files must be in separate independent file sets within the same GPFS file system, OR, the data and log files must be in separate GPFS file systems.
The production and backup clusters must have the same number of members, but you can create logical members on the same host, which reduces the number of hosts that are required for the backup cluster. TSM mounts the GPFS file systems that contain the snapshot backup on the backup cluster. Databases are security protected to prevent misuse, so permissions must be granted to allow the FlashCopy Manager to mount file systems on the backup cluster. The necessary commands are:
On the production cluster run the 'mmauth add' command to authorize the backup cluster to mount all the file systems that the production database is allocated on, then run the 'mmauth grant' command to grant permission to the backup cluster to mount the file systems of the database that are enabled for snapshot-based data protection.
On the backup cluster run the 'mmremotecluster add' command to add the production cluster to the backup cluster.
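As a sketch of the two steps above (the cluster names, contact nodes, key files and file system name are all hypothetical), the command sequence might look like:

```
# On the production cluster: authorise the backup cluster,
# then grant it access to the database file system
mmauth add backup.cluster -k /tmp/backup_cluster_key.pub
mmauth grant backup.cluster -f db2datafs

# On the backup cluster: register the production cluster
mmremotecluster add prod.cluster -n prodnode1,prodnode2 -k /tmp/prod_cluster_key.pub
```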