
TSM Database and Recovery Log

TSM Database version 5.5 and earlier

As TSM 5.5 went out of support at the end of April 2014, this section has been removed from the main database management page, but it is retained here as it contains a lot of useful information for anyone still running the legacy database.

Recovery log processing
Database Defragmentation
Extending the TSM database under AIX
Formatting the TSM database and log
Auditing the TSM database
Database size and disk setup
Recovering a TSM 5.x Database on a Windows server.
Database and log Mirroring
Disk Storage Pools

Recovery log processing

The TSM database is quite sophisticated, and uses a transaction log called the recovery log. Multiple updates are grouped together into 'transactions', which have to be applied to the database as a unit. If the server crashes and the updates in a transaction have not all been applied, then the partial updates must be backed out using the log records. This all-or-nothing approach protects database integrity during updates.

If the server cannot update the recovery log because it is full, then the server crashes. So it's worth knowing what makes the log fill up, and how to avoid it.

The log has two pointers, a 'head' and a 'tail'. The head marks the position where the next update can take place; new updates are added at the head. The tail marks the position of the oldest transaction that is still processing, and also where the last update can take place. Tail movement depends on how the 'logmode' is set up. If you define logmode=rollforward, then the tail will only move when a database backup is run. If you use logmode=normal, then the tail moves when the oldest transaction completes. When the pointers reach the end of the file, they start again from the beginning, so consider the logfile as a circle, with the head and tail pointers being points on the circumference. The command Q STATUS will tell you which logmode you are using.

The tail is 'pinned' by the oldest in-flight transaction, and if this is not cleared before the head catches up, then the log is full. Tivoli provided a new command with TSM, 'show logpinned', which will identify the transaction that is pinning the log.

The log file usually fills up due to a combination of two events: an old transaction hangs around and 'pins' the tail, while another process moves the head rapidly until it catches up.
Long running transactions can be caused by very large database backups, or smaller backups running over slow lines. A process which is trying to recover from a tape I/O error can also hang around for a long time.
Rapid head movement is caused by something which is doing large quantities of database updates very fast; Expire Inventory is a good example of this, and there are ways to manage it.
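One way to limit the head movement that expiration causes is to run it in bounded chunks with the DURATION parameter, which stops the process cleanly after the given number of minutes. A sketch, with placeholder admin credentials and an illustrative 60-minute limit:

```
dsmadmc -id=admin -password=xxxx "expire inventory duration=60"
```

Repeating this in successive quiet periods clears the expiration backlog without generating hours of continuous log activity in one run.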

Recovery log processing was enhanced in TSM 4.2. If the DB Backup Trigger is set correctly, and the LOGMODE is set to ROLLFORWARD, then a database backup will start when the log reaches the percentage full set in the trigger. If the recovery log hits 100%, then TSM will stop all processes except the database backup. When the backup completes, TSM issues the message

ANR2104I Activity log process restarted - recovered from an insufficient space condition in the Log or Database.

This should help us avoid some difficult recovery situations.


Database Defragmentation

This contentious issue applied to legacy databases only. The legacy TSM Server database has a b-tree format, and grows sequentially from the beginning to end. When file entries are deleted by expiration processing or file space/volume delete processing, this leaves spaces or holes in that database. These may be re-used later when new information is added, but they mean that the TSM database is using more space than it needs to. The only way you can compress the database so that the 'holes' left by deleted pages are not present is to use the database unload/reload utility.

The problem is that while the dump takes about an hour, the load utility can take several hours. Does it make a difference? I have seen performance improve after defragmenting a database, and I've also seen an unload/reload make performance worse. A defrag will reduce the physical size of your database.

The Tivoli team supplied a new command with TSM 5.3, 'ESTIMATE DBREORGSTATS', which you can use to check what an unload/reload would achieve. It estimates the amount of space that would be recovered.

For older releases of TSM, use QUERY DB to see if you need to defrag your TSM database.

Available Assigned   Maximum   Maximum    Page      Total      Used   Pct  Max.
    Space Capacity Extension Reduction    Size     Usable     Pages  Util   Pct
     (MB)     (MB)      (MB)      (MB) (bytes)      Pages                  Util
--------- -------- --------- --------- ------- ---------- --------- ----- -----
   50,208   49,004     1,204     9,412   4,096 12,545,024 9,544,454  76.1  76.1

Here a 49GB database can be reduced by 9.4GB = 19%, but it is only 76% used, so a further 5% could be reclaimed by defragging. Some people claim that TSM tries to allocate pages in a way that leaves you with as good as possible performance, and that defragging the database will degrade performance. It's also possible that after a defrag, the database will quickly become fragmented again, as it inserts data into the tree. A formula based on the QUERY DB output can be used to estimate how much space could be reclaimed by an unload/reload.
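One select statement often quoted for legacy servers derives a PERCENT_FRAG figure from the same fields shown by QUERY DB, assuming 4 KB pages (so 256 pages per MB). Treat it as an illustration, not necessarily the formula originally intended here:

```
select cast((100 - (cast(max_reduction_mb as float) * 256) /
       (cast(usable_pages as float) - cast(used_pages as float)) * 100)
       as decimal(4,2)) as percent_frag from db
```

With the figures above this gives 100 - (9412 x 256) / (12,545,024 - 9,544,454) x 100, which is roughly 19.7%, in line with the 19% reduction shown by QUERY DB.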


A high fragmentation percentage can indicate problems. If you think your database needs a defrag then, if possible, take a copy and try that first. That will give you an indication of how much time is needed for the load.


Extending the TSM database under AIX

Create a new file system in AIX using SMITTY

 	make LV
 	make FS on existing LV
 	mount new-filesystem


 	dsmadmc ... define dbv /new-filesystem/filename
 	dsmadmc ... extend db

If you use incremental database backups, then remember that after an EXTEND DB the next DB backup must be a full backup.
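Pulling the steps above together, a hypothetical end-to-end sketch. The volume group, file system, file names and the 2 GB size are examples, not values from this page, and the new database volume must be formatted with dsmfmt before it is defined:

```
# Make and mount a new JFS2 file system (the smitty equivalent)
crfs -v jfs2 -g tsmvg -m /tsmdb2 -a size=2200M
mount /tsmdb2

# Format a 2 GB database volume, define it to TSM, then extend the database
dsmfmt -m -db /tsmdb2/db02.dsm 2048
dsmadmc -id=admin -password=xxxx "define dbvolume /tsmdb2/db02.dsm"
dsmadmc -id=admin -password=xxxx "extend db 2048"
```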


Formatting the TSM database and log

Legacy TSM database files and log files have to be formatted before they can be used. There are two different commands for this, and it is vitally important that you know the difference. If you want to add a file to the database or recovery log, then you use the DSMFMT command to format the file. DSMSERV FORMAT looks similar, but that command will format the whole recovery log and database. So just to make things clear: DSMSERV FORMAT will wipe all your existing database and log files, so if you want to make a complete fresh start, that's what you use. DSMFMT will just format the file that you specify. The syntax of DSMFMT is

 dsmfmt -m -log tsmlogvol7 5

which will format a 5 MB log volume called tsmlogvol7. Size options are 'k', 'm' or 'g', and data type options are 'db', 'log' or 'data'.


Auditing the TSM database

The Audit process only applies to legacy TSM databases.

Richard Sims has correctly pointed out that a database audit with FIX=YES is a dangerous procedure. "Correcting database problems without TSM Support direction can result in worse problems, including data loss. Structural problems and inconsistencies in any database system can be much more complex than a vanilla utility can properly deal with. If one has a reason to believe that their TSM database has problems, they need to contact TSM Support for assistance in dealing with them, rather than attempt amateur surgery. IBM repeatedly advises customers NOT to attempt to fix database problems themselves".
I'd also suggest that if you run an audit, you always make sure you have a full database backup available first.

Database Audits are used to fix inconsistency problems between the database and its storage components. A full database audit can run for several hours, but it is possible to run smaller audits on parts of the database. As a general rule of thumb, a full database audit takes about 3 hours per million pages, and a 4 GB utilised database holds about a million pages. The actual times will mostly depend on the processing power of your server. An audit will write a lot of log records, so if you normally run with your recovery log in 'ROLLFORWARD' mode it is advisable to put the log into 'NORMAL' mode before running an audit, then put it back into 'ROLLFORWARD' mode when the audit completes.
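The logmode switch around an audit might be sketched like this. The admin credentials are placeholders, and remember that the audit itself runs with the server halted:

```
dsmadmc -id=admin -password=xxxx "set logmode normal"
dsmadmc -id=admin -password=xxxx "halt"
dsmserv auditdb fix=yes inventory detail=yes
# restart the server normally, then:
dsmadmc -id=admin -password=xxxx "set logmode rollforward"
```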

dsmserv auditdb fix=yes admin detail=yes

is a very quick check of the admin data

dsmserv auditdb fix=yes archstorage detail=yes

will audit the archive storage, and runs for 1-2 hours depending on your database size

dsmserv auditdb fix=yes diskstorage detail=yes

will audit the disk storage pools, and takes about 30 mins, depending on the size of the database, and how full the disk pools are. Best done when all the data is migrated out to tape.

dsmserv auditdb fix=yes inventory detail=yes

This is the long running one, 8-12 hours.

The following information was supplied by Maureen O'Connor of Fiserv Technology in April 2007. Maureen has provided some excellent detail on how to estimate how long an audit will take, and how to run audits against multiple TSM instances on one AIX server.

Running an audit of the TSM database can be a very long and time-consuming process, and it is not well documented by IBM, so estimations can be difficult to make.

Generally speaking, the best way to run the audit is to run it against the whole database, not just a piece of it, but if the db is very large, this can mean an extensive outage, so it should be planned well in advance.

The audit follows 33 steps:

  1. ANR4726I The ICC support module has been loaded.
  2. ANR0990I Server restart-recovery in progress.
  3. ANR0200I Recovery log assigned capacity is 1000 megabytes.
  4. ANR0201I Database assigned capacity is 2500 megabytes.
  5. ANR0306I Recovery log volume mount in progress.
  6. ANR0353I Recovery log analysis pass in progress.
  7. ANR0354I Recovery log redo pass in progress.
  8. ANR0355I Recovery log undo pass in progress.
  9. ANR0352I Transaction recovery complete.
  10. ANR4140I AUDITDB: Database audit process started.
  11. ANR4075I AUDITDB: Auditing policy definitions.
  12. ANR4040I AUDITDB: Auditing client node and administrator definitions.
  13. ANR4135I AUDITDB: Auditing central scheduler definitions.
  14. ANR3470I AUDITDB: Auditing enterprise configuration definitions.
  15. ANR2833I AUDITDB: Auditing license definitions.
  16. ANR4136I AUDITDB: Auditing server inventory.
  17. ANR4138I AUDITDB: Auditing inventory backup objects.
  18. ANR4137I AUDITDB: Auditing inventory file spaces.
  19. ANR2761I AUDITDB: auditing inventory virtual file space mappings.
  20. ANR4307I AUDITDB: Auditing inventory external space-managed objects.
  21. ANR4310I AUDITDB: Auditing inventory space-managed objects.
  22. ANR4139I AUDITDB: Auditing inventory archive objects.
  23. ANR4230I AUDITDB: Auditing data storage definitions.
  24. ANR4264I AUDITDB: Auditing file information.
  25. ANR4265I AUDITDB: Auditing disk file information.
  26. ANR4266I AUDITDB: Auditing sequential file information.
  27. ANR4256I AUDITDB: Auditing data storage definitions for disk volumes.
  28. ANR4263I AUDITDB: Auditing data storage definitions for sequential volumes.
  29. ANR6646I AUDITDB: Auditing disaster recovery manager definitions.
  30. ANR4210I AUDITDB: Auditing physical volume repository definitions.
  31. ANR4446I AUDITDB: Auditing address definitions.
  32. ANR4141I AUDITDB: Database audit process completed.
  33. ANR4134I AUDITDB: Processed 187 entries in database tables and 255998 blocks in bit vectors. Elapsed time is 0:00:10.

Each step is called based on the architecture; the DSMSERV utility runs several concurrently, 5-10 at a time, returning output as each step completes and picking up the next step in order. Steps 1-9 will finish almost immediately. Steps 10-16 will run next and will take slightly longer; these follow definitions in order of creation. When Step 17 begins, it will trigger Step 33, and depending on how many entries there are in the database, the output from Step 33 will appear mixed with the output from Steps 18-32. Step 33 reviews all client data in the database, and is the longest running step in the audit process.

Typical output from Step 33 (from a large database) will look like this:

ANR4134I AUDITDB: Processed 8260728 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:05:00.
ANR4134I AUDITDB: Processed 9035641 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:10:00.
ANR4134I AUDITDB: Processed 9812999 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:15:00.
ANR4134I AUDITDB: Processed 10663992 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:20:00.
ANR4134I AUDITDB: Processed 11677212 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:25:00.
ANR4134I AUDITDB: Processed 12014759 entries in database tables and 0 blocks in 
 bit vectors.  Elapsed time is 1:30:00.

Note this output refers to 'entries'. Entries are not a standard reference in TSM; this is a parsed view of data files, part of the occupancy. To estimate how many entries the audit will process, run this select statement from a TSM administrative command line:

select sum(num_files)*3 from occupancy

The '3' refers to the three pieces to a file: the entry, a header for the entry, and an active/inactive flag. Remember that this is only an estimate; the reason for running the audit is possible corruption, so there may be pieces missing or mis-filed.

Entries are read at anywhere from 500,000 to 1 million every five minutes, so the output from this formula can be used to estimate how long the audit will take to complete.
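As a worked example of that estimate, with an illustrative entry count and the midpoint of the 500K-1M range as assumptions (neither figure comes from the original text):

```shell
# Estimate audit duration from the entry count returned by the select above
awk 'BEGIN {
  entries = 12000000    # hypothetical result of: select sum(num_files)*3 from occupancy
  per5min = 750000      # assumed midpoint of the 500K-1M entries-per-five-minutes range
  printf "estimated audit time: %.0f minutes\n", entries / per5min * 5
}'
```

So a database with around 12 million entries would need an outage window of well over an hour for this step alone.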

Audits can be run on pieces of the database instead of the whole - a specific storage pool or the administrative portion - this can be a considerable time-saver, but if it is unknown what part of the database is corrupt, this may not be a worthwhile option.

To run an audit, the TSM server instance must be down. If there are multiple TSM instances on a server, the DSMSERV executable must be run from the primary server directory, but if the audit is running on a secondary instance, for example, parameters must be passed to the operating system so the utility knows where to look for the database:

AIX# export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
AIX# export DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/<subdir>/dsmserv.opt

To run an audit just on the administrative portion of the database (the fastest, 10-15 minutes), start the utility this way:

AIX# at now
dsmserv auditdb fix=yes admin detail=yes > /tmp/tsmauditadmin.log

The process will run in the background, and a log will be kept; this log can be watched with the tail -f command by multiple users to track the progress.

To run the audit on the archived data (1-2 hours, depending on size of archives), enter this:

dsmserv auditdb fix=yes archstorage detail=yes >/tmp/tsmauditarchive.log

To run the audit on the diskpool (very fast if all data is migrated), enter this:

dsmserv auditdb fix=yes diskstorage detail=yes >  /tmp/tsmauditdisk.log

To run on the client data only, not including the archives (still the longest running), enter this:

dsmserv auditdb fix=yes inventory detail=yes >  /tmp/tsmauditdata.log

Again, while the inventory audit can be run separately, it is by far the longest running piece, so the time saved over a full audit is almost a moot point.

If any data is found to be damaged, location messages as well as the fix (usually a deletion) will output to the log as follows:

ANR1777I afaudit.c(967): Object 0.62882489 is \WINDOWS\INF\DSUP.PNF for node
<NODENAME> (257), filespace \\<nodename>\c$ (1).
ANR1777I afaudit.c(967): Object 0.62882490 is \WINDOWS\INF\DSUPT.PNF for node
<NODENAME> (257), filespace \\<nodename>\c$ (1).
ANR1777I afaudit.c(967): Object 0.62882491 is \WINDOWS\INF\DVD.PNF for node
<NODENAME> (257), filespace \\<nodename>\c$ (1).
ANR4303E AUDITDB: Inventory references for object(0.62882489) deleted.
ANR4303E AUDITDB: Inventory references for object(0.62882490) deleted.
ANR4303E AUDITDB: Inventory references for object(0.62882491) deleted.

Be sure sufficient outage time is scheduled. Once an audit begins, it is not good practice to halt the process, because the current location of the audit is not truly known - a data file could be open, and halting may actually cause further corruption.


Legacy Database size and disk setup

The TSM database is critical to open systems backup and recovery. It needs to be 100% available, as without it it is impossible to recover files. The 'incremental forever' philosophy behind TSM means that it is impossible to build a list of files needed to recover a server without the TSM database. If the TSM database setup is not designed correctly then the database will perform badly, and this will affect your ability to fit backups within the overnight window.

TSM performance is very much dependent on the size of the database. TSM performance suffers if a database becomes too large, but there are no exact rules on how big too large is. The maximum possible size for a TSM database is 530GB. IBM recommend 120 GB as a general rule, with the caveat that 'when expiration, database restores, and other Tivoli Storage Manager admin processes take too long and client restores become too slow, it is too big'. Database backup and Expire Inventory are both CPU intensive processes that can be used to indicate server performance problems in general. The only sensible answer to 'how big should a TSM database be?' is to let your database grow until these processes start to become an issue. Expire Inventory should really run within 12 hours and should be processing 3 million pages an hour or more. Backups should run in 30 minutes and process 6 million pages per hour or more, but these are just general rules-of-thumb. The actual size will depend on how fast your hosting server is, how good your disks are and what level of service you need to provide.

A TSM Database consists of a number of files, called 'disks'. As TSM will schedule one concurrent operation for each database disk it makes sense to allocate a lot of small disks, rather than a few large ones. A disk file size of 2 GB seems to be about right (The maximum possible size for a disk volume is 8 TB). IBM recommends that these database disk files be spread over as many physical disks as possible. This makes sense for low or mid tier disk subsystems, as this means that multiple disk heads can be seeking, reading, and writing simultaneously, but as high tier subsystems perform most of their I/O in cache this is less of an issue.

Most operating systems allow you to stripe files over logical and physical disks, or partitions, and recommend that this be used for large performance critical files. It is very difficult to get any kind of consensus from the TSM user community on the benefits of disk striping. For example, to quote two users:
USERA: 250GB database on a high tier EMC DMX disk subsystem. Disk striping was introduced and database backup time reduced by more than half.
USERB: 80GB database striped on a mid-tier IBM FAStT subsystem. Striping was removed and the database converted to RAID5, with no impact on database backup times, expire inventory run times or client backup times.

TSM will allocate a default database and log file during installation, under AIX usually in the server installation directory /usr/tivoli/tsm/server/bin. These default files should be deleted and re-allocated to your strategic size and location.


Recovering a 5.x database on a Windows Server

The basic steps you need to take to recover a legacy database are:
Prepare the files you need to do a restore
Format the database and logs
Restore the database
Sort out any storage pool issues


Obviously, you need a good backup of a TSM database, and you need to know the device class that was used for the backup. For illustration, we will assume the latest database backup is on a tape called T012456L and used a devclass called T_LTO3.
You also need a list of the database and log file names and sizes. If you use DRM then the best place to get this from is the latest prepare. If you don't use DRM, you can get this info when your TSM server is running with the commands
  query dbvol f=d
  query logvol f=d
That's all very well if you have a planned outage, but what if your database crashes and you don't have a prepare? You can still get the info with dsmserv commands, use
  dsmserv display dbvolumes
  dsmserv display logvolumes
Create two text files, one called DB.VOLS that contains the Database file names, paths and sizes and one called LOG.VOLS for the log files. The files should look like this, but use your own file names, paths and sizes. The file sizes are in MB, so this is a 20GB database.

 "H:\TSMDB\EXT01\DB01.DSM"  5000
 "H:\TSMDB\EXT02\DB02.DSM"  5000
 "H:\TSMDB\EXT03\DB03.DSM"  5000
 "H:\TSMDB\EXT04\DB04.DSM"  5000

 "H:\TSMRLOG\RLOG01.DSM"  4096
 "H:\TSMRLOG\RLOG02.DSM"  4096

Place these files in your c:\program files\tivoli\tsm\server\ directory


Navigate to the c:\program files\tivoli\tsm\server\ directory and run the following command. Note that the logs are described first, then the database, and that you need to say how many of each type of file you are formatting, so 2 log volumes and 4 database volumes.
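Based on that description (a count of log volumes first, then a count of database volumes, each reading its names and sizes from the list files created above), the TSM 5.x command would be along these lines. Treat the exact syntax as an assumption and verify it against your server's documentation:

```
dsmserv format 2 FILE:log.vols 4 FILE:db.vols
```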


When you run a DSMSERV FORMAT on a Windows server, it resets the registry entry for the TSM server, and this must be put back before you attempt the restore. Use REGEDIT and navigate to the correct registry entry for your server. If you just have one TSM server on this Windows box, it will be Server1, otherwise Server2-4 depending on which server you are working with. The Server1 key is HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM\CurrentVersion\Server\Server1 and you need to change the path entry from c:\program files\tivoli\TSM\Server to Server1.


Navigate to the c:\program files\tivoli\tsm\server\ directory and run the following command. The tape name and devclass are the ones we found before we started the restore, you substitute your own names.
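For a TSM 5.x server restoring the backup identified earlier, the command would look something like this. The tape and devclass names are from the example above; COMMIT=YES is an assumption appropriate for restoring the latest backup without a point-in-time, so check it against your server's documentation:

```
dsmserv restore db devclass=T_LTO3 volumenames=T012456L commit=yes
```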


OK, now you have a copy of your TSM database as it was when it was backed up. Your problem now is that you may have data on your disk storage pools that is not recorded in your database, or your database will think data exists on disk that has been moved off. The database has also lost all record of any tape activity that has happened since the backup, so you need to get these two in step again.
Before you start TSM up, go into the dsmserv.opt file and add the lines
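The three server options that match this description (stopping migration and reclamation, client schedules, and automatic expiration) would be along these lines; verify the option names against your server level before relying on them:

```
NOMIGRRECL
DISABLESCHEDS YES
EXPINTERVAL 0
```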

These three options will prevent migration, client schedules and expire inventory from running. Now start your server in the foreground and run the command DISABLE SESSIONS to stop clients from contacting the server.

Audit your disk storage pools using AUDIT VOLUME volume_name FIX=YES; that will hopefully fix any problems, but you may need to delete and redefine your disk volumes, discarding faulty data, to get migration to run clean.
Audit your tape library, and that will let TSM know the current location of all tapes.
Check your latest saved volhist file for any tapes that have been deleted or updated since the backup ran. You will need to audit these tapes too. Once you complete the audits, back out the changes you made to the dsmserv.opt file, halt the TSM server, then start it normally and enable sessions again.


Database and log Mirroring

There are three levels of mirroring: hardware controlled, operating system controlled and TSM controlled.

Mirroring protects the database from disk failure, and also from subsystem or site failure if the mirroring is between subsystems or sites. Mirroring also offers some protection from system failure, as the chance that at least one of the mirror writes was successful is much higher. TSM mirroring can detect if a partial write has occurred, and a mirror volume can then be used to construct valid images of the missing pages. TSM mirroring can complement hardware mirroring. It is best to mirror both the database and the recovery log to optimise availability and recoverability.

If you are using automatic database or logfile expansion with mirroring, then this will place both the primary file and the mirrored file in the same directory, as only one directory path can be specified. This means that the primary file and mirrored file could end up on the same disk, so they will need to be separated.

This sounds obvious, but the mirrors need to be on different disks. It is possible to place them on the same disk, but that would be pretty pointless. It is also possible to mirror three ways as well as two ways; with three-way mirroring you get three copies of the data.

Hardware mirroring (RAID1)
Most disk subsystems support RAID1 mirroring, which is expensive as it needs twice as much disk, and will not detect logical errors in the data. All data is mirrored even if it is corrupt.
Operating System Mirroring
IBM state that disk striping is suitable for large sequential-access files that need high performance. AIX supports RAID0, RAID1 and RAID10. RAID0 is not really a good idea, as if a logical volume is spread over five physical volumes, then the logical volume is five times more likely to be affected by a disk crash, and if one disk crashes, the whole file is lost. RAID1 is straight disk mirroring with two stripes and requires twice as much disk. RAID10 combines striping and mirroring, and also uses twice as much disk.
If AIX is mirroring raw logical volumes it is possible for it to overwrite some TSM control information, as they both write to the same user area on a disk. The impact would be that TSM would be unable to vary volumes online.
TSM mirroring
TSM software mirroring just applies to the legacy database. If TSM is managing the mirror and it detects corrupt data during a write, it will not write the corrupt data to the second copy. TSM can then use the good copy to fix the corrupt mirror. TSM also mirrors at transaction level, while hardware mirrors at I/O level; hardware will always mirror every I/O, but TSM will only mirror complete transactions. This also protects the mirror from corruption.


Disk Storage Pools

TSM will only perform one concurrent I/O operation to each storage pool volume, so it is better to have a number of smaller volumes than a single large volume in your disk storage pools. Also, it is easier to relocate a small volume to a different disk pool if space requirements change. However, every volume will use one TSM processing thread, and the TSM server will crash if too many threads are allocated.
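A sketch of allocating several small volumes rather than one large one; the paths, pool name and 2 GB size are examples, not values from the original:

```
# Format a 2 GB storage pool volume, then define it to the disk pool;
# repeat on different file systems / physical disks for each volume
dsmfmt -m -data /tsmdisk1/vol01.dsm 2048
dsmadmc -id=admin -password=xxxx "define volume backuppool /tsmdisk1/vol01.dsm"
```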

The normal process is to initially write backup data to disk, then move it off to tape. It is possible to copy the data to tape but not delete it from the disk pool, if the disk pool's cache setting is set to 'yes'. The TSM server will then use the disk data for restores and delete it as space is needed for fresh backups. This speeds up recovery, but slows down backups as the TSM server has to do extra work clearing out data, and it also makes the TSM database bigger as TSM needs to store two locations for recently backed up data. It is your choice: faster restores, or slower backups and a bigger database.
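Caching is controlled per storage pool. Assuming a disk pool called backuppool (a hypothetical name), the setting would be:

```
dsmadmc -id=admin -password=xxxx "update stgpool backuppool cache=yes"
```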

