Once you install your TSM client software, you need to configure some option files before you can start backing up data. On Windows this is the dsm.opt file, whereas on Unix or Linux you need both a dsm.opt file and a dsm.sys file. These files contain the connectivity information needed to reach the TSM server, parameters that describe how the TSM client will run, and statements that control what data will or will not be backed up.
Selecting and Excluding Data
Selecting and Excluding Drives and Mountpoints
It is possible to select and exclude data using client option sets on the server. This has the advantage of making all the clients in a group consistent, but it also means a loss of granularity. As always, the choice is yours; this section describes how to select data from the client side. You select the data you want to back up at a high level by using a DOMAIN statement in the dsm.opt file on a Windows client, or in dsm.sys on Linux or Unix. Typical statements would look like this
DOMAIN c: d:
DOMAIN /etc /var /bin
The problem with these approaches is that you need to remember to update the options file if you add new drives or filespaces. The alternative statement
DOMAIN ALL-LOCAL
will back up everything. You can then exclude filespaces that you don't want to back up, like this
DOMAIN ALL-LOCAL -s:
DOMAIN ALL-LOCAL -/temp
Including and Excluding data
INCLUDE and EXCLUDE statements are used to select which data to back up, to attach different management classes to selected data, or to encrypt the backups for selected data. There are four pairs of INCLUDE and EXCLUDE statements: INCLUDE and EXCLUDE, INCLUDE.DIR and EXCLUDE.DIR, INCLUDE.ENCRYPT and EXCLUDE.ENCRYPT, and INCLUDE.FS and EXCLUDE.FS. These relate to including or excluding files, directories, encrypted data and filespaces. Syntax is
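Typical syntax looks like this (the paths shown are illustrative):

```
EXCLUDE C:\mydir\...\*.tmp
INCLUDE C:\mydir\important\...\*
EXCLUDE.DIR C:\mydir\scratch
EXCLUDE.FS E:
```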
Note that the directory exclude does not end with a '\'. The inclexcl list is normally processed from the bottom up but EXCLUDE.DIR is processed before the regular INCLUDE or EXCLUDE statements so the position of EXCLUDE.DIR does not matter. However it is best to put the DIR processing statements at the bottom of the list to make it more obvious how the processing works.
If the data path you are describing includes spaces, you must include the full statement in quotes, i.e.
INCLUDE "C:\Program Files\Microsoft SQL Server\MSSQL\Backup\...\*"
The \...\ means include subdirectories. If you just coded Server\MSSQL\Backup\* then the include would only apply to files within the backup directory, and not any files in subdirectories.
Remember, the EXCLUDE statements must come before the INCLUDE
EXCLUDE \filespec\*.* will only exclude files which have a period in the name. If you want to exclude all files in a path, you need EXCLUDE \filespec\* as this form will exclude both files with extensions and files with no extensions.
The exclude.fs statement will exclude an entire file system, and it is more efficient than using a plain exclude statement. An exclude.fs suppresses any examination of the directories within the file system. A simple exclude that specifies an entire file system does not, so the TSM client will still read every file name in every directory within the file system, check each file name against the include and exclude statements, and then decide not to back up that file (assuming the exclude for the entire file system is in the right place in the sequence of include and exclude statements).
You can also be very granular with the use of encryption: an include.encrypt statement can be written to encrypt all backup data for a single directory, such as a datadirect directory, only.
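For example, to encrypt backups of a datadirect directory only (the path shown is illustrative):

```
INCLUDE.ENCRYPT /datadirect/.../*
```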
The query inclexcl command lets you check your syntax. A standard q inclexcl will not display management class assignments, if you want to see them you need to use
q inclexcl -detail
If you use the backup/archive GUI then you can use it to check whether the expected files are excluded. Open the GUI, select Backup, then navigate through the directories to the files that you expect to be excluded. If your statements are correct, these files will be flagged in the bottom right corner with a red circle with a line through it.
Another variant is to use ranges within EXCLUDE commands, for example, say you wanted to backup a large disk with three backup streams, but you did not want to have to change include statements when adding new directories (it would be more likely that you would not know when new directories were added, so they would be missed). You could define three clients, each with its own dsm.opt file, with exclude statements as shown below. You would need to ensure that the ranges covered every possible directory name.
Content of dsm_a.opt
Content of dsm_b.opt
Content of dsm_c.opt
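A sketch of what the three option files might contain (the disk path and the directory name ranges are assumptions, and the ranges must be adjusted to cover every possible starting character):

```
* dsm_a.opt - this client backs up directories starting a-h
EXCLUDE.DIR /bigdisk/[i-q]*
EXCLUDE.DIR /bigdisk/[r-z]*

* dsm_b.opt - this client backs up directories starting i-q
EXCLUDE.DIR /bigdisk/[a-h]*
EXCLUDE.DIR /bigdisk/[r-z]*

* dsm_c.opt - this client backs up directories starting r-z
EXCLUDE.DIR /bigdisk/[a-h]*
EXCLUDE.DIR /bigdisk/[i-q]*
```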
You can use multiple wild cards, but only at one level in the file specification. For example, dsmc sel \opt\data\?file* will back up all files in the \opt\data\ directory that have exactly one alphanumeric character before the expression 'file', then any number of characters after it. However the expression dsmc sel \opt\*\file* will fail with ANS1076E, as wildcards are used at two levels.
So now you have your options files defined and you can start running backups. You would normally schedule regular backups from the TSM Server, and this is discussed in the TSM Server section. You might want to test everything works first by running a manual backup from the client side.
Client command line or GUI?
On the client side I used to prefer the command line, simply because you get much more control over backups and restores. In fact, on a UNIX machine the command line is probably the best option. If your client has multiple server stanzas, then you can invoke each stanza with the servername parameter. For example, if you define a stanza for Oracle RMAN and use a servername of oracle-backup in it, then you would start a client TSM session like this.
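A sketch of the invocation, using the oracle-backup stanza name from the text (-se is the standard abbreviation of the dsmc -servername option):

```
dsmc -se=oracle-backup
```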
On Windows clients, I find the GUI by far the best for doing restores, and probably the command line for doing anything else, but it's all a matter of personal taste.
Running backups manually from the Client
You can run an incremental backup manually from the command line by simply starting up dsmc, then entering 'i'. This will run an incremental backup of all the domains in the dsm.opt file. dsmc i "c:\program files\*" -subdir=yes will do an incremental backup of that directory and its subdirectories only; note that the path must be quoted as it contains a space. You need to be logged in as administrator or root to run this command, as you will need to have access to all the files. If you cannot get those elevated privileges, then you need to run a one-off schedule at the TSM server, as detailed below.
Scheduling a one-off backup
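As a sketch, a one-off schedule could be defined on the TSM server like this (the domain, schedule and node names are illustrative):

```
define schedule standard once_only action=incremental starttime=now perunits=onetime
define association standard once_only node247
```

PERUNITS=Onetime makes the schedule run just once, and the association ties it to the node you want backed up.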
TSM 6.4.1 introduced a new -absolute option that allows you to force a backup of all files, whether or not they have changed since the last backup. -absolute will not back up any files that are matched by exclude statements, and it is only valid for full or partial progressive incremental backups of file systems or disk drives. You can use it with snapshot differential backups if you add the createnewbase=yes parameter, and with journal-based backups if you also specify the -nojournal parameter.
What if a disk is not specified in your dsm.opt file? Say you have a DOMAIN e: f: statement, and you want to back up the d: drive. A standard incremental will not do this, as the dsm.opt file does not include it, but you can override dsm.opt with a domain parameter on your command: dsmc i -domain=d: This adds the d: to the domain list, so the command will back up the d:, and the e: and f: too.
Using ASNODE for backups and restores
TSM backups often use Proxy Nodes, where a 'master' client stores data on behalf of a number of normal clients. You often see this with VMware, GPFS and cluster backups. Backups are assigned to the proxy node by using the ASNODE parameter in a backup, either by using an -asnodename=proxynodename parameter in the schedule definition, or by starting a dsmc commandline with a parameter '-asnode=proxynodename'. For this to work, someone must have already issued a 'grant authority' command that allows the proxy node owner authority over the client node backups.
If you use the asnodename option to back up a client, then be aware that parameters like TXNGROUPMAX and MAXNUMMP apply to the proxy node, not the target node.
For example, you define a proxy relationship where a target node of NODE247 is related to PROXY1 as an agent node, and you want to set TXNGROUPMAX to 12288 for the backups. If you set TXNGROUPMAX on NODE247 to 12288 but leave PROXY1 to default to 4096, then the backup will just batch up 4096 objects in a transaction. You need to update the agent node PROXY1: 'update node PROXY1 TXNGROUPMAX=12288'
While scheduled TSM backups are normally taken using the root or an admin user, it is possible for any user to backup and restore files, providing that user can access those files. This is controlled by an OWNER field in the backups table, so if I ran a backup of my home drive with my CSISJA userid, the owner field would be set to OWNER: CSISJA. TSM would then know that I 'owned' those backups, and would let me restore them.
However, if I was to try that backup using the asnode parameter with command
dsmc backup /home/csisja/* -asnode=proxy1
then this process breaks down as the owner field is set to OWNER: *. If I then tried to query those backups from my CSISJA userid I would not see them and I would not be able to restore them as I was not the owner.
This is 'working as designed'. The reason given is that the purpose of the ASNODE parameter is to allow any client in a proxy node group to be able to access and restore the data from any other client in that group, and so individual file ownership is ignored. It should be possible to process these files using the fromnode and fromowner parameters as shown below
dsmc query backup /home/CSISJA/* -fromnode=proxy1 -fromowner="*"
Some Useful Backup Parameters
Controlling the amount of data going to the TSM scheduling and error logs
The dsmsched.log and the dsmerror.log are usually the first port of call when investigating problems. They are usually found in the client/ba/ or baclient/ directory. TSM will update both these files every time it runs a scheduled backup and will record every backed up file. The problem is that if they are not controlled, the logs will quickly become too big to manage.
You have two parameters in your dsm.opt file that control the data held in these files, schedlogretention and errorlogretention. The default values are schedlogretention N and errorlogretention N, which means never prune the logs. Other options are
ERRORLOGRetention 7 D
which means keep the errors for 7 days then discard them, or
ERRORLOGRetention 7 S
which means after 7 days move the data to dsmerlog.pru. The schedlogretention syntax is the same. You can select how many days you want to keep your logs for. You can also add a QUIET parameter to your dsm.opt file, which will suppress most of the messages, but this is not recommended as you lose most of your audit trail.
A further pair of parameters, ERRORLOGMAX and SCHEDLOGMAX, were introduced with the 5.3 baclient.
These parameters cause the logs to wrap, so when they reach the end of the file they start to overwrite the data at the beginning. The end of the current data is indicated by an 'END OF DATA' record. The nn value in ERRORLOGMAX nn or SCHEDLOGMAX nn is the maximum size of the log in megabytes, with a range from 0 to 2047. 0 is the default and means do not wrap the log.
Controlling how often 'files in use' are retried
This parameter is usually set at the TSM server for all clients and values are typically 3-5, with 4 being the default. This is fine for most of your data, but suppose you must take a daytime backup of a user area that you know contains several '.pst' mailbox files, that can be several gigabytes big and will probably be in use. If you need to retry all these files 4 times, before TSM accepts the failure and moves on, your backup will take hours. You can override this default for a specific client by adding the parameter below to your dsm.opt file, which means just retry files in use once.
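The parameter in question is changingretries; a value of 1 means a file that is in use is retried just once:

```
CHANGINGRETRIES 1
```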
Using Different Management Classes
One way to bind a set of backups to a different management class is to add an include statement into the client options file with a statement like
INCLUDE "C:\Program Files\Microsoft SQL Server\MSSQL\Backup\...\*" MCSQLBK
This means bind all files, and files in subdirectories, in the Backup directory to the special management class MCSQLBK. If you add this statement, all previous backups of these files will be rebound to the new management class. The '\...\' means scan subdirectories.
The rebind happens next time a backup runs. This will work for every backup version of the file, not just for the active one. The file must be examined again to get a new backup class, so you cannot change management classes for files that have been deleted from the client.
Another way is to define a client option set on the TSM server that contains INCLUDE and DIRMC statements that binds all files and directories to the desired management class, then update the client node to use that client option set.
Finally you could define a domain and policy set that contains only the single management class by which you want to manage the client node data, then assign the desired nodes to that domain.
To compress or not to compress?
Client compression will use up CPU cycles on your client server. However, once the data is compressed, it will use less network resource and will also reduce the I/O pressure on your TSM server. This used to be a big plus point in the days of restricted bandwidth. However, these days, with gigabit ethernet, it's probable that the CPU consumption on the client actually outweighs any benefit you might get from better network usage.
If your data is a good compression candidate (big Oracle databases can compress down to 20% of their original size) then you might get a benefit from client compression. If your clients are CPU constrained and you have a good network, then your backups will probably run slower with compression turned on.
You should also not use compression if your clients use file compression. If compression would make a file bigger than it was before, which happens if a file is already compressed, then the file transfer will fail and will be retried without compression. If this happens for lots of files you will suffer a real performance hit. You then have three options: don't use compression; set the COMPRESSALWAYS parameter to yes, so a file is compressed and transmitted even if it grows with compression; or just exclude problem file types from compression with EXCLUDE.COMPRESSION statements.
My impression is that most people run with client compression off these days.
If you are network constrained and want to try compression, then configure it for one client of a given type (databases, for example) and monitor the backup times and data transfer rates to see if it is speeding up your backups.
Selective Compression by Directory
If you want to compress some parts of a filespace but not others, maybe because some directories contain large files which should compress well, then try the following in the dsm.opt file
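A sketch, with illustrative paths: turn compression on globally, then use exclude.compression statements for the directories you do not want compressed, such as those holding already-compressed files.

```
COMPRESSION YES
EXCLUDE.COMPRESSION /home/appdata/zipped/.../*
```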
How do you get compression statistics for a given backup session?
From the Client side, when the backup completes, look at the SCHEDLOG (normally dsmsched.log) on the client and page down to the last paragraph. This will give (among other stuff) compression statistics for the backup in the line 'Objects compressed by n %'
From the server side, issue command 'q actlog' and look for the lines starting ANE4968I. These give you compression stats for each client.
Note that when you look at space occupied by backups, you see two numbers, %utilised, and %reclaimable. These two numbers do not add to 100%. This is because the reclaimable space includes 'holes' within Aggregates, whereas utilised space considers Aggregates as intact units.
Encrypting the backup data
It is generally considered that TSM backup data is secure, as it cannot be read without a copy of the database. However if you have a legal requirement for full data encryption then standard DES 56-bit encryption is available. When you turn on encryption, you will be prompted to create a unique key. Without this key, you won't be able to restore your data. It is very important that you keep a copy of this key someplace other than the computer that is being backed up. If you forget the encryption key, then the data cannot be restored or retrieved under any circumstances.
To enable encryption you add an encryptkey parameter to the dsm.opt file on the client, and add include.encrypt and exclude.encrypt statements as required. TSM will not encrypt anything unless at least one include.encrypt statement is present.
The encryptkey option can be set to save, prompt or generate.
With the prompt option you should see the following message every time you run a backup
User action is required, file requires an encryption key
And you need to provide the key twice
You should only see the password required prompt once, and the password is saved in the tsm.pwd file
The manual states that with the encryptkey option set to prompt you will be prompted for the password on every backup or restore, but in my experience it appears that TSM stores this password on the local client, probably in a file called tivinv/tsmbacnw.sig, so you can then run further backups or restores from that client without having to specify a password, and TSM encrypts or decrypts the data as required. If you delete that sig file, or presumably try to recover to a different client, then you will be given a selection screen with the following options
1 Prompt for password
2 Skip this file
3 Skip all encrypted files
4 Abort the restore
Taking option 1, you will be prompted for the encryption password twice, then the restore runs as normal.
Specifying 2 or more servers from 1 client
The first question is: why would you want to define several servers in the same client? The answer is that you typically do this when you want to handle different parts of your client data differently, maybe for databases, or maybe for shared resources in client clusters. In this case you would define some virtual TSM servers in the same dsm.sys file.
On a Windows client, if you want to define two TSM servers, and you want to be able to specify either server from a TSM client, add the following lines to the dsm.opt file
servername s2
tcpserveraddress ip-address or domain-name
you then have a 'primary' server which you pick up by default, and you can invoke the secondary server using
dsmc -se=s2 etc
On an AIX client you would define a different dsm.opt file for each server. For example, suppose you want a basic client to back up the rootvg, an Oracle client for database backups, and an HACMP client for the non-database data on the shared resource. You need three opt files, which for example could be defined like this
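A sketch of the three option files (the stanza names are illustrative; the file names dsm_Oracle.opt and dsm_HACMP.opt match the dsmcad links later in this section):

```
* dsm.opt - basic client for rootvg backups
SERVERNAME aix_basic

* dsm_Oracle.opt - oracle database backups
SERVERNAME aix_oracle

* dsm_HACMP.opt - shared resource backups
SERVERNAME aix_hacmp
```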
Then in your dsm.sys file you would code
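A sketch of the matching dsm.sys stanzas (the server address, node names and log paths are illustrative):

```
SERVERNAME aix_basic
   TCPSERVERADDRESS tsmserver.company.com
   NODENAME server01
   SCHEDLOGNAME /tivoli/tsm/client/ba/bin/dsmsched_basic.log
   ERRORLOGNAME /tivoli/tsm/client/ba/bin/dsmerror_basic.log

SERVERNAME aix_oracle
   TCPSERVERADDRESS tsmserver.company.com
   NODENAME server01_oracle
   SCHEDLOGNAME /tivoli/tsm/client/ba/bin/dsmsched_oracle.log
   ERRORLOGNAME /tivoli/tsm/client/ba/bin/dsmerror_oracle.log

SERVERNAME aix_hacmp
   TCPSERVERADDRESS tsmserver.company.com
   NODENAME cluster01_hacmp
   SCHEDLOGNAME /tivoli/tsm/client/ba/bin/dsmsched_hacmp.log
   ERRORLOGNAME /tivoli/tsm/client/ba/bin/dsmerror_hacmp.log
```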
Note that the tcpserveraddress is the same for each 'server' and is the dns name of the real tsm server. If you make each server stanza write to a different set of logs then that makes it easier to investigate issues. Each of the three nodenames is defined independently to the TSM server, so they can be scheduled independently. You would also define two symbolic links for extra dsmcads, so each stanza can be scheduled independently like this.
ln -s dsmcad /tivoli/tsm/client/ba/bin/dsmcad_oracle
ln -s dsmcad /tivoli/tsm/client/ba/bin/dsmcad_hacmp
Each extra dsmcad is then started with its own option file, for example dsmcad_oracle -optfile=/tivoli/tsm/client/ba/bin/dsm_Oracle.opt
Compression space errors
Before a client sends a file to the TSM server, TSM allocates space equal to the size of that file in the TSM server's disk storage pool. If caching is active in the disk storage pool, and files need to be removed to make space, they are. But if the file grows in compression (the client has COMPRESSIon=Yes and COMPRESSAlways=Yes), the cleared space is insufficient to contain the incoming data.
Typically, this results in an error - 'ANR0534W Transaction failed for session ssss for node nnnn - size estimate exceeded and server is unable to obtain storage'
This commonly happens where client compression is turned on, and client has large or many compressed files: TSM is fooled as compression increases the size of an already-compressed file.
The only resolution is to take client compression off.
TSM fails with out of memory errors on the backup client.
The first thing that TSM does when it starts a backup is to scan each filespace, then build a list of files that need a backup. It normally does all the calculations and holds the file list in memory. The amount of memory needed depends on the lengths of the filenames and paths. Sometimes 500,000 files can be a problem and sometimes TSM can cope with millions of files so it's hard to predict exactly when memory problems will start. If you are having problems with TSM running out of client memory then you have a number of options to fix it.
The easiest solution is to use memory efficient backup, possibly combined with incrbydate, described later on this page. Other options include journal backups, image backups or, on a UNIX system, defining multiple virtual mount points within one file system, each backed up independently.
Two variants of Memory Efficient Backup exist;
The first method changes filespace processing so instead of scanning and building a file list for an entire filespace, TSM scans and processes one directory at a time. This method does not work if all your problem files are concentrated in one directory though, which can happen if someone switches on trace logging and forgets about it.
The second method scans the entire filespace, but holds the file list on disk.
To implement the first method, simply place the line memoryefficientbackup yes in your dsm.opt file (dsm.sys in UNIX), or if you are backing up by command, add the parameter -memoryefficientbackup=yes
To use the second method, you use INCLUDE.FS statements to tell TSM where to store the file list. For example
INCLUDE.FS e: MEMORYEFFICIENTBACKUP=DISKCACHEMETHOD DISKCACHELOCATION=E:\TSM_cache
INCLUDE.FS f: MEMORYEFFICIENTBACKUP=DISKCACHEMETHOD DISKCACHELOCATION=F:\TSM_cache
The first time that you use a disk cache it will require lots of disk space, potentially several gigabytes, but the following backups will use less space.
You can combine both methods, then filespaces that specifically have INCLUDE.FS statements as above will use the disk cache method, while all other filespaces will use standard memory efficient backup.
Backups overrunning, TSM continually mounting tapes
If you run backups of a server where the files are bound to different management classes, with some directed to disk and some to tape, then you might see TSM continually mounting and dismounting the tape. The backup runtime will be much longer than expected and you will see lots of messages like this on the TSM server log, and lots of 'Waiting for mount of offline media' messages on the client schedule log.
... 03:40:55 ANR0511I Session 5555 opened output volume T21345.(SESSION: 12345)
... 03:42:29 ANR0514I Session 5555 closed volume T21345. (SESSION:12345)
... 03:43:07 ANR0511I Session 5555 opened output volume T21345.(SESSION: 12345)
... 03:43:08 ANR0514I Session 5555 closed volume T21345. (SESSION:12345)
... 03:43:10 ANR0511I Session 5555 opened output volume T21345.(SESSION: 12345)
... 03:43:12 ANR0514I Session 5555 closed volume T21345. (SESSION:12345)
The problem is that every time TSM switches from backing up data to tape, to backing up data to disk, the tape volume is closed and dismounted, so next time it wants to send a file to tape it has to re-mount the tape drive and open the tape volume again.
The resolution is quite simple: just change the TSM client node definition so it keeps the mount point.
update node node_name keepmp=yes
Some special backups
Incremental by Date backups; faster but less secure
You would use this type of backup if your backup window is not long enough during the week, but you have plenty of time at the weekend. An incremental by date backup uses the last updated timestamp on a file to decide if it needs to be backed up or not. The problem is that this field is not one hundred percent reliable on open systems data, as some applications can update data without changing the last update field. A normal TSM backup will compare the attributes of every file with the current active backup and, if they do not match, will take a new backup. Incremental by date simply looks at the last modification date, so it is much faster and uses less memory. The downside is that it might not back up every changed file.
Also, it will not expire backup versions of files that are deleted from the workstation, it will not rebind backup versions to a new management class if the management class has changed, and it ignores the copy group frequency attribute of management classes.
To run an incremental by date backup, you add the parameter '-incrbydate' in the 'OPTIONS' box in the 'Define Client Schedule' GUI window.
If you use incremental by date, then to ensure you do get a full client backup, and correctly manage file expiration and management class changes, you should plan to take at least one full incremental backup every week.
Adaptive Subfile Backups
What is the difference between Progressive Incremental and Adaptive Subfile backups?
TSM standard backups use progressive incremental, and IBM recommends that this type of backup should always be used where there is sufficient stable network bandwidth. A progressive incremental backup copies all files to the TSM store the first time a backup is run, then just copies changed files on subsequent backups. Older versions of changed files are retained at the TSM server depending on the management class settings, but when a file changes, the entire file is copied on the next backup run.
Adaptive Subfile Backup is used for limited bandwidth networks or when there is a limited connection. Examples include a modem, wireless, or mobile connection. It backs up only the parts of a file that have changed since the last backup, essentially incremental backup within the file. This reduces the amount of transfer time and data transferred over the network. The TSM Server stores a complete full backup of the original file as a base file, and subsequent changed parts of the file called deltas.
The information required to create these deltas is stored in a subfile cache folder in the \baclient\ directory at the TSM client. Files smaller than 1KB or larger than 2GB are currently not supported by subfile backup. As the base file is required to recreate the current file, it is not deleted from backup when it passes data retention requirements, but older delta files will be deleted from backup to comply with the management class policies.
You invoke adaptive subfile backup by adding the parameter subfilebackup yes
in your dsm.opt file. You can also be selective in the same file and specify exactly what you want to be processed by subfile backup, using include and exclude statements.
The subfilebackup option does not work correctly for migrated files. If you use a combination of subfilebackup and non-subfilebackup for migrated files, your data might be corrupted on the server.
Using NetApp Snapdiff backups
Snapshot backups work well with TSM. You take an instant backup to disk, so you get a consistent copy of the data frozen at a point in time, while applications can continue to run and update the live data without affecting the snapshot. You then use TSM to move that frozen data off disk and onto TSM backup media. NetApp snapshots use copy-on-write; take a look at the snapshot section if you want to understand how snapshots work in detail.
The TSM incremental forever philosophy has a drawback; TSM has to scan the filesystem every time a backup is run, to work out which files have changed and need to be backed up, and this can take some time. For NAS and N-Series file servers that are running ONTAP 7.3.0, or later, TSM can use a NetApp feature so NetApp tells TSM which files to backup, if a TSM backup is run using the -snapdiff option.
The first time you perform an incremental backup with the snapdiff option, a snapshot is created (the base snapshot) and a traditional incremental backup is run using this snapshot as the source. The name of the snapshot that is created is recorded in the TSM database.
The second time an incremental backup is run with this option, a newer snapshot is either created, or an existing one is used to find the differences between these two snapshots. The second snapshot is called the diffsnapshot. TSM then incrementally backs up the files reported as changed by NetApp to the TSM server.
After backing up data using the snapdiff option, the snapshot that was used as the base snapshot is deleted from the snapshot directory, provided that it was created by TSM.
On Windows systems, the snapshot directory is in ~snapshot. On AIX and Linux systems, the snapshot directory is in .snapshot.
There are a few limitations;
You must configure a user ID and password on the Tivoli Storage Manager client to enable snapshot difference processing.
The filesystem that you select for snapshot difference processing must be mounted to the root of the volume. You cannot use the snapdiff option for any filesystem that is not mounted to the root of the volume.
For Windows operating systems, the snapdiff option can only be used to backup NAS/N-Series file server volumes that are NFS or CIFS attached and none of the NetApp predefined shares can be backed up using the snapdiff option, including C$, because the TSM client cannot determine their mount points programmatically.
For AIX and Linux operating systems, incremental backup using snapshot difference is only available with TSM 64 bit clients.
Because TSM is not deciding which files to back up, there are also some quirks in the way include/exclude processing works. Normally, if you change your exclude definitions, then all the files that are no longer excluded will be backed up the next time you run an incremental. However, NetApp knows nothing of this, so if you are running snapdiff backups and you change the exclude statements, then those files will not be backed up until they are updated.
There are some other reasons why backups might be missed.
If you use the dsmc delete backup command to explicitly delete a file from the TSM inventory, NetApp does not detect that the file has been manually deleted from TSM.
If you want to run a full backup and change the TSM policy setting from mode=modified to mode=absolute, this will not be detected by NetApp and an incremental backup will run.
If the entire file space is deleted from the TSM inventory, this will cause the snapshot difference option to create a new base snapshot to use as the source, and run a full incremental backup.
To make sure that all these changes are picked up correctly, you need to create a new base snapshot by running a backup with the CREATENEWBASE parameter:
dsmc incremental -snapdiff -createnewbase=yes /netapp/home