IBM Spectrum Protect Storage Pools

IBM Spectrum Protect supports a number of different pool types for storing backup data: random access DISK pools, sequential FILE and tape pools, copy and active-data pools, and the newer directory-container and cloud-container pools. The sections below cover how to set them up and manage them.

Best practice for IBM Spectrum Protect Storage Pool disks

Take a look at the IBM Spectrum Protect database page for recommendations on database pool disks.

The type of disk you use can make a difference for some pools. Use faster disks for the active logs, and use SSD if you can afford it. Avoid the slower internal disks included by default in most AIX servers, and avoid consumer grade PATA/SATA disks too. Do not mix active logs onto disks containing the database, archive logs, or system files such as page or swap space. Slower disks can be used for archive logs and failover archive logs if needed.
Cache subsystem 'readahead' is worth enabling for the active logs, as it helps archive them faster. Disk subsystems detect readahead on a LUN by LUN basis, and if multiple reads are running against one LUN this detection fails. So several smaller LUNs are better than a few large ones, although too many LUNs become harder to manage.

If you use RAID, then define all your LUNs with the same size and type; don't mix 4+1 RAID5 and 4+2 RAID6 together. RAID10 will outperform RAID5 for heavy write workloads, but costs twice as much in disk. RAID1 is a good choice for active logs.
However, it is very difficult to give generic rules about disk configuration, as so much depends on the type of disks you are using.

High end disk subsystems such as the EMC DMX, the HDS VSP and the IBM DS8000 have a very large front end cache to speed up performance, and stripe data in a way that makes it difficult to separate data by physical spindle. The IBM XIV takes this virtualisation to a higher level again. To get the best performance from these devices you want enough LUNs to spread the IO and get the readahead cache benefit, but not so many that they become difficult to manage. For the XIV, consider using a queue depth of 64 per HBA to take best advantage of its parallelism capabilities.

Don't stripe your data using logical volumes; let the hardware do the striping. As a rule of thumb, consider using 100GB volumes for DISK pools and 50GB volumes for FILE pools. Define the same number of volumes per LUN as there are data disks in the RAID array behind the LUN. For example, with 4+1 RAID5 and 50GB volumes, define four volumes per LUN; each LUN then uses 250GB of raw disk, with an effective capacity of 200GB.
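
As an illustration of that rule of thumb, pre-formatted volumes can be created with the DEFINE VOLUME command. The pool name and paths below are hypothetical, and FORMATSIZE is specified in megabytes:

DEFINE VOLUME DISKPOOL /tsmpool/lun01/vol01.dsm FORMATSIZE=51200
DEFINE VOLUME DISKPOOL /tsmpool/lun01/vol02.dsm FORMATSIZE=51200
DEFINE VOLUME DISKPOOL /tsmpool/lun01/vol03.dsm FORMATSIZE=51200
DEFINE VOLUME DISKPOOL /tsmpool/lun01/vol04.dsm FORMATSIZE=51200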

The Unix Tips section contains some detail on how to use VMSTAT and IOSTAT commands to investigate potential disk bottlenecks.



Using Cache on Disk Storage Pools

It is possible to speed up recoveries by keeping a backup copy on disk, after it has been migrated to tape. This is called disk caching. By default, caching is disabled on storage pools so the backups on disk are deleted as soon as they are migrated to tape. You need to enable disk cache by specifying CACHE=YES when you define or update a storage pool. Then the backup copy on disk will be kept until space is needed in the disk pool for new backups. Disk cache is useful for pools that have a high recovery rate.
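
For example, to switch caching on for an existing disk pool (the pool name here is just illustrative):

UPDATE STGPOOL BACKUPPOOL CACHE=YES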

The disadvantage of disk cache is that backups will take longer, as the backup operation has to clear space in the disk pool before it can copy a new backup to disk. Disk cache will also increase the size of the IBM Spectrum Protect database, as it needs to track two backup copies, one on disk and one on tape.

If you run a query storage pool command

Q stgpool backuppool

the output reports pool occupancy in two columns, 'Pct Util' and 'Pct Migr'. The Pct Util (utilised space) includes the space used by any cached copies of files in the storage pool. The Pct Migr (migratable space) does not include space occupied by cached copies of files.

Storage pool migration triggers on the 'Pct Migr' value, not the 'Pct Util' value shown in the query output. This can cause confusion, as a storage pool can appear to be full when most of its data is cached data. You may then expect automatic migration processes to run, but they will not start until the 'Pct Migr' threshold is reached.
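
If you want to see the two values side by side, a SELECT along these lines should work, assuming the standard STGPOOLS table columns at your server level:

SELECT STGPOOL_NAME, PCT_UTILIZED, PCT_MIGR FROM STGPOOLS WHERE STGPOOL_NAME='BACKUPPOOL'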

It is not possible to duplex storage pools, that is, to simultaneously write the same data to two primary pools. If you want a second copy of a pool you need to schedule the BACKUP STGPOOL command

backup stgpool primary_pool_name backup_pool_name

This command will just copy new data to the backup pool, that is, data that was written to the primary pool since the last backup command ran. If you want to know which volumes will be needed for a backup, you can run the command with the preview option.

backup stgpool primary_pool_name backup_pool_name preview=volumesonly

IBM Spectrum Protect will write out the required volume names to the activity log. Search for message ANR1228I to find them.
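
One way to find them is to query the activity log by message number, something like:

QUERY ACTLOG MSGNO=1228 BEGINDATE=TODAY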



Storage Pool Migration

IBM Spectrum Protect traditionally caches backup data in a disk pool to speed up backups, then moves that data off to a tape pool as part of its housekeeping routine. This data movement is called 'migration' and is carried out by IBM Spectrum Protect processes. However, you cannot run or control these processes directly; you manage them by changing the values of parameters on a storage pool: 'nextstgpool', 'highmig', 'lowmig', 'migprocess' and 'migdelay'.

It is blindingly obvious, but if you don't define a second storage pool in the NEXTstgpool parameter, migration will never work. Traditionally this is a tape pool, but you may want to use lower tier disk.
HIghmig and LOwmig control the triggers that start and stop migration. IBM Spectrum Protect will start migration processes when pool occupancy reaches the HIghmig threshold, and stop them when it gets down to the LOwmig threshold. If HIghmig=100, then migration cannot start (in effect it is switched off).
MIGPRocess controls how many processes can run in parallel, provided the other limits below do not come into play.
MIGDelay is the number of days that a file must exist in the primary storage pool before it can be migrated to the next pool.
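
Pulling those parameters together, a disk pool that migrates to tape might be set up something like this; the pool names and thresholds are examples only:

UPDATE STGPOOL DISKPOOL NEXTSTGPOOL=TAPEPOOL HIGHMIG=70 LOWMIG=30 MIGPROCESS=4 MIGDELAY=0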

The way IBM Spectrum Protect works out how many concurrent migration processes to run can be a bit confusing. The number of processes for any one storage pool cannot exceed its MIGPRocess setting. Another obvious limit, if you are migrating to tape, is the number of free tape drives, which means it is not wise to run migration alongside other tape hungry housekeeping tasks like reclamation.

IBM Spectrum Protect will also run just one migration process per node in the storage pool, so if only a small number of nodes back up to a pool, that will restrict the number of concurrent processes. IBM Spectrum Protect will start with the client that is using the most space, and then migrate its largest filespace first. This is a particular issue for big Oracle databases, as the data is all owned by one node and so will be migrated by a single process.

If you want to clear out an old disk pool you can do this with migration commands. You could move the data to an existing disk pool, a new disk pool or to tape. If you are going to use a new storage pool, then you need to create that pool and add sufficient volumes to it. The process would then be:

Update your old storage pool so that the target pool becomes its next pool

UPDATE STGPOOL pool_name NEXTSTGPOOL=new_stgpool

Set the highmig threshold on the old pool to 100 to prevent any automatic migration processes from running, then migrate the data from the old storage pool to the new one by using a MIGRATE STGPOOL command with the low threshold set to 0

UPDATE STGPOOL pool_name HI=100
MIGRATE STGPOOL pool_name LO=0

Occasionally check the status of the old pool to see if it is empty

QUERY STGPOOL pool_name

Once the pool is empty, delete the volumes from the old pool, then the storage pool.

DELETE VOLUME volume_name
DELETE STGPOOL pool_name



Using Active-data pools to speed up restores

The most recent backup of any file is called the 'active' backup, and all older versions are 'inactive' backups. Many files never change after they are created, and so are only backed up once by IBM Spectrum Protect. These older backups stay on their original tapes, so as time goes by the active backups for a server or file system get mixed up with lots of inactive backups and spread over many tapes. If you use tapes, the problem comes when you want to restore a file server or a large directory: IBM Spectrum Protect has to mount lots of tapes and scan through them selecting the active files, and this slows the restore right down.

Active-data pools are designed to fix this issue. They are storage pools that contain only active versions of client backup data. Newly created active backups are stored in active-data pools, and as older versions are deactivated they are removed during reclamation processing. Active-data pools can be disk based FILE type pools or a dedicated tape pool. FILE type pools offer the fastest restore times, partly because client sessions can access the volumes concurrently. Tape active-data pools are still beneficial, because a restore does not have to continually position the tape between inactive files.

Active-data pools should only be used for nodes that need to be recovered quickly in a disaster.

There are a few restrictions:
Restoring a primary storage pool from an active-data pool might cause some or all inactive files to be deleted from the database if the server determines that an inactive file needs to be replaced but cannot find it in the active-data pool. As a best practice and to protect your inactive data, therefore, you should create a minimum of two storage pools: one active-data pool, which contains only active data, and one copy storage pool, which contains both active and inactive data. You can use the active-data pool volumes to restore critical client node data, and afterward you can restore the primary storage pools from the copy storage pool volumes.
The server will not attempt to retrieve client files from an active-data pool during a point-in-time restore. Point-in-time restores require both active and inactive file versions, and for efficiency the server retrieves both from the same storage pool rather than switching between storage pools.
A directory-container storage pool cannot be used as an active-data storage pool.
Archive and HSM (space-managed) data is not allowed in active-data pools.

There are two ways to start using an active-data pool: either by command, using the COPY ACTIVEDATA command, or automatically, using the simultaneous-write function set up through the domain and storage pool definitions below. In either case, IBM Spectrum Protect will only use the active-data pool if the data belongs to a node that is a member of a policy domain that specifies the active-data pool as the destination for active data.

Before you can run with either method, you need to define the active-data pool with a command something like this:

DEFINE STGPOOL ADPPOOL fileclass POOLTYPE=ACTIVEDATA MAXSCRATCH=1000

and the domain must specify an active-data pool, like this:

UPD DOMAIN domainname ACTIVEDESTINATION=ADPPOOL

then, assuming this domain normally writes to a pool called BACKUPPOOL, add the active-data pool to it

UPDATE STGPOOL BACKUPPOOL ACTIVEDATAPOOLS=ADPPOOL

Now you would want to get any existing active data copied into this pool by using the command

COPY ACTIVEDATA BACKUPPOOL ADPPOOL

Then under normal processing, active data will be copied into this pool when backups run, and files that go inactive will be removed. You might also want to schedule a weekly copy command just to make sure that all the active data continues to exist in the active-data pool as time goes by.
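
A weekly administrative schedule for that copy could look something like this sketch; the schedule name and timings are illustrative:

DEFINE SCHEDULE COPY_ADP TYPE=ADMINISTRATIVE CMD="COPY ACTIVEDATA BACKUPPOOL ADPPOOL" ACTIVE=YES STARTTIME=06:00 PERIOD=1 PERUNITS=WEEKS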



Using Directory Container Storage Pools

IBM introduced a new type of storage pool in 2015, called a container storage pool. These pools were designed specifically to assist with data deduplication, as data can be deduplicated either at the client end or as it enters the IBM Spectrum Protect server. This is more efficient than having to deduplicate the data with a post-process.

Container storage pools come in two variants, Directory Containers and Cloud Containers.
Directory containers combine some of the features of disk and sequential pools and try to avoid the disadvantages of both. For example, there is no need to run reclamation, and the pool is no longer a fixed size. A disadvantage is that you need to learn a new command set to manage them, as traditional commands like BACKUP STGPOOL, EXPORT, IMPORT, GENERATE BACKUPSET, MOVE DATA, MOVE NODEDATA and MIGRATE STGPOOL do not work with containers.

Directory based storage pools are defined with the command

DEFINE STGPOOL poolname STGTYPE=DIRECTORY

The DEFINE STGPOOL command has lots of new parameters for directory containers, mainly to do with the PROTECT STGPOOL function and the maximum size that the pool is allowed to grow to.
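
As a sketch, a directory-container pool and its directories might be defined like this; the pool name and paths are hypothetical, so check the parameters against your server level:

DEFINE STGPOOL DIRPOOL STGTYPE=DIRECTORY MAXWRITERS=NOLIMIT REUSEDELAY=1
DEFINE STGPOOLDIRECTORY DIRPOOL /tsmcont/dir01,/tsmcont/dir02
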
Some of the admin commands specific to container pools are:

The MOVE CONTAINER command does not move a container as such, it moves all the data from one container to another. It creates a new container for the move, so you must have enough free space in the pool to create a container the same size as the source container. Be aware that the QUERY STGPOOL command will show the percentage of free space within a storage pool, but this includes any free space within the containers. So if a pool is 100GB and QUERY STGPOOL shows it is 75% utilised, that does not mean there is room for a new 25GB container.
Try the SHOW SDPPOOL command instead and look for the FsFreeSpace entry, which shows how much free space exists in the file system.

There is no DELETE CONTAINER command; containers are deleted automatically once all the data in them expires or is moved out, and the REUSEDELAY period on the storage pool has passed.

There is no BACKUP STGPOOL command for directory-container pools; the PROTECT STGPOOL command is used instead. This command uses replication to copy the container data to a target server, and you need to combine it with the REPLICATE NODE command to fully protect your backup data.
The PROTECT STGPOOL command should be run before the REPLICATE NODE command, as it can repair any damaged extents in the data and will make the replication run faster.
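
In outline, a nightly protection sequence could look like this, assuming a replication target server is already configured and the pool's PROTECTstgpool parameter points at a matching pool on that target:

PROTECT STGPOOL DIRPOOL
REPLICATE NODE *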

If a container is damaged, you can use the AUDIT CONTAINER command to recover or remove data. The REPAIR STGPOOL command can be used to recover damaged data extents from a replication pair.
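
For example, to scan a whole pool for damage and then repair it from its replica (the pool name is hypothetical):

AUDIT CONTAINER STGPOOL=DIRPOOL ACTION=SCANALL
REPAIR STGPOOL DIRPOOL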



Cloud-container storage pools

Cloud container storage pools store backup data in the cloud, which can be off site or on site. Cloud storage can be defined simply as storage space that is accessed by an internet URL. This obviously needs a high level of security to prevent unauthorised access, and IBM Spectrum Protect manages that access, and also data retention within the cloud.

You can either stage the backup data in a primary storage pool and then migrate it off to the cloud, or you can back up and restore data, or archive and retrieve data, directly from the cloud-container storage pool. The data can use both inline data deduplication and inline compression, and the server writes deduplicated and encrypted data directly to the cloud.

Defining Cloud Container pools

You could define a cloud-container storage pool from the Operations Center with the Add Storage Pool wizard, but the command line is always my favourite. First you define the pool with the DEFINE STGPOOL command. Your cloud-container storage pool can be either:
On premises in a private cloud, which will cost more, but you will have better security and control over your data.
Off premises in a public cloud, which will be cheaper, but performance might be an issue, depending on the speed of the internet connection.
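
A minimal S3-style definition might look like the sketch below; the URL, credentials and bucket name are placeholders, and the exact parameters vary with the cloud type:

DEFINE STGPOOL CLOUDPOOL STGTYPE=CLOUD CLOUDTYPE=S3 CLOUDURL=https://s3.example.com IDENTITY=my_access_key PASSWORD=my_secret_key BUCKETNAME=tsmbackups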

Next you define one or more storage pool directories by using the DEFINE STGPOOLDIRECTORY command. You should define each storage pool directory on its own file system. Avoid ext3 file systems on Linux, as it can take a long time to delete large files with ext3, use xfs or ext4 instead. Don't allocate your storage pool directories on the root file system, and don't allocate them on the same file systems used by the IBM Spectrum Protect database or the logs.
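
For example, two local cache directories on their own file systems could be added like this (the paths are illustrative):

DEFINE STGPOOLDIRECTORY CLOUDPOOL /tsmcloudcache/fs01
DEFINE STGPOOLDIRECTORY CLOUDPOOL /tsmcloudcache/fs02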

Once you define your storage pool directories, the IBM Spectrum Protect server will use those directories as a temporary local cache for the backup data before moving it off to cloud storage. This migration process is pre-configured and runs automatically. You need to make sure that there is enough capacity in those storage pool directories: if they fill up, all backup operations will stop until more free space is available. The recommendation is to size the pool directories to be big enough to cope with a night's backups, after compression and deduplication.

Some Cloud Container pool Tips

If you audit your cloud storage pool you might see both damaged and orphaned data extents. A damaged data extent exists in the Spectrum Protect server database, but the data in the cloud is either missing or corrupt. An orphaned data extent is the opposite: the data exists in the cloud service, but it has no reference in the server database.
You can clean up both damaged and orphaned data extents by running an audit with ACTION=REMOVEDAMAGED.

When you allocate a Cloud storage pool with the DEFINE STGPOOL command, you will not see a NEXTSTGPOOL parameter. A Cloud storage pool does not have an overflow facility, because it is not possible to determine when the pool is full.

You should not store data in a cloud storage pool that you would not normally hold on tape. Two examples are VMware control files and Data Protection for SQL metadata files.

Backup and restore performance will very much depend on the network between the Cloud and the target server. Newer releases of IBM Spectrum Protect do perform better as the support has evolved. S3 and Azure Cloud protocols tend to perform best.

The IBM Cloud Object Storage Accessers have a default certificate, but these default certificates have a short expiration, and when they expire you may lose access to your backup data. The exact process you use to renew the certificates depends on your browser and the operating system your server uses. Consult your IBM Spectrum Protect documentation for the full process, but in outline you need to:
Use a web browser to get a copy of the certificate used by the object storage system.
Add the certificate to the key store.


