The need for File Sharing

Back in the good old days of mainframe computing, your customers went home at 18:00, so you brought your online systems down, spent a few hours backing up the data, then spent the rest of the night processing the overnight batch. At 08:00 you brought the online systems back up again, and all was well. You probably also had one production MVS machine with one CICS system, which was easy to manage.

Today, you will have several production systems that may need to share files, and newfangled developments like the internet mean you need the systems available all the time. This is where VSAM Record Level Sharing (RLS) and DFSMS Transactional VSAM Services (DFSMStvs, or TVS) come in.

The fundamental issue here is data integrity. If several processes are accessing the data at the same time, it's necessary to ensure that the data is consistent. Formerly, the only way to preserve data integrity was to use a combination of VSAM SHAREOPTIONS parameters and JCL disposition parameters.

When a batch job was updating a file, you specified DISP=OLD on the JCL DD statement for that file. What that meant was: I want exclusive access to the file while I have it open; no one else can write to it or read from it. You could not run batch alongside online systems as the entire file was locked, but data integrity was assured.

If you specify DISP=SHR, then several programs can access a file concurrently. You could then control access by using the SHAREOPTIONS VSAM file allocation parameter. However, VSAM did not actually manage concurrent access; it was left to the applications to ensure consistency.
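For illustration, the two dispositions look like this in JCL. The data set name is hypothetical:

```jcl
//* Exclusive access - no other job or online system can open the file
//MASTER   DD DSN=PROD.CUSTOMER.KSDS,DISP=OLD
//* Shared access - concurrent opens allowed; integrity is left to the
//* VSAM SHAREOPTIONS and the applications themselves
//MASTER   DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR
```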

Finally, several CICS systems can share VSAM files if one of them is designated a File Owning Region (FOR), and the others Application Owning Regions (AORs). When an AOR wants to access data, it passes the request to the FOR, which provides file access to the VSAM spheres. However, this does not manage any work running outside of CICS.

This does sound a bit vague and error prone. There are some third-party products which can help here; SYSB-II for VSAM file sharing is one that springs to mind. Another option is VSAM RLS.


VSAM Record Level Sharing (RLS)

VSAM RLS allows users on different LPARs or machines to share VSAM spheres with read and write integrity, with serialization at the record level. RLS requires a sysplex, as it stores the VSAM control blocks in a coupling facility cache. It also requires that all shared VSAM files be SMS managed. It is possible to run RLS on a single system to permit data sharing at record level, but it still needs a sysplex running in monoplex mode with a coupling facility. VSAM RLS requires that every task that uses it has a backout log, which makes it ideal for CICS sharing. It also removes the need for a CICS FOR, which is both a single point of failure and a bottleneck.

VSAM RLS will also permit cross system sharing of VSAM buffer pools for VSAM clusters, as the buffers are moved to the coupling facility cache and can be accessed across all systems. Each buffer in the buffer pool contains a data control interval (CI) or an index CI and can boost performance of VSAM files that are shared between LPARs. Any I/O request that is satisfied from a buffer has full read and write data integrity.
You can use VSAM RLS to access a VSAM cluster from just one z/OS system, and so share data concurrently between several tasks or batch jobs in the same z/OS with read and write integrity. This approach produces better granularity than using share options one or two.

RLS uses the Buffer Management Facility (BMF) to manage the CI population of the RLS SMSVSAM buffer pools. The only influence that you have over how the buffers are managed is by setting a couple of parameters in the IGDSMSxx parmlib member.

You specify the maximum size of the SMSVSAM local buffer pool with the RLS_MAX_POOL_SIZE parameter. Note that this is not an absolute maximum; BMF uses it as a target and can exceed it temporarily while flushing out inactive CIs. The default is 100 MB. You use RLSABOVETHEBARMAXPOOLSIZE to specify the total size of the buffer pool that is above the 2 GB bar, and you can enter a global value, or tailor the value for each z/OS system, as follows.

RLSABOVETHEBARMAXPOOLSIZE(ALL,size)
RLSABOVETHEBARMAXPOOLSIZE(sysname1,size1;sysname2,size2;...)
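Putting the two together, a hypothetical IGDSMSxx fragment might look like this. The sizes are in megabytes and are illustrative only:

```
RLS_MAX_POOL_SIZE(200)
RLSABOVETHEBARMAXPOOLSIZE(ALL,2000)
```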

You control the amount of data to be cached for each VSAM RLS data set by assigning them different SMS data classes with different values for the RLS CF Cache Value. Valid values are ALL (the default), UPDATESONLY and NONE.

You open a VSAM cluster in RLS mode either by specifying RLS access in your program's ACB macro, or by using the RLS JCL keyword. The options are RLS=NRI (no read integrity), RLS=CR (consistent read) and RLS=CRE (consistent read explicit).

Sample JCL
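A minimal sketch, assuming a hypothetical data set name, requesting RLS access with consistent read:

```jcl
//VSAMIN   DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,RLS=CR
```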


You can specify RLS access for all files supported by CICS file control, except for the following:

Installing RLS

You need to install a few components to get RLS to work.
RLS requires an address space on every participating z/OS system, called SMSVSAM, which looks after the file sharing. These address spaces have VSAM data buffers associated with them for caching. All the RLS buffers are moved to the SMSVSAM address space. You start the address space automatically at IPL time with the RLSINIT keyword in the IGDSMSxx parmlib member, or manually with a V SMS,SMSVSAM,ACTIVE operator command.


The SHCDS or Sharing Control Data Sets are used for system recovery if any of the RLS components fail. They must have a naming standard of SYS1.DFPSHCDS.anyname.Vvolser, and must be placed on a single volume as identified by the Vvolser qualifier. An SHCDS is a VSAM linear data set, and you need at least two active SHCDS plus one spare for resilience.
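As a sketch, an SHCDS could be defined with IDCAMS like this. The middle qualifier, the volume and the size are illustrative; note the volser PRD001 must match the Vvolser qualifier:

```jcl
//DEFSHCDS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(SYS1.DFPSHCDS.PRIMARY.VPRD001) -
         LINEAR -
         SHAREOPTIONS(3 3) -
         VOLUMES(PRD001) -
         MEGABYTES(16 16))
/*
```

Once defined, each SHCDS is brought into use with an operator command along the lines of V SMS,SHCDS(PRIMARY.VPRD001),NEW.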

The Coupling Facility Cache structures are defined with the IXCMIAPU system utility and are used to store the shared VSAM file control block structure.

The Coupling Facility Lock structures are defined with the IXCMIAPU system utility and are used to store all the locks currently held on VSAM records.
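As a sketch, a CFRM policy update defining the lock structure and one cache structure might look like this. The lock structure name IGWLOCK00 is fixed; the policy name, cache structure name, sizes and CF names are illustrative:

```jcl
//DEFSTR   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(CFRMPOL1) REPLACE(YES)
    STRUCTURE NAME(IGWLOCK00)
      SIZE(20480)
      PREFLIST(CF01,CF02)
    STRUCTURE NAME(RLSCACHE1)
      SIZE(102400)
      PREFLIST(CF01,CF02)
/*
```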

An SMS storage class that defines the cache set name, the direct weight and the sequential weight for preferred performance.

An RLSINIT Entry in the IGDSMSxx parmlib member to define some RLS timeout and pool size parameters.
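As a sketch, the RLS-related entries in IGDSMSxx might look like this. The values are illustrative; RLSTMOUT is a lock timeout in seconds, and DEADLOCK_DETECTION takes a local detection interval and a global detection cycle:

```
RLSINIT(YES)
RLSTMOUT(30)
DEADLOCK_DETECTION(15,4)
```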


VSAM RLS introduced some new SMS commands

Display coupling facility lock structure, which will report on lock contention
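Assuming the standard console syntax, the command is:

```
D SMS,CFLS
```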


The D SMS,SMSVSAM command can give you lots of detail about what SMSVSAM is doing. Some of the things that the ALL parameter will tell you are the SMSVSAM status over all the LPARs, the lock structures, the lock tables and which SMF records are in use.
You can limit the results to a subset of the data, for example to VSAM spheres in quiesce status for this LPAR only.
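For example, assuming the standard console syntax, the first command below gives the full display, and the second limits it to spheres in quiesce status:

```
D SMS,SMSVSAM,ALL
D SMS,SMSVSAM,QUIESCE
```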


Change the status of a cache structure called 'cfname' to either allow it to store RLS records (ENABLE) or stop it (QUIESCE)
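Assuming the standard VARY syntax, where 'cfname' is your cache structure name:

```
V SMS,CFCACHE(cfname),ENABLE
V SMS,CFCACHE(cfname),QUIESCE
```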


Allow or disallow the contents of a volume to be held in coupling facility cache.
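Assuming the standard VARY syntax, where 'volid' is the volume serial:

```
V SMS,CFVOL(volid),ENABLE
V SMS,CFVOL(volid),QUIESCE
```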


There used to be a restriction that data sets accessed by VSAM RLS could not use dynamic volume count. This restriction was removed with z/OS 2.1.
Dynamic Volume Count (DVC) provides the capability to dynamically add volumes to an SMS-managed data set, for both VSAM and non-VSAM formats, without increasing the number of candidate volumes stored in the catalog.
During extend to a new volume, SMS checks whether the data set has a candidate volume entry in the catalog. If there is no candidate volume entry for the data set, and the number of volumes for the data set is less than the Dynamic Volume Count value, SMS adds a candidate volume entry using the ALTER ADDVOLUME interface to the catalog for the selected volume. Thus, the user application does not need to close the data set and perform ALTER ADDVOLUME to increase the volume count.

In order for your VSAM RLS data sets to take advantage of DVC, you must assign them an SMS data class with the following attributes:
- Space Constraint Relief = Y
- Dynamic Volume Count = x, where x represents a value 1 - 59


DFSMS Transactional VSAM Services (TVS)

TVS builds on the RLS facilities by extending the control of VSAM sharing to batch jobs, and to any other work which runs outside CICS control. SMStvs provides logging, backout and two-phase commit facilities, and uses RLS for its locking. When an application requests a record for read or write (GET or PUT in programming terms), RLS locks the individual records (not the whole file), and holds the locks in the coupling facility. As both RLS and TVS store data in the coupling facility, both products require a parallel sysplex.

SMStvs complements RLS by adding recovery logs for non-CICS updates. The MVS System Logger can read both the RLS logs and the TVS logs and consolidate them, to provide a consistent view for roll-forward or roll-backward processing. The System Logger writes its logs to disk or the coupling facility or both, depending on the configuration. SMStvs can have three types of logs: Primary, Secondary and Forward Recovery. It can also have a fourth log, called the 'log of logs'.

SMStvs can recover in either direction. Every time a record is updated within a 'unit of recovery', SMStvs writes the image of the record as it was before the update to the undo log. These before images can be re-applied to get the file back to a previous state after a failure, and are deleted once a commit point is reached. SMStvs also writes a copy of each committed record to the forward recovery log, provided the file was defined with LOG(ALL). This log can be used to roll the file forward from a full file recovery to the point of failure.
All recoveries are managed by z/OS RRS (Resource Recovery Services), which calls SMStvs to perform the backout. RRS will also manage backouts on behalf of DB2 UDB, IMS and MQSeries. RRS uses a two-phase commit mechanism to ensure that either all updates to all resources are committed, or no updates are made at all.

This stuff does not come free. Both RLS and TVS require synchronisation points, or commit points, to be added to application programs. The unit of recovery is the set of all changes made to all resources between two synchronisation points. If these are not added, then each program runs as a single, huge transaction. This means that the undo log will be huge, and will probably spill out of the coupling facility onto DASD and cause performance problems. Also, as records are locked for the duration of the transaction, they are locked out for the whole job. Batch jobs running under SMStvs will take longer, due to the logging and locking overhead. You should consider re-designing your batch run to take advantage of the data sharing, by running more jobs in parallel.
Also, be aware that once you start sharing VSAM files between CICS and batch, you cannot use traditional batch recovery methods. If you back a file up at the start of a batch job, then restore the file when the job fails, you will back out the batch updates, but you will also back out any updates made by CICS.
Backout and roll forward applies to VSAM files only. Any updates made to non-VSAM files will not be backed out.

SMStvs introduces some new SMS commands

To display SMStvs information, including log data.
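Assuming the standard console syntax; the ALL parameter reports on all systems in the sysplex:

```
D SMS,TRANVSAM
D SMS,TRANVSAM,ALL
```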


To stop or start Transactional VSAM
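Assuming the standard VARY syntax, where 'tvsname' is the TVS instance name:

```
V SMS,TRANVSAM(tvsname),QUIESCE
V SMS,TRANVSAM(tvsname),ENABLE
```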



Enable backup-while-open for TVS
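This is set with an IDCAMS ALTER against the data set; the name below is hypothetical:

```
ALTER PROD.CUSTOMER.KSDS BWO(TYPECICS)
```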


Permit transaction backout
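Again with IDCAMS ALTER, against a hypothetical data set name:

```
ALTER PROD.CUSTOMER.KSDS LOG(UNDO)
```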


Permit transaction backout and forward recovery
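Again with IDCAMS ALTER; the data set and forward recovery log stream names are hypothetical:

```
ALTER PROD.CUSTOMER.KSDS LOG(ALL) LOGSTREAMID(PROD.CUSTOMER.FRLOG)
```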




Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best