FICON, or FIber CONnectivity, is the cabling method used to connect IBM mainframes to peripheral devices such as storage systems. FICON replaced an earlier fiber connection type called ESCON, which you might still see in a few sites today. Unlike ESCON, which was IBM proprietary, FICON is a standard Fibre Channel FC-4 level protocol. The main differences are that ESCON channels use circuit switching and half-duplex data transfers, need a dedicated, pre-established path, and can only run synchronous data transfers with one operation at a time. FICON channels use packet switching with full-duplex data transfers, and packets are individually routed. Once a packet is sent the connection is released, and data transfer is asynchronous, with pipelined and multiplexed operations.
With a mainframe you do not get any choice about connectivity: you have to use FICON. The decisions revolve around how many channels you need, which version of FICON you are using, and which extras you can use to boost your performance.

However, before discussing FICON further, it is worth a quick overview of the z/OS channel architecture. The z/OS Channel Subsystem, or CSS, manages the flow of data between I/O devices and main storage. It is separate from the main CPU, and takes the strain of I/O processing away from the CPU, allowing it to concentrate on processing data. The CSS includes dedicated I/O processors known as System Assist Processors (SAPs), the I/O channel paths, and the firmware that manages the I/O and handles I/O interruptions.
The CSS uses one or more channel paths, or communication links, to manage the flow of information to or from I/O devices, and each channel path has a unique identifier, or CHPID. However, since the advent of virtualisation and channel sharing among LPARs, a CHPID no longer directly corresponds to a hardware channel. A hardware channel is now identified by a physical channel identifier, or PCHID. The CHPID number is associated with a physical channel port location (PCHID) and a logical channel subsystem. The CHPID number range is still '00' to 'FF' and must be unique within a logical channel subsystem.
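As an illustration, the CHPID-to-PCHID association is made in the I/O configuration, via HCD or an IOCP statement like the one below. This is a hypothetical sketch: the CHPID number, PCHID value and partition names are made up for the example, and the exact syntax should be checked against the IOCP documentation for your machine.

```
CHPID PATH=(CSS(0),50),SHARED,PARTITION=(LPAR1,LPAR2),PCHID=1C0,TYPE=FC
```

Here CHPID 50 in logical channel subsystem 0 is mapped onto the physical channel in card slot location 1C0, defined as a native FICON (TYPE=FC) channel and shared between two LPARs.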
The I/O devices are fronted by control units, which operate and control the I/O devices themselves.
Subchannels are used to connect to individual devices; one subchannel is provided for, and dedicated to, each I/O device.

The z/Architecture lets you configure multiple CSSes to make the whole system very scalable. This process of spreading the I/O workload lets a z/OS mainframe handle a very high I/O bandwidth, and is one of the reasons why mainframes are still extensively used today, despite being written off over 20 years ago.

A FICON link is basically a fiber optic cable, or to be more precise, a pair of optical fibers that provide two dedicated, unidirectional, serial-bit transmission lines. While data can only flow in one direction down a single fiber, in a fiber pair one optical fiber is used to receive data and the other is used to transmit data. This is called full-duplex capability, and it follows the Fibre Channel Standard (FCS) protocol, which specifies that for normal I/O operations frames flow serially in both directions, allowing several concurrent read and write I/O operations on the same link.

The data transmission speeds depend on the FICON generation: FICON Express4 runs at 1, 2 or 4 Gbps; FICON Express8 and 8S at 2, 4 or 8 Gbps; and FICON Express16S and 16S+ at 4, 8 or 16 Gbps.

These are maximum design speeds, and the actual speeds you get will depend on many factors, such as the transfer sizes and access methods used. A link will run at the slowest supported speed of the components in the link: control unit, FICON cable or FICON directors. These components negotiate the fastest link speed they all support.
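The negotiation described above can be sketched in ordinary code: each end advertises the speeds it supports, and the link settles on the fastest speed common to both. This is a conceptual illustration only, not the actual Fibre Channel login protocol, and the speed sets are assumptions for the example.

```python
def negotiate_link_speed(port_a, port_b):
    """Return the fastest speed (in Gbps) supported by both ends of a
    link, or None if the two ports share no common speed."""
    common = set(port_a) & set(port_b)
    return max(common) if common else None

# A FICON Express16S+ channel talking to an older 8 Gbps control unit port:
channel = {4, 8, 16}       # speeds the CPC channel supports
control_unit = {2, 4, 8}   # speeds the storage port supports
print(negotiate_link_speed(channel, control_unit))  # -> 8
```

The whole path runs no faster than its slowest negotiated link, which is why one old director or adapter can cap the throughput of an otherwise fast channel.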

FICON Evolution

FICON Express was an extension of the original FICON, and was designed to provide access to extended count key data devices, and FICON channel-to-channel (CTC) connectivity. Support for SCSI devices came next, then High-Performance FICON for z Systems (zHPF), designed to improve performance for workloads that transfer small blocks of fixed-size data. Typical workloads are DB2 databases, VSAM files, PDSEs and zFS file systems.
zHyperLink is designed to reduce I/O service time for read requests by a factor of 5. The zHyperLink Express feature works as a native PCIe adapter and can be shared by multiple LPARs. On the z14, the zHyperLink Express feature takes up one slot in the PCIe I/O drawer or the PCIe+ I/O drawer, and has two ports. Both ports are on a single PCHID. On the IBM DS8880 side, the fiber optic cable connects to a zHyperLink PCIe interface in the I/O bay. The zHyperLink Express uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes. It is designed to support distances up to 150 meters at a link data rate of 8 GBps.

FICON Express16S+

The FICON Express16S+ is the latest FICON incarnation. It is designed to work with Flash disks, and to transfer data from those disks without being a bottleneck. In IBM test reports, working with smaller blocksizes, zHPF can transfer data at just over 300,000 I/Os per second, which is 3 times faster than FICON Express16S. On a more representative mix of large sequential read and write data transfer I/O operations, IBM reported a maximum full-duplex throughput of 3200 MB/sec (reads + writes), compared to 2560 MB/sec for the same test on z13 and FICON Express16S channels. You need Flash disks to take advantage of these speeds, and of course, your physical FICON network needs to be fast enough too.


High Performance FICON for IBM Z, or zHPF, is not a new release of FICON, but an enhancement of the FICON channel architecture. zHPF is designed to reduce the FICON channel overhead by optimising the protocol and reducing the number of IUs processed. zHPF needs to be enabled in both the FICON channel and the z/OS operating system. It works with FICON releases from FICON Express4 to FICON Express16S+. This means that FICON can run in one of two modes: 'command mode' for the legacy architecture, and 'transport mode' for the zHPF architecture.
You can enable zHPF dynamically, without needing an IPL, by using a SETIOS command.
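For example, zHPF can be switched on from the console, or set permanently in the IECIOSxx parmlib member; check the command syntax against your z/OS level before using it:

```
SETIOS ZHPF=YES        dynamic console command, no IPL needed
ZHPF=YES               equivalent statement in the IECIOSxx parmlib member
```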


If you want to know if zHPF is enabled, run the D IOS,ZHPF system command as shown below.

IOS630I ZHPF FACILITY 485

Modified Indirect Data Address Word

The original method of extracting data into memory was the IDAW, or Indirect Data Address Word. This was designed to work with the old OS/390 CKD architecture. However, when Extended Format (EF) datasets came along, they did not perform as well as non-EF datasets. The resolution was to use Media Manager (M/M), but M/M requires that data sets have fixed-length records and no keys, and so is incompatible with IDAW.
The Modified Indirect Data Address Word (MIDAW) came to the rescue. It only works with EF datasets, but it can coexist with the CCW IDAW facility. MIDAW reduces the number of CCWs needed to process a block of data, and so reduces the channel overhead. It also allows page boundary crossing on either 2 KB or 4 KB boundaries, which permits access to data buffers anywhere in a 64-bit buffer space.
If you use MIDAW, your FICON links will not run faster, but they need much less control data to flow over the links, which makes the channel more efficient. IBM claims that if you use MIDAW, the channel is twice as efficient as using CCW IDAWs.
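The idea of an indirect address list can be sketched in ordinary code: each entry in the list points at a piece of the buffer, so one channel command can move a block whose buffers are scattered across page boundaries. This is a conceptual illustration only; real MIDAW lists are built by the access methods, not by application code, and the 4 KB page size here is an assumption for the example.

```python
PAGE = 4096  # assume 4 KB pages for this sketch

def build_midaw_list(start_address, length):
    """Split one logical transfer into (address, count) entries that
    each stay within a single page, mimicking what a MIDAW list does."""
    entries = []
    addr, remaining = start_address, length
    while remaining > 0:
        room_in_page = PAGE - (addr % PAGE)
        count = min(room_in_page, remaining)
        entries.append((addr, count))
        addr += count
        remaining -= count
    return entries

# A 10,000-byte block starting 1,000 bytes into a page spans three pages:
print(build_midaw_list(1000, 10000))
# -> [(1000, 3096), (4096, 4096), (8192, 2808)]
```

One CCW with a list like this replaces a chain of separate CCWs, which is where the reduction in control data over the link comes from.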

You can activate MIDAW with a SETIOS command, or with an entry in the IECIOSxx Parmlib member. The first method is dynamic, the second one requires an IPL.
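For example, the two methods look like this; as with zHPF, check the exact syntax against your z/OS level:

```
SETIOS MIDAW=YES       dynamic console command
MIDAW=YES              statement in the IECIOSxx parmlib member, takes effect at IPL
```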


zHyperLink Express

zHyperLink is a new short-distance mainframe link technology, limited to 150 meters, designed for up to 10 times lower latency than zHPF. It provides a direct connection between the IBM Z platform and IBM DS8k storage. It was designed as a collaboration between the DB2, z/OS operating system, IBM Z server and DS8880 storage subsystem teams, to speed up DB2 transaction processing and active log processing. It will also improve VSAM read I/O requests.
zHyperLink works with the FICON infrastructure, it does not replace it. It interconnects the z14 CPC directly to the I/O bay of the DS8880, using PCIe Gen3 x8 physical links and IBM Z I/O adapters. The result, as seen by DB2, is that DB2 I/O service times should be in the region of 20-30 microseconds, rather than the typical 300 microseconds for a simple I/O operation.
At the moment (2020), zHyperLink functionality is restricted to DB2 and the IBM DS8880 storage subsystem, but we do expect EMC and HDS to provide compatible support in time. We should also expect to see zHyperLink enhanced over time so that other I/O workloads benefit from it.

One reason for the performance improvement with zHyperLink is that it uses synchronous I/O rather than asynchronous I/O.
Asynchronous I/O is interrupt-driven and so has a scheduling and interrupt overhead. As discussed above, a traditional I/O operation requires that control is passed around between SAPs and channel programs, and when the I/O is complete, an I/O interrupt notifies the CP so that IOS can be run again.
Synchronous I/O, or polling-driven I/O, allows the operating system to read data records synchronously: the CP issues the I/O request directly to the storage control unit through a zHyperLink connection. The SAP and channel subsystem are bypassed with the synchronous I/O model, so I/O interrupts and I/O path lengths are minimized, resulting in improved performance.

To see if zHyperLink is working, use the D IOS,ZHYPERLINK command. Possible results are:
ENABLED, followed by a description of which I/O operations can use zHyperLink, basically READ or WRITE.
NOT SUPPORTED BY THE PROCESSOR, which means the hardware requirements are not met.

FICON Extras

FICON Dynamic routing

If you use FICON directors, then you will probably have Inter-Switch Links, or ISLs, between them. It used to be that once a traffic route was established between ports and domains, that route was fixed. Routes were allocated in a round-robin fashion, and in the worst case it was possible for all the traffic to end up going down one ISL, with resultant performance issues. FICON Dynamic Routing, or FIDR, fixes this issue and makes sure the ISL links are shared more evenly. FIDR enables ISL routes to be dynamically changed based on the Fibre Channel exchange ID, which is unique for each I/O operation. Now an ISL is assigned at I/O request time, so different I/Os from the same source port going to the same destination port may be assigned different ISLs.
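The difference between the two routing schemes can be sketched as follows. Static routing picks an ISL from the source and destination ports alone, so every exchange between the same pair takes the same ISL; exchange-based routing also feeds in the per-I/O exchange ID, spreading the load. The ISL names and the simple modulo "hash" are illustrative assumptions, not the actual director algorithm.

```python
ISLS = ["ISL0", "ISL1", "ISL2", "ISL3"]

def static_route(src_port, dst_port):
    """Port-based routing: every exchange between this pair uses one ISL."""
    return ISLS[(src_port + dst_port) % len(ISLS)]

def dynamic_route(src_port, dst_port, exchange_id):
    """FIDR-style routing: the exchange ID varies per I/O, so successive
    I/Os between the same ports can land on different ISLs."""
    return ISLS[(src_port + dst_port + exchange_id) % len(ISLS)]

# Ten I/Os between the same two ports:
static = {static_route(0x10, 0x20) for _ in range(10)}
dynamic = {dynamic_route(0x10, 0x20, ex) for ex in range(10)}
print(len(static))   # -> 1  (all traffic down one ISL)
print(len(dynamic))  # -> 4  (traffic spread across all four ISLs)
```

This is why FIDR helps most when a few busy port pairs would otherwise saturate a single ISL while the others sit idle.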

WLM FICON Priority

WorkLoad Manager (WLM) is used to prioritise important workloads when shared resources are in short supply. One of the WLM policies is I/O Priority Management, designed to control access to the I/O subsystems. However, a properly designed I/O subsystem using Flash disks, FICON Express and all the other goodies above, and especially with SuperPAV installed, should not see any I/O bottlenecks. For this reason, it might be best to have I/O Priority Management disabled.
Survey figures suggest that about 75% of sites have Dynamic Alias Management enabled; for most, this is a moot point either way. About 90% have I/O Priority Groups disabled and/or have no service classes specifying I/O Priority HIGH. This may be about right, but for the 10% that are using it, we might question how much value it is actually providing.


So what does this all mean for you? After all, if you have a mainframe you have to use FICON, so what choices do you have?
If you are still using spinning disks, then the FICON extras are unlikely to buy you anything in terms of performance improvement. However, if you are using hybrid or all-Flash storage, then your old FICON channels will probably be a bottleneck. What you need is performance measurements, to see if you have a problem and where that problem lies. Take a look at the mainframe performance page; it describes a product called EADM, which monitors FICON channel usage, among other things.

The other side of the coin is that you might have too many FICON channels installed, and they are under-utilised. This seems to be a particular problem for tape devices. EATM, described below, can help with this.


Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best