IBM recommends that the SAN infrastructure containing an SVC be divided into three types of zone: host zones, SVC node zones, and storage zones. IBM also recommends that an SVC be connected to hosts and storage with a dual-fabric SAN for resilience. Tip: if you have any problems with your SVC SAN zoning, you can use the lsfabric command to display the connectivity between nodes, storage controllers, and hosts.
Internode (or intra-cluster) communication is critical to the stable operation of the cluster. The ports that carry internode traffic are used for mirroring the write cache and exchanging metadata between nodes/canisters. The SVC node zone should contain the Master Console and all the SVC nodes. An SVC node has two HBAs, and each HBA has two Fibre Channel ports; the Master Console has two Fibre Channel ports. In a dual-fabric configuration, define two zones so that each Master Console port is zoned to one port on each HBA on every SVC cluster node, as shown below. Create one zone per fabric, containing all of the SVC ports cabled to that fabric, to allow SVC internode communication.
In Storwize systems, internode communication primarily takes place through the internal PCI connectivity between the two canisters of a control enclosure. However, for clustered Storwize systems, the internode communication requirements are very similar to those of the SAN Volume Controller. To establish efficient, redundant, and resilient intracluster communication, the intracluster zone must contain at least two ports from each node/canister. For SVC nodes with eight or more ports, it is generally best to isolate intracluster traffic by dedicating specific node ports to internode communication.
When configuring zones for communication between nodes in the same system, the minimum configuration requires that every Fibre Channel port on a node detects at least one Fibre Channel port on each other node in the same system. This minimum cannot be reduced.
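This minimum rule can be checked mechanically against a set of zone definitions. The sketch below (Python; the node names, WWPNs, and zone memberships are invented for illustration) verifies that every port on each node shares at least one zone with some port on every other node:

```python
# Hedged sketch: validates the minimum intracluster zoning rule.
# All node names, WWPNs, and zone contents here are hypothetical.

def meets_intracluster_minimum(node_ports, zones):
    """node_ports: {node_name: [wwpn, ...]}
       zones: list of sets of WWPNs (each set is one zone's members).
       Returns True if every port on every node can detect at least
       one port on each other node through some shared zone."""
    for node, ports in node_ports.items():
        for port in ports:
            for other, other_ports in node_ports.items():
                if other == node:
                    continue
                # This port must share a zone with at least one port of `other`.
                if not any(port in z and any(p in z for p in other_ports)
                           for z in zones):
                    return False
    return True

nodes = {
    "node1": ["50:05:07:68:01:10:00:01", "50:05:07:68:01:10:00:02"],
    "node2": ["50:05:07:68:01:10:00:03", "50:05:07:68:01:10:00:04"],
}
# One zone per fabric, each containing the node ports cabled to that fabric.
zones = [
    {"50:05:07:68:01:10:00:01", "50:05:07:68:01:10:00:03"},
    {"50:05:07:68:01:10:00:02", "50:05:07:68:01:10:00:04"},
]
print(meets_intracluster_minimum(nodes, zones))  # True
```

With only one of the two zones defined, the function returns False, because the ports on the missing fabric would no longer detect any port on the other node.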
Note that the SVC node zone does not connect to any hosts or storage.
SVC rules say that an SVC node must be able to see exactly the same HBA ports and storage subsystem ports as all the other SVC nodes in the cluster. The rules also say that the storage zone must not include any host ports. You can either have one storage zone that includes all the storage devices, or you can define a separate zone for each storage subsystem. The diagram below shows two storage subsystems with their own zones, each with two paths to the SVC for resilience. Switch zones that contain storage system ports must not have more than 40 ports; a configuration that exceeds 40 ports is not supported.
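The 40-port limit is easy to enforce as a sanity check when generating or reviewing zone definitions. A minimal sketch (the zone name and member list are hypothetical):

```python
# Limit stated for switch zones that contain storage system ports.
MAX_STORAGE_ZONE_PORTS = 40

def check_storage_zone(zone_name, member_wwpns):
    """Raise if a zone containing storage ports exceeds the supported size;
    otherwise return the member count."""
    if len(member_wwpns) > MAX_STORAGE_ZONE_PORTS:
        raise ValueError(
            f"zone {zone_name}: {len(member_wwpns)} ports exceeds the "
            f"supported maximum of {MAX_STORAGE_ZONE_PORTS}")
    return len(member_wwpns)

# Hypothetical zone with 12 members passes the check.
print(check_storage_zone("storage_zone_A", [f"wwpn{i}" for i in range(12)]))  # 12
```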
It is critical that you configure storage systems and the SAN so that a system cannot access logical units (LUs) that a host or another system can also access. You can achieve this configuration with storage system logical unit number (LUN) mapping and masking.
If a node can detect a storage system through multiple paths, use zoning to restrict communication to those paths that do not travel over ISLs.
To find the worldwide port names (WWPNs) that are required to set up Fibre Channel zoning with hosts, use the lstargetportfc command. This command also displays the current failover status of host I/O ports.
Hosts and storage devices usually connect with more than one Fibre Channel port, and a host will see several instances of a LUN when this happens. For example, a host with a two-port HBA connecting to a LUN with a single port will see two instances of that LUN. If the host connects through two HBAs, each with dual ports, and the storage device has two ports, then the host will see eight instances of the LUN. Hosts must be able to recognize that these multiple instances are the same LUN, to prevent data corruption and permit path failover. They do this by means of a multipathing driver such as the Subsystem Device Driver (SDD), which combines the multiple instances into a single device. The eight-instance limit exists to bound the number of paths that the multipathing driver must resolve; more paths do not equate to better performance or higher availability. For optimum performance and availability, limit a host with two Fibre Channel ports to only four paths: one path to each node on each SAN.
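The instance counts in the examples above are simply the product of host ports and storage ports reachable through the zoning. A small illustrative calculation (assuming full any-to-any connectivity between the zoned ports):

```python
def lun_instances(host_ports, storage_ports):
    """Number of instances of a LUN a host sees, assuming every host
    port can reach every storage port through the zoning."""
    return host_ports * storage_ports

print(lun_instances(2, 1))  # 2: one dual-port HBA, single-port LUN
print(lun_instances(4, 2))  # 8: two dual-port HBAs, two storage ports
```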
A host will see the VDisks on an SVC as LUNs. Each VDisk is hosted by a single I/O group, and each of the two SVC nodes in the I/O group can have four ports. Because the number of actual paths and instances must be eight or fewer, you must use no more than four host ports connecting to two SVC ports. Incidentally, ISL links between switches increase the number of potential routes through a SAN, but they do not increase the number of LUN instances: it is the combination of host and SVC port pairs that counts, not the paths through the SAN.
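The same port-pair arithmetic applies to VDisk paths: each zoned host-port/SVC-port pair is one path, regardless of how many ISL routes exist between them. A sketch of the path count for one I/O group (port counts are illustrative):

```python
MAX_PATHS = 8  # supported maximum number of paths per VDisk

def vdisk_paths(host_ports, svc_ports_per_node, nodes_in_io_group=2):
    """Paths a host sees to a VDisk: one path per zoned host-port/SVC-port
    pair, assuming every host port reaches every zoned SVC port."""
    return host_ports * svc_ports_per_node * nodes_in_io_group

print(vdisk_paths(2, 1))  # 4: recommended - two host ports, one port per node
print(vdisk_paths(4, 1))  # 8: at the supported limit
print(vdisk_paths(4, 2) <= MAX_PATHS)  # False: 16 paths, unsupported
```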
N_Port ID Virtualization (NPIV) is enabled by default from version 8.2.0 onwards. An NPIV configuration has additional layout and zoning requirements compared with Fibre Channel host attachment without NPIV. These requirements follow from the fact that NPIV port failover between nodes must be transparent to hosts; hence, the set of host ports on which an NPIV port is visible cannot change as a result of a failover.
If you set the NPIV status of a specified I/O group to transitional by entering the CLI command chiogrp -fctargetportmode transitional, you might double the number of paths from the system to a host. To avoid substantially increasing the path count, use zoning or other means to temporarily remove some of the paths until the NPIV status of the I/O group is changed to enabled.
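The potential doubling in transitional mode can be illustrated with the same path arithmetic: while both the physical and the NPIV virtual target ports present I/O, each node port a host is zoned to exposes two target WWPNs instead of one. The port counts below are hypothetical:

```python
def paths(host_ports, target_wwpns_per_node_port, node_ports_zoned=2):
    """Paths from one host to the system: one per host-port/target-WWPN pair,
    assuming every host port reaches every zoned target WWPN."""
    return host_ports * target_wwpns_per_node_port * node_ports_zoned

enabled_or_disabled = paths(2, 1)  # one target WWPN per zoned node port
transitional = paths(2, 2)         # physical + NPIV virtual WWPN both active
print(enabled_or_disabled, transitional)  # 4 8
```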
If a replication layer system is using an NPIV-enabled storage layer system for backend storage, the replication layer system must be zoned to the NPIV ports of the storage layer system only.
SDD path selection will always use the preferred node unless all of the paths to the preferred node have failed, in which case it uses the alternate node. The SVC will automatically fail back to the preferred node when the paths are reinstated.
It is common practice to zone every host independently to the SVC, and if you are using a dual-fabric SAN, you should define two zones for every host.