Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.1.2 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.1.2.0 and 8.1.2.1. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    3.1 Security Issues Resolved
    3.2 APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new feature has been introduced in the 8.1.2 release:

2. Known Issues and Restrictions

Note: For clarity, the term "node" will be used to refer to a SAN Volume Controller node or Storwize system node canister.

During an upgrade, node failover does not bring up the normal alert message that prompts a refresh of the GUI. Customers will need to manually refresh the GUI after upgrading to v8.1.2.1.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.2.1

Systems making use of HyperSwap should not be upgraded to 8.1.2.

This issue will be resolved in a future release.

Introduced in: 8.1.2.0

Customers using Spectrum Virtualize as Software clusters must ensure that all spare hardware exactly matches active hardware before installing v8.1.2.

This issue will be resolved in a future release.

Introduced in: 8.1.2.0

Customers wishing to run RACE compression (including data reduction compression) on Spectrum Virtualize as Software clusters will need to install a second CPU in both nodes within the I/O group.

This is a restriction that may be lifted in a future PTF.

Introduced in: 8.1.2.0

Spectrum Virtualize for Public Cloud v8.1.2 is not available.

Introduced in: 8.1.2.0

Customers with FlashSystem V840 systems with Flash code v1.1 on the backend enclosure should not upgrade to v8.1.1.1 or later.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.1.1

Systems running v8.1.0 or earlier, with more than 1000 volumes, cannot be upgraded to 8.1.1.0 or later.

This is a temporary restriction that will be lifted, by APAR HU01804, in a future PTF. In the interim, IBM Support can provide an iFix to allow the upgrade (see the volume-count example below).

Introduced in: 8.1.1.0
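
As a quick pre-upgrade check, the volume count can be read from the CLI. This is a minimal sketch; it assumes the restricted shell allows piping listing output to wc, as on recent code levels:

  # Count the volumes defined on the system (headerless listing, one line per volume)
  svcinfo lsvdisk -nohdr | wc -l

If the count exceeds 1000, delay the upgrade or obtain the iFix noted above from IBM Support.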

Enhanced Call Home should not be used on systems running v8.1.1.0 or later.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.1.0

When configuring Remote Support Assistance, the connection test will report a fault, and opening a connection will report "Connected", followed shortly by "Connection failed".

Even though it states "Connection failed", a connection may still be successfully opened.

This issue will be resolved in a future release.

Introduced in: 8.1.0.1

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB (see the example below).

Under some circumstances it may also be necessary to remove and re-add each node in turn.

Introduced in: 8.1.0.0
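
A minimal sketch of enabling the additional memory, one node at a time; the node name node1 is hypothetical, and chnodehw restarts the node, so wait for it to come fully online before repeating on the partner node:

  # Compare the detected and configured hardware for the node
  svcinfo lsnodehw node1
  # Apply the detected hardware configuration (this restarts the node)
  svctask chnodehw node1
  # Confirm the node has rejoined the cluster before moving on
  svcinfo lsnode node1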

RSA is not supported with IPv6 service IP addresses.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.0.0

AIX operating systems will not be able to get full benefit from the hot spare node feature unless they have the dynamic tracking feature (dyntrk) enabled (see the example below).

Introduced in: 8.1.0.0
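
For reference, a sketch of enabling dynamic tracking on an AIX host, applied per FC adapter; the adapter name fscsi0 is an example, and the device must be quiesced first (or use chdev -P and reboot):

  # Enable dynamic tracking and fast-fail error recovery on the adapter
  chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail
  # Verify the new attribute values
  lsattr -El fscsi0 -a dyntrk -a fc_err_recov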

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server endpoints on the system to occasionally be reported as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data (see the status-check sketch below).

Introduced in: 7.8.0.0
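
If this condition is suspected, a minimal check is to list the configured key servers and their reported states; this assumes the lskeyserver view available at this code level:

  # List configured key servers; endpoints degraded by this issue typically
  # return to online once a later validation attempt succeeds
  svcinfo lskeyserver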

There is an extremely small possibility that a system using both Encryption and Transparent Cloud Tiering can enter a state where an encryption re-key operation is stuck in the 'prepared' or 'prepare_failed' state and a cloud account is stuck in the 'offline' state.

The user will be unable to cancel or commit the encryption re-key, because the cloud account is offline, and will be unable to remove the cloud account, because an encryption re-key is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

Introduced in: 7.8.0.0

Spectrum Virtualize as Software customers should not enable the Transparent Cloud Tiering function.

This restriction will be removed under APAR HU01495.

Introduced in: 7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

Introduced in: 7.8.0.0

Priority Flow Control for iSCSI is only supported on Brocade VDX 10GbE switches.

Introduced in: 7.7.0.0

It is not possible to replace the mid-plane in an SVC 12F SAS expansion enclosure.

If an SVC 12F mid-plane must be replaced then a new enclosure will be provided.

Introduced in: 7.7.0.0

Systems with NPIV enabled that present storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced in: n/a
Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information.

Introduced in: n/a
If an update stalls or fails then contact IBM Support for further assistance.

Introduced in: n/a
The following restrictions were valid in previous releases but have now been lifted:

Systems with greater than 128GB RAM should not be upgraded to 8.1.2.

This was a temporary restriction that has been lifted.

Introduced in: 8.1.2.0

Customers with Storwize V7000 Gen 2 Model 500 systems should not upgrade to v8.1.1.0 or later.

This issue has been resolved in PTF v8.1.2.1.

Introduced in: 8.1.1.0

Customers with attached hosts running zLinux should not upgrade to v8.1.

This was a temporary restriction that has been lifted.

Introduced in: 8.1.0.0

3. Issues Resolved

This release contains all of the fixes included in the 8.1.1.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier   Link for Additional Information   Resolved in
CVE-2016-10708 ibm10717661 8.1.2.1
CVE-2016-10142 ibm10717931 8.1.2.1
CVE-2017-11176 ibm10717931 8.1.2.1
CVE-2018-1433 ssg1S1012263 8.1.2.1
CVE-2018-1434 ssg1S1012263 8.1.2.1
CVE-2018-1438 ssg1S1012263 8.1.2.1
CVE-2018-1461 ssg1S1012263 8.1.2.1
CVE-2018-1462 ssg1S1012263 8.1.2.1
CVE-2018-1463 ssg1S1012263 8.1.2.1
CVE-2018-1464 ssg1S1012263 8.1.2.1
CVE-2018-1465 ssg1S1012263 8.1.2.1
CVE-2018-1466 ssg1S1012263 8.1.2.1
CVE-2016-6210 ssg1S1012276 8.1.2.1
CVE-2016-6515 ssg1S1012276 8.1.2.1
CVE-2013-4312 ssg1S1012277 8.1.2.1
CVE-2015-8374 ssg1S1012277 8.1.2.1
CVE-2015-8543 ssg1S1012277 8.1.2.1
CVE-2015-8746 ssg1S1012277 8.1.2.1
CVE-2015-8812 ssg1S1012277 8.1.2.1
CVE-2015-8844 ssg1S1012277 8.1.2.1
CVE-2015-8845 ssg1S1012277 8.1.2.1
CVE-2015-8956 ssg1S1012277 8.1.2.1
CVE-2016-2053 ssg1S1012277 8.1.2.1
CVE-2016-2069 ssg1S1012277 8.1.2.1
CVE-2016-2384 ssg1S1012277 8.1.2.1
CVE-2016-2847 ssg1S1012277 8.1.2.1
CVE-2016-3070 ssg1S1012277 8.1.2.1
CVE-2016-3156 ssg1S1012277 8.1.2.1
CVE-2016-3699 ssg1S1012277 8.1.2.1
CVE-2016-4569 ssg1S1012277 8.1.2.1
CVE-2016-4578 ssg1S1012277 8.1.2.1
CVE-2016-4581 ssg1S1012277 8.1.2.1
CVE-2016-4794 ssg1S1012277 8.1.2.1
CVE-2016-5412 ssg1S1012277 8.1.2.1
CVE-2016-5828 ssg1S1012277 8.1.2.1
CVE-2016-5829 ssg1S1012277 8.1.2.1
CVE-2016-6136 ssg1S1012277 8.1.2.1
CVE-2016-6198 ssg1S1012277 8.1.2.1
CVE-2016-6327 ssg1S1012277 8.1.2.1
CVE-2016-6480 ssg1S1012277 8.1.2.1
CVE-2016-6828 ssg1S1012277 8.1.2.1
CVE-2016-7117 ssg1S1012277 8.1.2.1
CVE-2016-10229 ssg1S1012277 8.1.2.1
CVE-2016-0634 ssg1S1012278 8.1.2.1

3.2 APARs Resolved

Each APAR entry below lists the APAR number, affected products, severity, and description, followed by the symptom, environment, trigger, workaround, resolving release, and feature tags.
HU01792 (All, HIPER): When a DRAID array has multiple drive failures, and the number of failed drives is greater than the number of rebuild areas in the array, it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash.
Symptom: Loss of Access to Data
Environment: Systems using DRAID
Trigger: None
Workaround: None
Resolved in: 8.1.2.1. Feature tags: Distributed RAID
HU01769 (All, Critical): Systems with DRAID arrays with more than 131,072 extents may experience multiple warmstarts due to a backend SCSI UNMAP issue.
Symptom: Loss of Access to Data
Environment: Systems running v8.1.1 or later
Trigger: Creating a DRAID array with more than 131,072 extents on SSDs
Workaround: Disable UNMAP at a system level by issuing a "svctask chsystem -unmap off" command
Resolved in: 8.1.2.1. Feature tags: Distributed RAID
HU01720 (All, HIPER): An issue in the handling of compressed volume shrink operations, in the presence of EasyTier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group.
Symptom: Loss of Access to Data
Environment: Systems running v8.1 or later using EasyTier with compressed volumes
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: Compression, EasyTier
HU01866 (SVC, HIPER): A faulty PSU sensor in a node can fill the SEL log, causing the service processor (BMC) to disable logging. If a snap is subsequently taken from the node, a timeout will occur and the node will be taken offline. It is possible for this to affect both nodes in an I/O group.
Symptom: Loss of Access to Data
Environment: SVC systems using SV1 model nodes
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: System Monitoring
HU01718 (SVC, V7000, V5000, Critical): Hung I/O due to issues on the inter-site links can lead to multiple node warmstarts.
Symptom: Loss of Access to Data
Environment: Systems using Global Mirror
Trigger: Problems with the inter-site link
Workaround: Ensure no issues on inter-site links
Resolved in: 8.1.2.0. Feature tags: Global Mirror
HU01723 (All, Critical): A timing window issue around nodes leaving and re-joining the cluster can lead to hung I/O and node warmstarts.
Symptom: Loss of Access to Data
Environment: All systems
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: Reliability Availability Serviceability
HU01735 (All, Critical): Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes.
Symptom: Offline Volumes
Environment: All systems
Trigger: Multiple power failures
Workaround: None
Resolved in: 8.1.2.0. Feature tags: RAID
HU01767 (All, Critical): Reads of 4K/8K blocks from an array can, under exceptional circumstances, return invalid data. For more details refer to this Flash.
Symptom: Loss of Access to Data
Environment: Systems running v7.8.0 or earlier
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: RAID, Thin Provisioning
HU01771 (SVC, V7000, High Importance): An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline.
Symptom: Loss of Redundancy
Environment: SVC & V7000 systems running v7.8 or later
Trigger: Node CMOS battery issue
Workaround: None
Resolved in: 8.1.2.0. Feature tags: System Monitoring
HU01494 (All, Suggested): A change to the FC port mask may fail even though connectivity would be sufficient.
Symptom: None
Environment: Systems running v7.7.1 or later using FC connectivity
Trigger: Issuing a "svctask chsystem -localfcportmask" command (see the example below)
Workaround: None
Resolved in: 8.1.2.0. Feature tags: Command Line Interface
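
For reference, the local FC port mask is a binary string in which the rightmost bit represents Fibre Channel port 1; the mask value below is purely illustrative:

  # Restrict local (node-to-node) traffic to FC ports 1-4
  svctask chsystem -localfcportmask 0000000000001111

Prior to this fix, such a change could be rejected even when the remaining connectivity was sufficient.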
HU01619 (All, Suggested): A misreading of the PSU register can lead to failure events being logged incorrectly.
Symptom: None
Environment: Systems running v7.6 or later
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: System Monitoring
HU01664 (All, Suggested): A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade.
Symptom: Single Node Warmstart
Environment: Systems running v7.8 or later
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: System Update
HU01715 (All, Suggested): Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart.
Symptom: Single Node Warmstart
Environment: Systems using HyperSwap
Trigger: A rmvolumecopy command followed by an expandvdisksize command
Workaround: Stop I/O to the volume approximately 10 minutes prior to expanding it (see the example below)
Resolved in: 8.1.2.0. Feature tags: HyperSwap
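
A sketch of the workaround sequence; the volume name vol0, the copy ID, and the new size are hypothetical:

  # Remove one copy of the volume
  svctask rmvolumecopy -copy 1 vol0
  # Quiesce host I/O to vol0 and wait approximately 10 minutes
  # before expanding the volume
  svctask expandvdisksize -size 100 -unit gb vol0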
HU01725 (All, Suggested): The snap collection audit log selection filter can incorrectly skip some of the latest logs.
Symptom: None
Environment: Systems running v7.7.1 or later
Trigger: None
Workaround: Manually collect the audit logs using the GUI support page
Resolved in: 8.1.2.0. Feature tags: System Monitoring
HU01727 (V7000, V5000, Suggested): Due to a memory accounting issue, an out-of-range access attempt will cause a node warmstart.
Symptom: Single Node Warmstart
Environment: Storwize systems
Trigger: None
Workaround: None
Resolved in: 8.1.2.0
HU01740 (All, Suggested): The timeout setting for key server commands may be too brief when the server is busy, causing those commands to fail.
Symptom: None
Environment: Systems running v7.8 or later using encryption
Trigger: Entering the mkkeyserver command
Workaround: Retry the command (see the example below)
Resolved in: 8.1.2.0. Feature tags: Encryption
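
For reference, a sketch of the command that can encounter the timeout; the IP address and port are examples (5696 is the usual KMIP port):

  # Define a key server; if the command times out while the server is busy, retry it
  svctask mkkeyserver -ip 192.0.2.10 -port 5696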
HU01750 (All, Suggested): An issue in heartbeat handling between nodes can cause a node warmstart.
Symptom: Single Node Warmstart
Environment: All systems
Trigger: None
Workaround: None
Resolved in: 8.1.2.0. Feature tags: Reliability Availability Serviceability
HU01756 (V7000, Suggested): A scheduling issue may cause a config node warmstart.
Symptom: Single Node Warmstart
Environment: Storwize V7000 Gen 2 systems running v7.8 or later
Trigger: None
Workaround: None
Resolved in: 8.1.2.0

4. Useful Links

Support Websites:
  • Update Matrices, including detailed build versions
  • Support Information pages providing links to the following information:
      • Interoperability information
      • Product documentation
      • Limitations and restrictions, including maximum configuration limits
  • Supported Drive Types and Firmware Levels
  • SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference
  • Software Upgrade Test Utility
  • Software Upgrade Planning