Release Note for V9000 Family Block Storage Products


This is the release note for the 8.1.2 release of the V9000 family of block storage products. It details the issues resolved in all Program Temporary Fixes (PTFs) between 8.1.2.0 and 8.1.2.1. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs and Flashes Resolved
  4. Supported Upgrade Paths
  5. Useful Links

1. New Features

The following new feature has been introduced in the 8.1.2 release:

2. Known Issues and Restrictions

Each known issue or restriction below is followed by the release in which it was introduced ("n/a" where the issue is not tied to a specific release).

Upgrading to V8.1.2.1 is currently restricted due to an issue that can occur during the upgrade process.

This is a temporary restriction that will be lifted in a future PTF.

8.1.2.1

Systems making use of HyperSwap should not be upgraded to 8.1.2.

This issue will be resolved in a future release.

8.1.2.0

Enhanced Call Home should not be used on systems running v8.1.1.0 or later.

This is a temporary restriction that will be lifted in a future PTF.

8.1.1.0

All fixes will be applied to MTM 9846/8-AE2 enclosures in the V9000 system.

For MTM 9846/8-AE3 enclosures, load the 1.5.1.0 firmware from Fix Central directly on the AE3 enclosure to get the same updates. AE3 enclosures are only updated when firmware is loaded directly on them.

8.1.1.0

Spectrum Control v5.2.15 is not supported for systems running v8.1.0.2 or later. Spectrum Control v5.2.15.2 is supported.

If a configuration has previously been added, all subsequent probes will fail after upgrading to Spectrum Control v5.2.15. This issue can be resolved by upgrading to Spectrum Control v5.2.15.2.

8.1.0.2

When configuring Remote Support Assistance, the connection test will report a fault, and opening a connection will report Connected followed shortly by Connection failed.

Even though it states "Connection failed", a connection may still have been opened successfully.

This issue will be resolved in a future release.

8.1.0.1

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the memory above 64GB.

Under some circumstances it may also be necessary to remove and re-add each node in turn (an illustrative command sketch follows this entry).

8.1.0.0
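
The following is a minimal, illustrative sketch of this procedure for a single node, run from the system CLI. The node name (node2), panel name and I/O group shown are placeholders, and the exact addnode parameters depend on the configuration; follow the procedure documented in the IBM Knowledge Center rather than treating this as a definitive sequence.

  # List node names, IDs and status before making any change
  lsnode

  # Re-detect the node hardware so that memory above 64GB becomes usable
  chnodehw node2

  # If still required, remove and re-add each node in turn, waiting for the
  # node to return to online state before moving to the next one
  rmnode node2
  addnode -panelname 78ABC01 -iogrp io_grp0    # placeholder panel name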

Remote Support Assistance (RSA) is not supported with IPv6 service IP addresses.

This is a temporary restriction that will be lifted in a future PTF.

8.1.0.0

AIX operating systems will not get the full benefit from the hot spare node feature unless they have the dynamic tracking feature (dyntrk) enabled (an illustrative command sketch follows this entry).

8.1.0.0
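
As an illustration, dynamic tracking is enabled on the AIX fscsi protocol devices that connect the host to the system. The device name fscsi0 below is a placeholder, and the fast_fail error recovery setting is shown only because it is commonly configured alongside dyntrk; confirm the recommended host attachment settings for your configuration in the IBM Knowledge Center.

  # Show the current dynamic tracking and FC error recovery settings
  lsattr -El fscsi0 -a dyntrk -a fc_err_recov

  # Enable dynamic tracking (and fast I/O failure); -P defers the change until
  # the device is next reconfigured or the host is rebooted
  chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P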

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server endpoints on the system to occasionally be reported as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out for some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

7.8.0.0

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

7.8.0.0

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

7.4.0.0

If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product.

7.1.0.0

Windows 2008 host paths may become unavailable following a node replacement procedure.

Refer to this flash for more information on this restriction.

6.4.0.0

Intra-system Global Mirror is not supported.

Refer to this flash for more information on this restriction.

6.1.0.0

Systems, with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

n/a

Hosts may disconnect when using VMware vSphere 5.5.0 Update 2 or vSphere 6.0.

Refer to this flash for more information.

n/a

If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid but have now been lifted:

Systems with greater than 128GB RAM should not be upgraded to 8.1.2.

8.1.2.0

A 9846-AE3 expansion enclosure cannot be added to Spectrum Control. If a 9846-AE3 expansion enclosure is part of a V9000 configuration, less information will be displayed on certain screens.

8.1.0.2

Customers with attached hosts running zLinux should not upgrade to v8.1.

This is a temporary restriction that will be lifted in a future PTF.

8.1.0.0

3. Issues Resolved

This release contains all of the fixes included in the 8.1.1.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier    Link for additional information    Resolved in
CVE-2016-10708 ibm10717661 8.1.2.1
CVE-2016-10142 ibm10717931 8.1.2.1
CVE-2017-11176 ibm10717931 8.1.2.1
CVE-2018-1433 ssg1S1012263 8.1.2.1
CVE-2018-1434 ssg1S1012263 8.1.2.1
CVE-2018-1438 ssg1S1012263 8.1.2.1
CVE-2018-1461 ssg1S1012263 8.1.2.1
CVE-2018-1462 ssg1S1012263 8.1.2.1
CVE-2018-1463 ssg1S1012263 8.1.2.1
CVE-2018-1464 ssg1S1012263 8.1.2.1
CVE-2018-1465 ssg1S1012263 8.1.2.1
CVE-2018-1466 ssg1S1012263 8.1.2.1
CVE-2016-6210 ssg1S1012276 8.1.2.1
CVE-2016-6515 ssg1S1012276 8.1.2.1
CVE-2013-4312 ssg1S1012277 8.1.2.1
CVE-2015-8374 ssg1S1012277 8.1.2.1
CVE-2015-8543 ssg1S1012277 8.1.2.1
CVE-2015-8746 ssg1S1012277 8.1.2.1
CVE-2015-8812 ssg1S1012277 8.1.2.1
CVE-2015-8844 ssg1S1012277 8.1.2.1
CVE-2015-8845 ssg1S1012277 8.1.2.1
CVE-2015-8956 ssg1S1012277 8.1.2.1
CVE-2016-2053 ssg1S1012277 8.1.2.1
CVE-2016-2069 ssg1S1012277 8.1.2.1
CVE-2016-2384 ssg1S1012277 8.1.2.1
CVE-2016-2847 ssg1S1012277 8.1.2.1
CVE-2016-3070 ssg1S1012277 8.1.2.1
CVE-2016-3156 ssg1S1012277 8.1.2.1
CVE-2016-3699 ssg1S1012277 8.1.2.1
CVE-2016-4569 ssg1S1012277 8.1.2.1
CVE-2016-4578 ssg1S1012277 8.1.2.1
CVE-2016-4581 ssg1S1012277 8.1.2.1
CVE-2016-4794 ssg1S1012277 8.1.2.1
CVE-2016-5412 ssg1S1012277 8.1.2.1
CVE-2016-5828 ssg1S1012277 8.1.2.1
CVE-2016-5829 ssg1S1012277 8.1.2.1
CVE-2016-6136 ssg1S1012277 8.1.2.1
CVE-2016-6198 ssg1S1012277 8.1.2.1
CVE-2016-6327 ssg1S1012277 8.1.2.1
CVE-2016-6480 ssg1S1012277 8.1.2.1
CVE-2016-6828 ssg1S1012277 8.1.2.1
CVE-2016-7117 ssg1S1012277 8.1.2.1
CVE-2016-10229 ssg1S1012277 8.1.2.1
CVE-2016-0634 ssg1S1012278 8.1.2.1

3.2 APARs and Flashes Resolved

Reference    Severity    Description    Resolved in    Feature Tags
HU01792 S1 HIPER (Highly Pervasive): When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array, it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to the following Flash. 8.1.2.1 Distributed RAID
HU01769 S1 Systems with DRAID arrays, with more than 131,072 extents, may experience multiple warmstarts due to a backend SCSI UNMAP issue 8.1.2.1 Distributed RAID
HU01720 S1 HIPER (Highly Pervasive): An issue in the handling of compressed volume shrink operations, in the presence of Easy Tier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group 8.1.2.0 EasyTier, Compression
HU01866 S1 HIPER (Highly Pervasive): A faulty PSU sensor, in an AC3 node, can fill the sel log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group 8.1.2.0 System Monitoring
FLASH-24631 S1 Flash module failure causes system power loss 8.1.2.0 Reliability Availability Serviceability
FLASH-25053 S1 Flash Module rebuild can run out of tracking structures 8.1.2.0 Reliability Availability Serviceability
FLASH-25164 S1 Flash Module rebuild allocation failures can return incorrect status 8.1.2.0 Reliability Availability Serviceability
FLASH-25875 S1 Removing the config node may cause a system parity issue requiring a fast certify. If the certify times out the interface adapters in the partner node will fail 8.1.2.0 System Update
FLASH-26091 S1 Removing the config node during a hardware upgrade may stall all I/O 8.1.2.0 System Update
FLASH-26092 S1 An issue during an upgrade may lead to data corruption 8.1.2.0 System Update
FLASH-26093 S1 Upgrades of FlashSystem 840/900 behind 8.1.x virtualized products can result in outage 8.1.2.0 Reliability Availability Serviceability
HU01718 S1 Hung I/O due to issues on the inter-site links can lead to multiple node warmstarts 8.1.2.0 Global Mirror
HU01723 S1 A timing window issue around nodes leaving and re-joining clusters can lead to hung I/O and node warmstarts 8.1.2.0 Reliability Availability Serviceability
HU01735 S1 Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes 8.1.2.0 RAID
HU01767 S1 Reads of 4K/8K from an array can under exceptional circumstances return invalid data 8.1.2.0 RAID, Thin Provisioning
HU01966 S1 Quorum disks cannot be initialised if a node is offline or removed from the cluster. If quorum disks fail, this can lead to loss of access to data, until the node is re-added. 8.1.2.0 Quorum
FLASH-24121 S2 System checksum errors on RAID Reads may lead to drive failures. 8.1.2.0 RAID
HU01771 S2 An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline 8.1.2.0 System Monitoring
IT23747 S2 For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance 8.1.2.0 Distributed RAID
FLASH-22369 S3 rmvdisk can unexpectedly fail causing a warmstart. 8.1.2.0 Command Line Interface
FLASH-23991 S3 Adapter FRU replacement during boot upgrade of Lancer can cause a warmstart. 8.1.2.0 Reliability Availability Serviceability
FLASH-24742 S3 Call Home is sending stale data. 8.1.2.0 System Monitoring
FLASH-25458 S3 On rebuild timeout, interfaces are not reset or recovered. 8.1.2.0
HU01494 S3 A change to the FC port mask may fail even though connectivity would be sufficient 8.1.2.0 Command Line Interface
HU01619 S3 A misreading of the PSU register can lead to failure events being logged incorrectly 8.1.2.0 System Monitoring
HU01664 S3 When a node first restarts during an upgrade a rare timing window issue may result in a warmstart causing the upgrade to fail 8.1.2.0 System Update
HU01715 S3 Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart 8.1.2.0 HyperSwap
HU01725 S3 Snap collection audit log selection filter can incorrectly skip some of the latest logs 8.1.2.0 System Monitoring
HU01740 S3 The timeout setting for key server commands may be too brief when the server is busy causing those commands to fail 8.1.2.0 Encryption
HU01750 S3 An issue in heartbeat handling between nodes can cause a node warmstart 8.1.2.0 Reliability Availability Serviceability
FLASH-7931 S4 Include system name in Call Home heartbeats. 8.1.2.0 System Monitoring

4. Supported Upgrade Paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.

5. Useful Links

Description Link
Support Website IBM Knowledge Center
IBM FlashSystem Fix Central V9000
Updating the system IBM Knowledge Center
IBM Redbooks Redbooks
Contacts IBM Planetwide