Release Note for V9000 Family Block Storage Products


This is the release note for the 8.1.1 release of the V9000 family of block storage products. It details the issues resolved in all Program Temporary Fixes (PTFs) between 8.1.1.0 and 8.1.1.2. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Supported upgrade paths
  5. Useful Links

1. New Features

The following new features have been introduced in the 8.1.1 release:

The following new feature has been introduced in the 8.1.1.1 release:

2. Known Issues and Restrictions

Each restriction below is followed by the release in which it was introduced.

Upgrading to V8.1.1.2 is currently restricted due to an issue that can occur during the upgrade process.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.1.2

All fixes will be applied to MTM 9846/8-AE2 enclosures in the V9000 system.

However, for an MTM 9846/8-AE3 enclosure, in order to get the same updates, please load 1.5.0.1 from Fix Central on the AE3 enclosure. The AE3 will only be updated when firmware is loaded directly on these enclosures.

Introduced in: 8.1.1.0

Spectrum Control v5.2.15 is not supported for systems running v8.1.0.2 or later; Spectrum Control v5.2.15.2 is supported.

If a config has previously been added, then all subsequent probes will fail after upgrading to Spectrum Control v5.2.15. This issue can be resolved by upgrading to Spectrum Control v5.2.15.2.

Introduced in: 8.1.0.2

When configuring Remote Support Assistance, the connection test will report a fault, and opening a connection will report Connected, followed shortly by Connection failed.

Even though it states "Connection failed", a connection may still be successfully opened.

This issue will be resolved in a future release.

Introduced in: 8.1.0.1

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw on each node to enable access to the extra memory above 64GB; a sketch of the procedure follows this entry.

Under some circumstances it may also be necessary to remove and re-add each node in turn.

Introduced in: 8.1.0.0
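A minimal sketch of the procedure, assuming the standard Spectrum Virtualize CLI over SSH; the node name is an example:

  # List the nodes in the system, then apply the hardware change to each
  # node in turn, waiting for it to come back online before continuing
  lsnode
  chnodehw node1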

RSA is not supported with IPv6 service IP addresses.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.0.0

AIX operating systems will not get the full benefit of the hot spare node feature unless they have the dynamic tracking feature (dyntrk) enabled; a sketch of checking and enabling it follows this entry.

Introduced in: 8.1.0.0
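As an illustration, dynamic tracking can be checked and enabled per Fibre Channel adapter on the AIX host (fscsi0 is an example device name):

  # Check whether dynamic tracking is enabled for this FC adapter
  lsattr -El fscsi0 -a dyntrk
  # Enable it; -P records the change to be applied at the next reboot
  chdev -l fscsi0 -a dyntrk=yes -P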

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server endpoints on the system to occasionally report as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

Introduced in: 7.8.0.0
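To see whether a system is currently affected, the key server status and the event log can be reviewed from the CLI; a brief sketch, assuming the lskeyserver and lseventlog commands available at this code level:

  # Show configured key server endpoints and their reported status
  lskeyserver
  # Review the event log for Error Code 1785 entries
  lseventlog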

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

Introduced in: 7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

Introduced in: 7.8.0.0

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

Introduced in: 7.4.0.0

If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product.

Introduced in: 7.1.0.0

Windows 2008 host paths may become unavailable following a node replacement procedure.

Refer to this flash for more information on this restriction.

Introduced in: 6.4.0.0

Intra-System Global Mirror is not supported.

Refer to this flash for more information on this restriction.

Introduced in: 6.1.0.0

Systems with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power, can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced in: n/a

Host disconnects may occur when using VMware vSphere 5.5.0 Update 2 or vSphere 6.0.

Refer to this flash for more information.

Introduced in: n/a

If an update stalls or fails, contact IBM Support for further assistance (a status-check sketch follows this entry).

Introduced in: n/a
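As a starting point before contacting support, the update state can be inspected from the CLI; a sketch assuming the lsupdate command available at recent code levels:

  # Show the overall state and progress of a system update
  lsupdate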
The following restrictions were valid in earlier releases but have now been lifted:

A 9846-AE3 expansion enclosure cannot be added to Spectrum Control. If a 9846-AE3 expansion enclosure is part of a V9000 configuration then less information will be displayed on certain screens.

Introduced in: 8.1.0.2
FlashSystem 840 systems running with an array created on firmware prior to v1.2.x.x do not support the SCSI UNMAP or WRITE SAME with Unmap commands. Support for these commands was added in v8.1.0.2; however, that PTF does not correctly identify 840 arrays created on these earlier firmware versions. Customers with FlashSystem 840 backends should not upgrade their V9000 systems to v8.1.1.0 until the proper checks are complete.

The issue can be avoided by disabling unmap (see the sketch below) and asking IBM Remote Technical Support for an action plan to create new arrays that support unmap.

Introduced in: 8.1.0.2
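A sketch of disabling unmap support system-wide, assuming the chsystem -unmap setting present at 8.1.x code levels; confirm the exact procedure for your configuration with IBM Remote Technical Support first:

  # Illustrative only: turn off SCSI unmap support system-wide
  # (verify the parameter name for your code level with IBM Support)
  chsystem -unmap off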
HyperSwap should not be used on systems with 9846-AC2 or AC3 control enclosures and 9846-AE3 expansion enclosures.

Introduced in: 8.1.0.2

Customers with attached hosts running zLinux should not upgrade to v8.1.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.0.0

3. Issues Resolved

This release contains all of the fixes included in the 8.1.0.2 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2016-10708 ibm10717661 8.1.1.2
CVE-2016-10142 ibm10717931 8.1.1.2
CVE-2017-11176 ibm10717931 8.1.1.2
CVE-2018-1433 ssg1S1012263 8.1.1.2
CVE-2018-1434 ssg1S1012263 8.1.1.2
CVE-2018-1438 ssg1S1012263 8.1.1.2
CVE-2018-1461 ssg1S1012263 8.1.1.2
CVE-2018-1462 ssg1S1012263 8.1.1.2
CVE-2018-1463 ssg1S1012263 8.1.1.2
CVE-2018-1464 ssg1S1012263 8.1.1.2
CVE-2018-1465 ssg1S1012263 8.1.1.2
CVE-2018-1466 ssg1S1012263 8.1.1.2
CVE-2016-6210 ssg1S1012276 8.1.1.2
CVE-2016-6515 ssg1S1012276 8.1.1.2
CVE-2013-4312 ssg1S1012277 8.1.1.2
CVE-2015-8374 ssg1S1012277 8.1.1.2
CVE-2015-8543 ssg1S1012277 8.1.1.2
CVE-2015-8746 ssg1S1012277 8.1.1.2
CVE-2015-8812 ssg1S1012277 8.1.1.2
CVE-2015-8844 ssg1S1012277 8.1.1.2
CVE-2015-8845 ssg1S1012277 8.1.1.2
CVE-2015-8956 ssg1S1012277 8.1.1.2
CVE-2016-2053 ssg1S1012277 8.1.1.2
CVE-2016-2069 ssg1S1012277 8.1.1.2
CVE-2016-2384 ssg1S1012277 8.1.1.2
CVE-2016-2847 ssg1S1012277 8.1.1.2
CVE-2016-3070 ssg1S1012277 8.1.1.2
CVE-2016-3156 ssg1S1012277 8.1.1.2
CVE-2016-3699 ssg1S1012277 8.1.1.2
CVE-2016-4569 ssg1S1012277 8.1.1.2
CVE-2016-4578 ssg1S1012277 8.1.1.2
CVE-2016-4581 ssg1S1012277 8.1.1.2
CVE-2016-4794 ssg1S1012277 8.1.1.2
CVE-2016-5412 ssg1S1012277 8.1.1.2
CVE-2016-5828 ssg1S1012277 8.1.1.2
CVE-2016-5829 ssg1S1012277 8.1.1.2
CVE-2016-6136 ssg1S1012277 8.1.1.2
CVE-2016-6198 ssg1S1012277 8.1.1.2
CVE-2016-6327 ssg1S1012277 8.1.1.2
CVE-2016-6480 ssg1S1012277 8.1.1.2
CVE-2016-6828 ssg1S1012277 8.1.1.2
CVE-2016-7117 ssg1S1012277 8.1.1.2
CVE-2016-10229 ssg1S1012277 8.1.1.2
CVE-2016-0634 ssg1S1012278 8.1.1.2

3.2 APARs and Flashes Resolved

Reference Severity Description Resolved in Feature Tags
HU01720 S1 HIPER (Highly Pervasive): An issue in the handling of compressed volume shrink operations, in the presence of Easy Tier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group 8.1.1.2 EasyTier, Compression
HU01792 S1 HIPER (Highly Pervasive): When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array, it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to the following Flash 8.1.1.2 Distributed RAID
HU01767 S1 Reads of 4K/8K from an array can under exceptional circumstances return invalid data 8.1.1.2 RAID, Thin Provisioning
HU01769 S1 A SCSI UNMAP issue can cause repeated node warmstarts 8.1.1.2 Distributed RAID
HU01771 S2 An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline 8.1.1.2 System Monitoring
HU01619 S3 A misreading of the PSU register can lead to failure events being logged incorrectly 8.1.1.2 System Monitoring
HU01664 S3 A timing window issue, during an upgrade, can cause the node restarting to warmstart, stalling the upgrade 8.1.1.2 System Update
HU01740 S3 The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail 8.1.1.2 Encryption
HU00247 S1 A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99% 8.1.1.1 RAID, Distributed RAID
IT19192 S1 An issue in the handling of GUI certificates may cause warmstarts leading to a T2 recovery 8.1.1.1 Graphical User Interface, Reliability Availability Serviceability
IT23747 S2 For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance 8.1.1.1 Distributed RAID
HU01655 S3 The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results leading to a premature End-of-Life error being reported 8.1.1.1 Drives
HU01730 S3 When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter 8.1.1.1 GUI Fix Procedure
FLASH-25772 S4 In rare cases, a PSU fan can become stuck at high speed unnecessarily. 8.1.1.1
HU01726 S1 HIPER (Highly Pervasive): A slow raid member drive, in an MDisk, may cause node warmstarts and the MDisk to go offline for a short time 8.1.1.0 Distributed RAID
HU01618 S1 When using the charraymember CLI command, if a member id is entered that is greater than the maximum number of members in a TRAID array, then a T2 recovery will be initiated 8.1.1.0 RAID
HU01620 S1 Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a T2 may occur 8.1.1.0 Transparent Cloud Tiering
HU01671 S1 Metadata between two nodes in an I/O group can become out of step leaving one node unaware of work scheduled on its partner. This can lead to stuck array synchronisation and false 1691 events 8.1.1.0 RAID
HU01678 S1 Entering an invalid parameter in the addvdiskaccess command may initiate a T2 recovery 8.1.1.0 Command Line Interface
HU01701 S1 Following loss of all logins to an external controller, that is providing quorum, when the controller next logs in it will not be automatically used for quorum 8.1.1.0 HyperSwap
HU01420 S2 An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive 8.1.1.0 Distributed RAID
HU01525 S2 During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable 8.1.1.0 Compression, System Update
HU01632 S2 A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts 8.1.1.0 Reliability Availability Serviceability
HU01190 S3 Where a controller, which has been assigned to a specific site, has some logins intentionally removed, then the system can continue to display the controller as degraded, even when the DMP has been followed and errors fixed 8.1.1.0 Backend Storage
HU01512 S3 During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart 8.1.1.0 Distributed RAID
HU01602 S3 When security scanners send garbage data to SVC/Storwize iSCSI target addresses a node warmstart may occur 8.1.1.0 iSCSI
HU01633 S3 Even though synchronisation has completed a RAID array may still show progress to be at 99% 8.1.1.0 RAID
HU01654 S3 There may be a node warmstart when a switch of direction, in a HyperSwap relationship, fails to complete properly 8.1.1.0 HyperSwap
HU01659 S3 Power supply LED can be seen to flash in the absence of an error condition 8.1.1.0 System Monitoring
HU01747 S3 The incorrect detection of a cache issue can lead to a node warmstart 8.1.1.0 Cache
IT20586 S3 Due to an issue in Lancer G5 firmware, after a node reboot the LED of the 10GbE port may remain amber even though the port is working normally 8.1.1.0 Reliability Availability Serviceability

4. Supported upgrade paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.

5. Useful Links

Support Website: IBM Knowledge Center
IBM FlashSystem Fix Central: V9000
Updating the system: IBM Knowledge Center
IBM Redbooks: Redbooks
Contacts: IBM Planetwide