Release Note for V9000 Family Block Storage Products


This release note applies to IBM FlashSystem V9000 family block storage systems. It covers the 8.1 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.1.0.0 and 8.1.0.2. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs and Flashes Resolved
  4. Supported upgrade paths
  5. Useful Links

1. New Features

New features were introduced in the 8.1.0.0 release, and an additional new feature was introduced in the 8.1.0.2 PTF.

2. Known Issues and Restrictions

Each entry below describes a known issue or restriction, followed by the release in which it was introduced.

FlashSystem 840 systems running with an array created on firmware prior to v1.2.x.x do not support SCSI UNMAP or WRITE SAME with Unmap commands. Support for these commands was added in v8.1.0.2; however, this PTF does not correctly identify 840 arrays created on those earlier firmware versions. Customers with FlashSystem 840 backends should not upgrade their V9000 systems to v8.1.0.2 until the proper checks are complete.

The issue can be avoided by disabling unmap, as sketched after this entry, and asking IBM Remote Technical Support for an action plan to create new arrays that support unmap.

Introduced in: 8.1.0.2
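
A minimal sketch of the disable-unmap workaround follows, assuming the system-wide unmap setting exposed through the chsystem command in v8.1.0.x; the exact parameter name is an assumption and should be confirmed with IBM Remote Technical Support as part of the action plan:

    lssystem                  # review the unmap setting in the output (assumed field name)
    chsystem -unmap off       # assumed syntax: disable unmap system-wide until supported arrays are in place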

HyperSwap should not be used on systems with 9846-AC2 or AC3 control enclosures and 9846-AE3 expansion enclosures.

Introduced in: 8.1.0.2

Spectrum Control v5.2.15 is not supported for systems running v8.1.0.2 or later. Spectrum Control v5.2.15.2 is supported.

If a configuration has previously been added, all subsequent probes will fail after upgrading to Spectrum Control v5.2.15. This issue can be resolved by upgrading to Spectrum Control v5.2.15.2.

Introduced in: 8.1.0.2

When configuring Remote Support Assistance, the connection test will report a fault, and opening a connection will report Connected, followed shortly by Connection Failed.

Even though the status states "Connection Failed", a connection may still be successfully opened.

This issue will be resolved in a future release.

Introduced in: 8.1.0.1

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB.

Under some circumstances it may also be necessary to remove and re-add each node in turn.

Introduced in: 8.1.0.0
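
For reference, a minimal command sequence of the kind described above might look like the following sketch, where node1 is a placeholder; work on one node at a time and wait for it to come back online before moving to the next so that host access is maintained:

    lsnode             # identify the node names and IDs in the cluster
    lsnodehw node1     # compare the configured hardware with the detected hardware
    chnodehw node1     # apply the detected hardware; the node restarts to bring the additional memory online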

RSA is not supported with IPv6 service IP addresses.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.0.0

Customers with attached hosts running zLinux should not upgrade to v8.1.

This is a temporary restriction that will be lifted in a future PTF.

Introduced in: 8.1.0.0

AIX operating systems will not be able to get full benefit from the hot spare node feature unless they have the dynamic tracking feature enabled (dyntrk).

Introduced in: 8.1.0.0
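
As an illustration, dynamic tracking can be checked and enabled on an AIX host using the standard fscsi device attributes shown below; fscsi0 is a placeholder for each Fibre Channel protocol device, and the fast_fail setting is a commonly paired recommendation rather than a requirement stated in this note:

    lsattr -El fscsi0 -a dyntrk -a fc_err_recov                    # check the current settings
    chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P     # apply the change at the next reboot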

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server endpoints on the system to occasionally report as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

Introduced in: 7.8.0.0

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

Introduced in: 7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

Introduced in: 7.8.0.0

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

Introduced in: 7.4.0.0

If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product.

Introduced in: 7.1.0.0

Windows 2008 host paths may become unavailable following a node replacement procedure.

Refer to this flash for more information on this restriction.

Introduced in: 6.4.0.0

Intra-System Global Mirror is not supported.

Refer to this flash for more information on this restriction.

Introduced in: 6.1.0.0

Systems, with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced in: n/a
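
For context, path state on an affected SLES or RHEL host can be reviewed with the standard Linux multipath tooling shown in this sketch; it is a generic health check, not a fix, and the troubleshooting page referenced above remains the authoritative guidance:

    multipath -ll              # list multipath devices and the state of each path
    dmesg | grep -i ibmvfc     # review kernel messages from the ibmvfc driver for path-related events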

Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information.

Introduced in: n/a

If an update stalls or fails then contact IBM Support for further assistance.

Introduced in: n/a

The following restrictions were valid but have now been lifted:

A 9846-AE3 expansion enclosure cannot be added to Spectrum Control. If a 9846-AE3 expansion enclosure is part of a V9000 configuration, less information will be displayed on certain screens.

Introduced in: 8.1.0.2

The V9000 Hot Spare Node feature should not be used.

Introduced in: 8.1.0.0

Systems with expansion enclosures should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Systems using HyperSwap should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Systems with VMware ESXi (all versions) hosts attached using FCoE should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Customers with attached hosts running AIX 6.x or Solaris (with DMP) should not upgrade to v8.1.

Introduced in: 8.1.0.0

Systems using USB/Keyserver Co-existence should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Systems using Keyserver Encryption with IPv6 should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Systems with direct attached hosts should not be upgraded to v8.1.

Introduced in: 8.1.0.0

Systems using Transparent Cloud Tiering should not be upgraded to v8.1.

Introduced in: 8.1.0.0

3. Issues Resolved

This release contains all of the fixes included in the 7.8.1.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2017-1710 ssg1S1010788 8.1.0.1
CVE-2016-0634 ssg1S1012278 8.1.0.0

3.2 APARs and Flashes Resolved

Reference Severity Description Resolved in Feature Tags
HU01706 S1 HIPER (Highly Pervasive): Areas of volumes written with all-zero data may contain non-zero data. For more details refer to the following Flash. 8.1.0.2
HU01665 S1 HIPER (Highly Pervasive): In environments with slow MDisk groups, creating a new filesystem with default settings on a Linux host under parallel workloads can overwhelm the capabilities of the backend storage MDisk group and lead to warmstarts due to hung I/O on multiple nodes. 8.1.0.1 Hosts
HU01670 S1 HIPER (Highly Pervasive): Enabling RSA without a valid service IP address may cause multiple node warmstarts. 8.1.0.1 RSA
IT23034 S1 With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart necessitating a Tier 2 recovery. 8.1.0.1 HyperSwap
HU01700 S2 If a thin-provisioned or compressed volume is deleted, and another volume is immediately created with the same real capacity, warmstarts may occur. 8.1.0.1 Compression, Thin Provisioning
HU01673 S3 GUI rejects passwords that include special characters. 8.1.0.1 Graphical User Interface
FLASH-22972, 23088, 23256, 23261, 23660, 24216, 22700 S1 HIPER (Highly Pervasive): Multiple enhancements made for Configuration Check Error (CCE) detection and handling. 8.1.0.0 Reliability Availability Serviceability
HU01239 & HU01255 & HU01586 S1 HIPER (Highly Pervasive): The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access. 8.1.0.0 Reliability Availability Serviceability
HU01646 S1 HIPER (Highly Pervasive): A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster. 8.1.0.0 Reliability Availability Serviceability
HU01940 S1 HIPER (Highly Pervasive): Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low. 8.1.0.0 Drives
FLASH-23211 S1 Staggered battery end of life is needed to ensure that both system batteries will not reach end of life simultaneously. 8.1.0.0 Reliability Availability Serviceability
HU01321 S1 Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01490 S1 When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups. 8.1.0.0 iSCSI
HU01509 S1 Where a drive is generating medium errors, an issue in the handling of array rebuilds can result in an MDisk group being repeatedly taken offline. 8.1.0.0 RAID
HU01524 S1 When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up. 8.1.0.0 RAID
HU01549 S1 During a system upgrade, Hyper-V clustered hosts may experience a loss of access to any iSCSI-connected volumes. 8.1.0.0 iSCSI, System Update
HU01572 S1 SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access. 8.1.0.0 iSCSI
HU01583 S1 Running mkhostcluster with duplicate host names or IDs in the seedfromhost argument will cause a Tier 2 recovery. 8.1.0.0 Host Cluster
IC57642 S1 A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide. 8.1.0.0 Reliability Availability Serviceability
IT17919 S1 A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
FLASH-22356 S2 Validation should be performed on rebuild/xverify to avoid out of bound addresses. 8.1.0.0 RAID
FLASH-22664 S2 Improve interface adapter failure recovery mechanism to prevent failing the RAID controller. 8.1.0.0 RAID
FLASH-22901 S2 Certify failures should not cause both RAID controllers to be failed repeatedly. 8.1.0.0 RAID
FLASH-22939 S2 On the unlikely occasion that system node canisters are placed into service state and rebooted, there may be a corrupt array and dual canister failures. 8.1.0.0 Reliability Availability Serviceability
FLASH-22947 S2 Data sector check defeated by a corrupted field, which can lead to outages. 8.1.0.0 Reliability Availability Serviceability
HU01476 S2 A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01481 S2 A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts. 8.1.0.0 HyperSwap
HU01506 S2 Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts. 8.1.0.0 Volume Mirroring
HU01569 S2 When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes. 8.1.0.0 Compression
HU01579 S2 In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive. 8.1.0.0 Quorum, Drives
HU01584 S2 An issue in array indexing can cause a RAID array to go offline repeatedly. 8.1.0.0 RAID
HU01610 S2 The handling of the background copy backlog by FlashCopy can cause latency for other unrelated FlashCopy maps. 8.1.0.0 FlashCopy
HU01614 S2 After a node is upgraded hosts defined as TPGS may have paths set to inactive. 8.1.0.0 Hosts
HU01623 S2 An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01626 S2 Node downgrade from v7.8.x to v7.7.1 or earlier (for example during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue. 8.1.0.0 System Update
HU01630 S2 When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts. 8.1.0.0 FlashCopy
HU01697 S2 A timeout issue in RAID member management can lead to multiple node warmstarts. 8.1.0.0 RAID
FLASH-22737 S3 Increase efficiency in low level data mapping to reduce I/O response times in certain pathological workloads. 8.1.0.0 Performance
FLASH-22938 S3 Array may move into the offline state after the active canister is removed and re-inserted into the enclosure. 8.1.0.0 Reliability Availability Serviceability
FLASH-23256 S3 After upgrading to release 1.4.7.0, one of the Configuration Check Error (CCE) detection and handling engines on the flash module is not automatically started. 8.1.0.0 System Update
FLASH-23316 S3 Shutting the system down with the stopsystem -force command could cause a warmstart if any flash modules are in the failed state or a RAID controller is in the service state. 8.1.0.0
FLASH-23484 S3 (HIPER) Prevent rare case of flash module sector errors causing a system outage. 8.1.0.0 Reliability Availability Serviceability
FLASH-23578 S3 Recovery backup only usable on the canister node that the command was issued from. 8.1.0.0
FLASH-23580 S3 Prevent timed out rebuild tasks from causing interfaces to be failed erroneously. 8.1.0.0 RAID
FLASH-23894 S3 RAID controller failure could cause a warmstart. 8.1.0.0 RAID
HU01467 S3 Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools. 8.1.0.0 System Monitoring
HU01385 S3 A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy. 8.1.0.0 HyperSwap
HU01396 S3 HBA firmware resources can become exhausted resulting in node warmstarts. 8.1.0.0 Hosts
HU01446 S3 Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands a race condition may be triggered leading to a node warmstart. 8.1.0.0 Hosts
HU01454 S3 During an array rebuild a quiesce operation can become stalled leading to a node warmstart. 8.1.0.0 RAID, Distributed RAID
HU01458 S3 A node warmstart may occur when hosts submit writes to Remote Copy secondary volumes (which are in a read-only mode). 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01472 S3 A locking issue in Global Mirror can cause a warmstart on the secondary cluster. 8.1.0.0 Global Mirror
HU01521 S3 Remote Copy does not correctly handle STOP commands for relationships which may lead to node warmstarts. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01522 S3 A node warmstart may occur when a Fibre Channel frame is received with an unexpected value for host login type. 8.1.0.0 Hosts
HU01545 S3 A locking issue in the stats collection process may result in a node warmstart. 8.1.0.0 System Monitoring
HU01550 S3 Removing a volume with -force while it is still receiving I/O from a host may lead to a node warmstart. 8.1.0.0
HU01554 S3 Node warmstart may occur during a livedump collection. 8.1.0.0 Support Data Collection
HU01556 S3 The handling of memory pool usage by Remote Copy may lead to a node warmstart. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01563 S3 Where an IBM SONAS host id is used it can under rare circumstances cause a warmstart. 8.1.0.0
HU01566 S3 After upgrading to v7.8.0 or later, numerous 1370 errors are seen in the Event Log. 8.1.0.0
HU01573 S3 Node warmstart due to a stats collection scheduling issue. 8.1.0.0 System Monitoring
HU01582 S3 A compression issue in IP replication can result in a node warmstart. 8.1.0.0 IP Replication
HU01615 S3 A timing issue relating to process communication can result in a node warmstart. 8.1.0.0
HU01622 S3 If a Dense Drawer enclosure is put into maintenance mode during an upgrade of the enclosure management firmware then further upgrades to adjacent enclosures will be prevented. 8.1.0.0 System Update
HU01631 S3 A memory leak in Easy Tier when pools are in Balanced mode can lead to node warmstarts. 8.1.0.0 EasyTier
HU01653 S3 An automatic Tier 3 recovery process may fail due to a RAID indexing issue. 8.1.0.0 Reliability Availability Serviceability
HU01679 S3 An issue in the RAID component can, very occasionally, cause a single node warmstart. 8.1.0.0 RAID
HU01704 S3 In systems using HyperSwap, a rare timing window issue can result in a node warmstart. 8.1.0.0 HyperSwap
HU01729 S3 Remote Copy uses multiple streams to send data between clusters. During a stream disconnect, a node that is unable to progress may warmstart. 8.1.0.0 Metro Mirror, Global Mirror, Global Mirror With Change Volumes

4. Supported upgrade paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.

5. Useful Links

Description Link
Support Website: IBM Knowledge Center
IBM FlashSystem Fix Central: V9000
Updating the system: IBM Knowledge Center
IBM Redbooks: Redbooks
Contacts: IBM Planetwide