Release Note for V9000 Family Block Storage Products


This release note applies to the V9000 family of block storage products. It covers the 7.8 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.8.0.0 and 7.8.1.16. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 1 November 2022.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs and Flashes Resolved
  4. Supported upgrade paths
  5. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.8.0 release:

The following new features have been introduced in the 7.8.1 release:

The following feature has been introduced in the 7.8.1.9 release:

2. Known Issues and Restrictions

Each entry below describes the issue, followed by the release in which it was introduced.

Validation in the Upload Support Package feature will reject the new case number format in the PMR field.

This is a known issue that may be lifted in a future PTF. The fix can be tracked using APAR HU02392.

Introduced in: 7.8.1.0

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server end points, on the system, to occasionally report as degraded or offline. The issue intermittently occurs when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

Introduced in: 7.8.0.0
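
For the key server timeout issue above, it can be useful to confirm from outside the cluster that each key server endpoint accepts TLS connections promptly. The following is a minimal diagnostic sketch and is not part of the product: the hostnames are hypothetical, the KMIP port 5696 is assumed (a common SKLM default, verify against your configuration), and certificate verification is deliberately skipped because only reachability and response time are being measured.

```python
# Minimal reachability/latency probe for key server endpoints (illustrative only).
import socket
import ssl
import time

KEY_SERVERS = ["keyserver1.example.com", "keyserver2.example.com"]  # hypothetical hostnames
PORT = 5696          # assumed KMIP port; adjust to your key server setup
TIMEOUT_SECONDS = 5  # connections slower than this are worth investigating

context = ssl.create_default_context()
context.check_hostname = False       # reachability check only,
context.verify_mode = ssl.CERT_NONE  # so certificate validation is skipped

for host in KEY_SERVERS:
    start = time.monotonic()
    try:
        with socket.create_connection((host, PORT), timeout=TIMEOUT_SECONDS) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                elapsed = time.monotonic() - start
                print(f"{host}:{PORT} TLS handshake completed in {elapsed:.2f}s")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}:{PORT} unreachable or slow: {exc}")
```

A consistently slow or failing handshake from outside the cluster suggests the timeout is environmental rather than a system fault.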

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

Introduced in: 7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

Introduced in: 7.8.0.0

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

Introduced in: 7.4.0.0

If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product.

Introduced in: 7.1.0.0

Windows 2008 host paths may become unavailable following a node replacement procedure.

Refer to this flash for more information on this restriction.

Introduced in: 6.4.0.0

Intra-System Global Mirror is not supported.

Refer to this flash for more information on this restriction.

Introduced in: 6.1.0.0

Systems, with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced in: n/a

Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0.

Refer to this flash for more information.

Introduced in: n/a
If an update stalls or fails then contact IBM Support for further assistance.

Introduced in: n/a
The following restrictions were valid for previous PTFs, but have now been lifted:

In the GUI, when filtering volumes by host, if there are more than 50 host objects, then the host list will not include the host names.

This issue has been resolved by APAR HU01687 in PTF v7.8.1.5.

Introduced in: 7.8.1.3

No upgrade or service activity should be attempted while a Transparent Cloud Tiering Snapshot task is in progress.

Introduced in: 7.8.0.0

The drive limit remains 1056 drives per cluster.

Introduced in: 7.8.0.0

3. Issues Resolved

This release contains all of the fixes included in the 7.7.1.3 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs/Flashes, or both. Consult both tables below to understand the complete set of fixes included in the release.
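
When checking whether an installed PTF level already contains a particular fix, the installed level can be compared numerically against the "Resolved in" column. The sketch below is illustrative only (not an IBM tool); it assumes four-part dotted levels such as 7.8.1.16 and that later PTFs in this 7.8 stream are cumulative, as described above.

```python
# Illustrative helper (not an IBM tool): check whether an installed 7.8.x PTF level
# already includes a fix, given the "Resolved in" level from the tables below.
# Assumes four-part dotted levels such as "7.8.1.16" and a cumulative PTF stream.

def parse_level(level):
    parts = [int(p) for p in level.split(".")]
    return tuple(parts + [0] * (4 - len(parts)))  # pad short levels with zeros

def includes_fix(installed_level, resolved_in):
    """Return True if installed_level is at or above the level the fix shipped in."""
    return parse_level(installed_level) >= parse_level(resolved_in)

# Example: CVE-2019-11477 is listed below as resolved in 7.8.1.11
print(includes_fix("7.8.1.16", "7.8.1.11"))  # True  - fix included
print(includes_fix("7.8.1.9", "7.8.1.11"))   # False - fix not included
```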

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier | Link for Additional Information | Resolved in
CVE-2018-25032 6622021 7.8.1.16
CVE-2022-0778 6622017 7.8.1.15
CVE-2021-35603 6622019 7.8.1.15
CVE-2021-35550 6622019 7.8.1.15
CVE-2021-29873 6497111 7.8.1.14
CVE-2020-2781 6445063 7.8.1.13
CVE-2020-13935 6445063 7.8.1.13
CVE-2020-14577 6445063 7.8.1.13
CVE-2020-14578 6445063 7.8.1.13
CVE-2020-14579 6445063 7.8.1.13
CVE-2019-5544 6250889 7.8.1.12
CVE-2019-2964 6250887 7.8.1.12
CVE-2019-2989 6250887 7.8.1.12
CVE-2018-12404 6250885 7.8.1.12
CVE-2019-11477 1164286 7.8.1.11
CVE-2019-11478 1164286 7.8.1.11
CVE-2019-11479 1164286 7.8.1.11
CVE-2019-2602 1073958 7.8.1.10
CVE-2018-3180 ibm10884526 7.8.1.10
CVE-2018-12547 ibm10884526 7.8.1.10
CVE-2008-5161 ibm10874368 7.8.1.9
CVE-2018-5391 ibm10872368 7.8.1.8
CVE-2018-5732 ibm10741135 7.8.1.8
CVE-2018-11776 ibm10741137 7.8.1.8
CVE-2017-17449 ibm10872364 7.8.1.8
CVE-2017-18017 ibm10872364 7.8.1.8
CVE-2018-1517 ibm10872456 7.8.1.8
CVE-2018-2783 ibm10872456 7.8.1.8
CVE-2018-12539 ibm10872456 7.8.1.8
CVE-2018-1775 ibm10872486 7.8.1.8
CVE-2017-17833 ibm10872546 7.8.1.8
CVE-2018-11784 ibm10872550 7.8.1.8
CVE-2016-10708 ibm10717661 7.8.1.6
CVE-2016-10142 ibm10717931 7.8.1.6
CVE-2017-11176 ibm10717931 7.8.1.6
CVE-2018-1433 ssg1S1012263 7.8.1.6
CVE-2018-1434 ssg1S1012263 7.8.1.6
CVE-2018-1438 ssg1S1012263 7.8.1.6
CVE-2018-1461 ssg1S1012263 7.8.1.6
CVE-2018-1462 ssg1S1012263 7.8.1.6
CVE-2018-1463 ssg1S1012263 7.8.1.6
CVE-2018-1464 ssg1S1012263 7.8.1.6
CVE-2018-1465 ssg1S1012263 7.8.1.6
CVE-2018-1466 ssg1S1012263 7.8.1.6
CVE-2016-6210 ssg1S1012276 7.8.1.6
CVE-2016-6515 ssg1S1012276 7.8.1.6
CVE-2013-4312 ssg1S1012277 7.8.1.6
CVE-2015-8374 ssg1S1012277 7.8.1.6
CVE-2015-8543 ssg1S1012277 7.8.1.6
CVE-2015-8746 ssg1S1012277 7.8.1.6
CVE-2015-8812 ssg1S1012277 7.8.1.6
CVE-2015-8844 ssg1S1012277 7.8.1.6
CVE-2015-8845 ssg1S1012277 7.8.1.6
CVE-2015-8956 ssg1S1012277 7.8.1.6
CVE-2016-2053 ssg1S1012277 7.8.1.6
CVE-2016-2069 ssg1S1012277 7.8.1.6
CVE-2016-2384 ssg1S1012277 7.8.1.6
CVE-2016-2847 ssg1S1012277 7.8.1.6
CVE-2016-3070 ssg1S1012277 7.8.1.6
CVE-2016-3156 ssg1S1012277 7.8.1.6
CVE-2016-3699 ssg1S1012277 7.8.1.6
CVE-2016-4569 ssg1S1012277 7.8.1.6
CVE-2016-4578 ssg1S1012277 7.8.1.6
CVE-2016-4581 ssg1S1012277 7.8.1.6
CVE-2016-4794 ssg1S1012277 7.8.1.6
CVE-2016-5412 ssg1S1012277 7.8.1.6
CVE-2016-5828 ssg1S1012277 7.8.1.6
CVE-2016-5829 ssg1S1012277 7.8.1.6
CVE-2016-6136 ssg1S1012277 7.8.1.6
CVE-2016-6198 ssg1S1012277 7.8.1.6
CVE-2016-6327 ssg1S1012277 7.8.1.6
CVE-2016-6480 ssg1S1012277 7.8.1.6
CVE-2016-6828 ssg1S1012277 7.8.1.6
CVE-2016-7117 ssg1S1012277 7.8.1.6
CVE-2016-10229 ssg1S1012277 7.8.1.6
CVE-2016-0634 ssg1S1012278 7.8.1.6
CVE-2017-5647 ssg1S1010892 7.8.1.3
CVE-2016-2183 ssg1S1010205 7.8.1.1
CVE-2016-5546 ssg1S1010205 7.8.1.1
CVE-2016-5547 ssg1S1010205 7.8.1.1
CVE-2016-5548 ssg1S1010205 7.8.1.1
CVE-2016-5549 ssg1S1010205 7.8.1.1
CVE-2017-5638 ssg1S1010113 7.8.1.0
CVE-2016-4461 ssg1S1010883 7.8.1.0
CVE-2016-5385 ssg1S1009581 7.8.0.2
CVE-2016-5386 ssg1S1009581 7.8.0.2
CVE-2016-5387 ssg1S1009581 7.8.0.2
CVE-2016-5388 ssg1S1009581 7.8.0.2
CVE-2016-6796 ssg1S1010114 7.8.0.2
CVE-2016-6816 ssg1S1010114 7.8.0.2
CVE-2016-6817 ssg1S1010114 7.8.0.2
CVE-2016-2177 ssg1S1010115 7.8.0.2
CVE-2016-2178 ssg1S1010115 7.8.0.2
CVE-2016-2183 ssg1S1010115 7.8.0.2
CVE-2016-6302 ssg1S1010115 7.8.0.2
CVE-2016-6304 ssg1S1010115 7.8.0.2
CVE-2016-6306 ssg1S1010115 7.8.0.2
CVE-2016-5696 ssg1S1010116 7.8.0.2
CVE-2016-2834 ssg1S1010117 7.8.0.2
CVE-2016-5285 ssg1S1010117 7.8.0.2
CVE-2016-8635 ssg1S1010117 7.8.0.2
CVE-2017-6056 ssg1S1010022 7.8.0.0

3.2 APARs and Flashes Resolved

Reference | Severity | Description | Resolved in | Feature Tags
HU02342 S1 Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state 7.8.1.15 RAID
HU02406 S1 An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access.  For more details refer to the following Flash  7.8.1.15 Interoperability
HU02471 S1 After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue 7.8.1.15 Global Mirror With Change Volumes, FlashCopy
HU02332 & HU02336 S3 When an I/O is received, from a host, with invalid or inconsistent SCSI data but a good checksum it may cause a node warmstart 7.8.1.15 Hosts
HU02429 S1 System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI 7.8.1.14 System Monitoring
HU02277 S1 HIPER (Highly Pervasive): RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss.  For more details refer to the following  Flash 7.8.1.13 RAID
HU02338 S1 HIPER (Highly Pervasive): An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image 7.8.1.13 FlashCopy
HU02222 S1 Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a Hyperswap volume, then there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to the following Flash 7.8.1.13 Global Mirror with Change Volumes
HU02201 & HU02221 S2 Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors 7.8.1.13 Drives
HU02238 S1 HIPER (Highly Pervasive): Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations.  For more details refer to the following Flash 7.8.1.12 FlashCopy, Global Mirror, Metro Mirror
HU01970 S1 When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted, with -force, then all nodes may repeatedly warmstart 7.8.1.12 Global Mirror With Change Volumes
HU02197 S1 Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery 7.8.1.12 FlashCopy
IT25367 S1 A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type 7.8.1.12 Drives
HU01832 S3 Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart 7.8.1.12 Reliability Availability Serviceability
HU01868 S3 After deleting an encrypted external MDisk, it is possible for the 'encrypted' status of volumes to change to 'no', even though all remaining MDisks are encrypted 7.8.1.12 Encryption
HU01917 S3 Chrome browser support requires a self-signed certificate to include subject alternate name 7.8.1.12 Graphical User Interface
HU02014 S1 HIPER (Highly Pervasive): After a loss of power, where an AC3 node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue 7.8.1.11 Reliability Availability Serviceability
FLASH-29348 S1 Flash module failure causes loss of access 7.8.1.11 Reliability Availability Serviceability
HU01887 S1 In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts 7.8.1.11 Command Line Interface, Host Cluster
HU02043 S1 Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 7.8.1.11 Support Data Collection
HU02063 S1 HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB 7.8.1.11 HyperSwap
IT26257 S1 Starting a relationship, when the remote volume is offline, may result in a T2 recovery 7.8.1.11 HyperSwap
FLASH-28948 S2 Active rebuild commands stuck after XBAR failure 7.8.1.11 RAID
HU01836 S2 When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts 7.8.1.11 HyperSwap
HU01923 S2 An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts 7.8.1.11 Global Mirror
HU01944 S2 Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance 7.8.1.11 Reliability Availability Serviceability
HU01952 S2 When the compression accelerator hardware driver detects an uncorrectable error the node will reboot 7.8.1.11 Compression
HU02049 S2 GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance 7.8.1.11 Graphical User Interface
HU01830 S3 Missing security-enhancing HTTP response headers 7.8.1.11 Security
HU01863 S3 In rare circumstances, a drive replacement may result in a "ghost drive" (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state) 7.8.1.11 Drives
HU01892 S3 LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported 7.8.1.11 Backend Storage
HU01981 S3 Although an issue in the 16Gb HBA firmware is handled correctly it can still cause a node warmstart 7.8.1.11 Reliability Availability Serviceability
HU01988 S3 In the Monitoring -> 3D view page, the "export to csv" button does not function 7.8.1.11 Graphical User Interface
HU02102 S3 Excessive processing time required for FlashCopy bitmap operations, associated with large (>20TB) Global Mirror change volumes, may lead to a node warmstart 7.8.1.11 Global Mirror With Change Volumes
HU02085 S3 Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios 7.8.1.11 Global Mirror
HU01781 S1 An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes 7.8.1.10
HU01888 & HU01997 S1 An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart 7.8.1.10 FlashCopy
HU01966 S1 Systems are unable to initialise quorum disks while a node is offline. This can lead to loss of access to data 7.8.1.10 Quorum
HU01972 S2 When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended leading to multiple warmstarts 7.8.1.10 RAID, Distributed RAID
HU00744 S3 Single node warmstart due to an accounting issue within the cache component 7.8.1.10 Cache
HU00921 S3 A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes 7.8.1.10
HU01737 S3 On the "Update System" screen, for "Test Only", if a valid code image is selected, in the "Run Update Test Utility" dialog, then clicking the "Test" button will initiate a system update 7.8.1.10 System Update
HU01915 & IT28654 S3 Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust 7.8.1.10 Encryption
HU01617 S1 HIPER (Highly Pervasive): Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery 7.8.1.9 FlashCopy
HU01708 S1 HIPER (Highly Pervasive): A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks 7.8.1.9 RAID
HU01865 S1 HIPER (Highly Pervasive): When creating a Hyperswap relationship using addvolumecopy, or similar methods, the system should perform a synchronisation operation to copy the data of the original copy to the new copy. In some cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) 7.8.1.9 HyperSwap
HU01913 S1 HIPER (Highly Pervasive): A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access 7.8.1.9 Distributed RAID
HU01723 S1 A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts 7.8.1.9 Reliability Availability Serviceability
HU01876 S1 Where systems are connected to controllers that have FC ports capable of acting as initiators and targets, node warmstarts can occur when NPIV is enabled 7.8.1.9 Backend Storage
IT27460 S1 Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits 7.8.1.9 Reliability Availability Serviceability
IT29040 S1 Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access 7.8.1.9 RAID, Distributed RAID
FLASH-27909 S2 The system may attempt to progress an upgrade, in the presence of a fault, resulting in a failed upgrade. 7.8.1.9 Hosts
FLASH-27919 S2 A failing HBA may cause a node to warmstart 7.8.1.9 Hosts
HU01907 S2 An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error 7.8.1.9 Reliability Availability Serviceability
FLASH-27861 S3 A SNMP query returns a "Timeout: No Response" message. 7.8.1.9 Hosts
FLASH-27867 S3 Repeated restarting of the "xivagentd" daemon will prevent XIV installation. 7.8.1.9 Hosts
HU01485 S3 When an AC3 node is started, with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
7.8.1.9 System Monitoring
HU01659 S3 On AC3 systems the Node Fault LED may be seen to flash in the absence of an error condition.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
7.8.1.9 System Monitoring
HU01849 S3 An excessive number of SSH sessions may lead to a node warmstart 7.8.1.9 System Monitoring
IT26049 S3 An issue with CPU scheduling may cause the GUI to respond slowly 7.8.1.9 Graphical User Interface
HU01492 S1 HIPER (Highly Pervasive): All ports of a 16G HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter 7.8.1.8 Reliability Availability Serviceability
HU01726 S1 HIPER (Highly Pervasive): A slow raid member drive in an MDisk may cause node warmstarts and the MDisk to go offline for a short time 7.8.1.8 Distributed RAID
HU01940 S1 HIPER (Highly Pervasive): Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low 7.8.1.8 Drives
HU01572 S1 SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access 7.8.1.8 iSCSI
HU01678 S1 Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery 7.8.1.8 Command Line Interface
HU01735 S1 Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes 7.8.1.8 RAID
HU01774 S1 After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts 7.8.1.8 iSCSI
HU01799 S1 Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart 7.8.1.8 HyperSwap
HU01825 S1 Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery 7.8.1.8 FlashCopy
HU01847 S1 FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts 7.8.1.8 FlashCopy
HU01899 S1 In a HyperSwap cluster, when the primary I/O group has a dead domain, nodes will repeatedly warmstart 7.8.1.8 HyperSwap
IT25850 S1 I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access 7.8.1.8 Distributed RAID
FLASH-26391 & FLASH-26388 & FLASH-26117 S2 Improve timing to prevent erroneous flash module failures which in rare cases can lead to an outage 7.8.1.8 Reliability Availability Serviceability
FLASH-27417 S2 Improved RAID error handling for unresponsive flash modules to prevent rare data error 7.8.1.8 RAID
HU01507 S2 Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when a space-efficient copy is added to a volume with an existing compressed copy 7.8.1.8 Volume Mirroring
HU01579 S2 In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive 7.8.1.8 Quorum, Drives
HU01661 S2 A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synching 7.8.1.8 HyperSwap
HU01733 S2 Canister information, for the High Density Expansion Enclosure, may be incorrectly reported. 7.8.1.8 Reliability Availability Serviceability
HU01797 S2 Hitachi G1500 backend controllers may exhibit higher than expected latency 7.8.1.8 Backend Storage
HU01813 S2 An issue with Global Mirror stream recovery handling at secondary sites can adversely impact replication performance 7.8.1.8 Global Mirror
HU01824 S2 Switching replication direction for HyperSwap relationships can lead to long I/O timeouts 7.8.1.8 HyperSwap
HU01839 S2 Where a VMware host is being served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, I/O performance for the volumes from the other controller will be adversely affected 7.8.1.8 Hosts
HU01842 S2 Bursts of I/O to Samsung Read-Intensive Drives can be interpreted as dropped frames against the resident slots leading to redundant drives being incorrectly failed 7.8.1.8 Drives
HU01846 S2 Silent battery discharge condition will unexpectedly take a SVC node offline putting it into a 572 service state 7.8.1.8 Reliability Availability Serviceability
HU01276 S3 An issue in the handling of debug data from the FC adapter can cause a node warmstart 7.8.1.8 Reliability Availability Serviceability
HU01467 S3 Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools 7.8.1.8 System Monitoring
HU01512 S3 During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart 7.8.1.8 Distributed RAID
HU01523 S3 An issue with FC adapter initialisation can lead to a node warmstart 7.8.1.8 Reliability Availability Serviceability
HU01556 S3 The handling of memory pool usage by Remote Copy may lead to a node warmstart 7.8.1.8 Global Mirror, Metro Mirror
HU01564 S3 The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to not stop 7.8.1.8 FlashCopy
HU01657 S3 The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart 7.8.1.8 Reliability Availability Serviceability
HU01715 S3 Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart 7.8.1.8 HyperSwap
HU01719 S3 Node warmstart due to a parity error in the HBA driver firmware 7.8.1.8 Reliability Availability Serviceability
HU01751 S3 When RAID attempts to flag a strip as bad and that strip has already been flagged a node may warmstart 7.8.1.8 RAID
HU01760 S3 FlashCopy map progress appears to be stuck at zero percent 7.8.1.8 FlashCopy
HU01786 S3 An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log 7.8.1.8 Drives
HU01790 S3 On the "Create Volumes" page the "Accessible I/O Groups" selection may not update when the "Caching I/O group" selection is changed 7.8.1.8 Graphical User Interface
HU01793 S3 The Maximum final size value in the Expand Volume dialog can display an incorrect value preventing expansion 7.8.1.8 Graphical User Interface
HU02028 S3 An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart 7.8.1.8 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
IT22591 S3 An issue in the HBA adapter firmware may result in node warmstarts 7.8.1.8 Reliability Availability Serviceability
IT24900 S3 Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD being assigned, delaying a return to service 7.8.1.8 Reliability Availability Serviceability
IT26836 S3 Loading drive firmware may cause a node warmstart 7.8.1.8 Drives
HU01802 S1 USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable 7.8.1.7 Encryption
HU01785 S2 An issue with memory mapping may lead to multiple node warmstarts 7.8.1.7
HU01792 S1 HIPER (Highly Pervasive): When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas, in the array, it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to the following Flash 7.8.1.6 Distributed RAID
HU01866 S1 HIPER (Highly Pervasive): A faulty PSU sensor, in an AC3 node, can fill the sel log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group 7.8.1.6 System Monitoring
HU01524 S1 When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up. 7.8.1.6 RAID
HU01767 S1 Reads of 4K/8K from an array can, under exceptional circumstances, return invalid data. For more details refer to the following Flash 7.8.1.6 RAID, Thin Provisioning
IT17919 S1 A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01420 S2 An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive 7.8.1.6 Distributed RAID
HU01476 S2 A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01623 S2 An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01630 S2 When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts 7.8.1.6 FlashCopy
HU01697 S2 A timeout issue in RAID member management can lead to multiple node warmstarts 7.8.1.6 RAID
HU01771 S2 An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline 7.8.1.6 System Monitoring
HU01446 S3 Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered leading to a node warmstart 7.8.1.6 Hosts
HU01472 S3 A locking issue in Global Mirror can cause a warmstart on the secondary cluster 7.8.1.6 Global Mirror
HU01619 S3 A misreading of the PSU register can lead to failure events being logged incorrectly 7.8.1.6 System Monitoring
HU01628 S3 In the GUI, on the Volumes page, whilst using the filter function, some volume entries may not be displayed until the page has completed loading 7.8.1.6 Graphical User Interface
HU01664 S3 A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade 7.8.1.6 System Update
HU01698 S3 A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted. 7.8.1.6 Compression
HU01740 S3 The timeout setting for key server commands may be too brief when the server is busy causing those commands to fail 7.8.1.6 Encryption
HU01747 S3 The incorrect detection of a cache issue can lead to a node warmstart 7.8.1.6 Cache
HU00247 S1 A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99% 7.8.1.5 RAID, Distributed RAID
HU01620 S1 Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur 7.8.1.5 Reliability Availability Serviceability
IC57642 S1 A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide 7.8.1.5 Reliability Availability Serviceability
IT19192 S1 An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery 7.8.1.5 Graphical User Interface, Reliability Availability Serviceability
IT23747 S2 For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance 7.8.1.5 Distributed RAID
HU01655 S3 The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results leading to a premature End-of-Life error being reported 7.8.1.5 Drives
HU01679 S3 An issue in the RAID component can, very occasionally, cause a single node warmstart 7.8.1.5 RAID
HU01687 S3 For 'volumes by host', 'ports by host' and 'volumes by pool' pages in the GUI, when the number of items is greater than 50, then the item name will not be displayed 7.8.1.5 Graphical User Interface
HU01704 S3 In systems using HyperSwap a rare timing window issue can result in a node warmstart 7.8.1.5 HyperSwap
HU01724 S3 An I/O lock handling issue between nodes can lead to a single node warmstart 7.8.1.5 RAID
HU01729 S3 Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node unable to progress may warmstart 7.8.1.5 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01730 S3 When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter 7.8.1.5 GUI Fix Procedure
HU01731 S3 When a node is placed into service mode it is possible for all compression cards within the node to be marked as failed. 7.8.1.5 Compression
IT23140 S3 When viewing the licensed functions GUI page the individual calculations for SCUs, for each tier, may be wrong. However the total is correct 7.8.1.5 Graphical User Interface
HU01706 S1 HIPER (Highly Pervasive): Areas of volumes written with all-zero data may contain non-zero data. For more details refer to the following Flash 7.8.1.4
FLASH-22972 & FLASH-23088 & FLASH-23261 & FLASH-23660 & FLASH-24216 & FLASH-22700 S1 HIPER (Highly Pervasive): Multiple enhancements made for Configuration Check Error (CCE) detection and handling. 7.8.1.3 Reliability Availability Serviceability
FLASH-23211 S1 Staggered battery end of life is needed to ensure that both system batteries will not reach end of life simultaneously. 7.8.1.3 Reliability Availability Serviceability
HU01625 S1 In systems with a consistency group of HyperSwap or Metro Mirror relationships if an upgrade attempts to commit whilst a relationship is out of synch then there may be multiple warmstarts and a Tier 2 recovery 7.8.1.3 System Update, HyperSwap, Metro Mirror
HU01646 S1 A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster 7.8.1.3 Reliability Availability Serviceability
IT23034 S1 With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy may result in a cluster-wide warmstart, necessitating a Tier 2 recovery 7.8.1.3 HyperSwap
FLASH-22356 S2 Validation should be performed on rebuild/xverify to avoid out of bound addresses. 7.8.1.3 RAID
FLASH-22664 S2 Improve interface adapter failure recovery mechanism to prevent failing the RAID controller. 7.8.1.3 RAID
FLASH-22901 S2 Certify failures should not cause both RAID controllers to be failed repeatedly. 7.8.1.3 RAID
FLASH-22939 S2 On the unlikely occasion that system node canisters are placed into service state and rebooted, there may be a corrupt array and dual canister failures. 7.8.1.3 Reliability Availability Serviceability
FLASH-22947 S2 Data sector check defeated by a corrupted field, which can lead to outages. 7.8.1.3 Reliability Availability Serviceability
HU01321 S2 Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring 7.8.1.3 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01481 S2 A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts 7.8.1.3 HyperSwap
HU01525 S2 During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable 7.8.1.3 Compression, System Update
HU01569 S2 When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes 7.8.1.3 Compression
HU01584 S2 An issue in array indexing can cause a RAID array to go offline repeatedly 7.8.1.3 RAID
HU01614 S2 After a node is upgraded hosts defined as TPGS may have paths set to inactive 7.8.1.3 Hosts
HU01632 S2 A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts 7.8.1.3 Reliability Availability Serviceability
HU01638 S2 When upgrading to v7.6 or later if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail. 7.8.1.3 System Update
HU01645 S2 After upgrading to v7.8 a reboot of a node will initiate a continual boot cycle 7.8.1.3 System Update
FLASH-22737 S3 Increase efficiency in low level data mapping to reduce I/O response times in certain pathological workloads. 7.8.1.3 Performance
FLASH-22938 S3 Array may move into the offline state after the active canister is removed and re-inserted into the enclosure. 7.8.1.3 Reliability Availability Serviceability
FLASH-23256 S3 After upgrading to release 1.4.7.0, one of the Configuration Check Error (CCE) detection and handling engines on the flash module is not automatically started. 7.8.1.3 System Update
FLASH-23316 S3 Shutting the system down with the stopsystem -force command could cause a warmstart if any flash modules are in the failed state or a RAID controller is in the service state. 7.8.1.3
FLASH-23484 S3 (HIPER) Prevent rare case of flash module sector errors causing a system outage. 7.8.1.3 Reliability Availability Serviceability
FLASH-23578 S3 Recovery backup only usable on the canister node that the command was issued from. 7.8.1.3
FLASH-23580 S3 Prevent timed out rebuild tasks from causing interfaces to be failed erroneously. 7.8.1.3 RAID
FLASH-23894 S3 RAID controller failure could cause a warmstart. 7.8.1.3 RAID
HU01385 S3 A warmstart may occur if a rmvolumecopy or rmrcrelationship command are issued on a volume while I/O is being forwarded to the associated copy 7.8.1.3 HyperSwap
HU01535 S3 An issue with Fibre Channel driver handling of command processing can result in a node warmstart 7.8.1.3
HU01582 S3 A compression issue in IP replication can result in a node warmstart 7.8.1.3 IP Replication
HU01624 S3 GUI response can become very slow in systems with a large number of compressed and uncompressed volumes 7.8.1.3 Graphical User Interface
HU01631 S3 A memory leak in Easy Tier when pools are in Balanced mode can lead to node warmstarts 7.8.1.3 EasyTier
HU01654 S3 There may be a node warmstart when a switch of direction in a HyperSwap relationship fails to complete properly 7.8.1.3 HyperSwap
HU01239 & HU01255 & HU01586 S1 HIPER (Highly Pervasive): The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access 7.8.1.2 Reliability Availability Serviceability
HU01626 S2 Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue 7.8.1.2 System Update
HU01505 S1 HIPER (Highly Pervasive): A non-redundant drive experiencing many errors can be taken offline, obstructing rebuild activity 7.8.1.1 Backend Storage, RAID
HU01528 S1 Both nodes may warmstart due to Sendmail throttling 7.8.1.1
IT15343 S2 Handling of memory allocation for the compression component can lead to warmstarts 7.8.1.1 Compression
IT19726 S2 Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for a FC command 7.8.1.1 Hosts
IT20627 S2 When Samsung RI drives are used as quorum disks a drive outage can occur 7.8.1.1 Quorum
HU01353 S3 CLI allows the input of carriage return characters into certain fields after cluster creation resulting in invalid cluster VPD and failed node adds 7.8.1.1 Command Line Interface
HU01396 S3 HBA firmware resources can become exhausted resulting in node warmstarts 7.8.1.1 Hosts
HU01462 S3 Systems with 12G SAS drives may experience a warmstart due to an uncorrectable error in the firmware 7.8.1.1 Drives
HU01484 S3 During a RAID array rebuild there may be node warmstarts 7.8.1.1 RAID
HU01496 S3 V9000 node type AC3 reports wrong FRU part number for compression accelerator 7.8.1.1 Command Line Interface
IT19973 S3 Call home emails may not be sent due to a failure to retry 7.8.1.1
HU01332 S4 Performance monitor and Spectrum Control show zero CPU utilisation for compression 7.8.1.1 System Monitoring
FLASH-12295 S1 Continuous and repeated loss of AC power on a PSU may, in rare cases, result in the report of a critical temperature fault. Using the provided cable-securing mechanisms is highly recommended to prevent this issue 7.8.1.0 Reliability Availability Serviceability
HU01474 S1 Host writes to a read-only secondary volume trigger I/O timeout warmstarts 7.8.1.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01479 S1 The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks 7.8.1.0 Distributed RAID
HU01483 S1 The mkdistributedarray command may get stuck in the prepare state; any interaction with the volumes in that array will result in multiple warmstarts 7.8.1.0 Distributed RAID
HU00747 S2 Node warmstarts can occur when drives become degraded 7.8.1.0 Backend Storage
HU01220 S2 Changing the type of a RC consistency group when a volume in a subordinate relationship is offline will cause a Tier 2 recovery 7.8.1.0 Global Mirror With Change Volumes, Global Mirror
HU01309 S2 For FC logins, on a node that is online for more than 200 days, if a fabric event makes a login inactive then the node may be unable to re-establish the login 7.8.1.0 Backend Storage
HU01371 S2 A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node 7.8.1.0 HyperSwap
HU01388 S2 Where a HyperSwap volume is the source of a FlashCopy mapping and the HyperSwap relationship is out of sync, when the HyperSwap volume comes back online, a switch of direction will occur and FlashCopy operation may delay I/O leading to node warmstarts 7.8.1.0 HyperSwap, FlashCopy
HU01394 S2 Node warmstarts may occur on systems which are performing Global Mirror replication, due to a low-probability timing window 7.8.1.0 Global Mirror
HU01395 S2 Malformed URLs sent by security scanners, whilst correctly discarded, can cause considerable exception logging on config nodes leading to performance degradation that can adversely affect remote copy 7.8.1.0 Global Mirror
HU01413 S2 Node warmstarts when establishing an FC partnership between a system on v7.7.1 or later with another system which in turn has a partnership to another system running v6.4.1 or earlier 7.8.1.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01416 S2 ISL configuration activity may cause a cluster-wide lease expiry 7.8.1.0 Reliability Availability Serviceability
HU01428 S2 Scheduling issue adversely affects performance resulting in node warmstarts 7.8.1.0 Reliability Availability Serviceability
HU01480 S2 Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI 7.8.1.0 Graphical User Interface, Command Line Interface
FLASH-17306 S3 An array with no spare did not report as degraded when a flash module was pulled 7.8.1.0 Reliability Availability Serviceability
FLASH-22143 S3 Improve stats performance to prevent SNMP walk connection failure 7.8.1.0
HU01057 S3 Slow GUI performance for some pages as the lsnodebootdrive command generates unexpected output 7.8.1.0 Graphical User Interface
HU01227 S3 High volumes of events may cause the email notifications to become stalled 7.8.1.0 System Monitoring
HU01322 S3 Due to the way flashcard drives handle self-checking, good status is not reported, resulting in 1370 errors in the Event Log 7.8.1.0 System Monitoring
HU01404 S3 A node warmstart may occur when a new volume is created using fast format and foreground I/O is submitted to the volume 7.8.1.0
HU01445 S3 Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart 7.8.1.0
HU01463 S3 SSH Forwarding is enabled on the SSH server 7.8.1.0
HU01466 S3 Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data 7.8.1.0 HyperSwap, Hosts
HU01470 S3 T3 might fail during svcconfig recover -execute while running chemail if the email_machine_address contains a comma 7.8.1.0 Reliability Availability Serviceability
HU01473 S3 Easy Tier migrates an excessive number of cold extents to an overloaded nearline array 7.8.1.0 EasyTier
HU01487 S3 Small increase in read response time for GMCV source volumes with additional FlashCopy maps 7.8.1.0 FlashCopy, Global Mirror With Change Volumes
HU01497 S3 A drive can still be offline even though the error is showing as corrected in the Event Log 7.8.1.0 Distributed RAID
HU01498 S3 GUI may be exposed to CVE-2017-5638 (see Section 3.1) 7.8.1.0
IT19232 S3 Storwize systems can report unexpected drive location errors as a result of a RAID issue 7.8.1.0
FLASH-21880 S1 HIPER (Highly Pervasive): After both a rebuild read failure and a data reconstruction failure, a SCSI read should fail 7.8.0.2
HU01410 S1 An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state 7.8.0.2 FlashCopy
HU01225 & HU01330 & HU01412 S1 Node warmstarts due to inconsistencies arising from the way cache interacts with compression 7.8.0.2 Compression, Cache
HU01442 S1 Upgrading to v7.7.1.5 or v7.8.0.1 with encryption enabled will result in multiple Tier 2 recoveries and a loss of access 7.8.0.2 Encryption, System Update
HU01461 S1 Arrays created using SAS attached 2.5 inch or 3.5 inch drives will not be encrypted. For more details refer to the following Flash 7.8.0.2 Encryption
HU00762 S2 Due to an issue in the cache component nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node 7.8.0.2 Reliability Availability Serviceability
HU01409 S2 Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over 7.8.0.2 Reliability Availability Serviceability
HU01426 S2 Systems running v7.6.1 or earlier, with compressed volumes, that upgrade to v7.8.0 or later will fail when the first node warmstarts and enters a service state 7.8.0.2 System Update
FLASH-21857 S3 Internal error found after upgrade 7.8.0.2 System Update
FLASH-22005 S3 Internal error encountered after the enclosure hit an out of memory error 7.8.0.2
HU01431 S3 Flash System statistics collection causes an Out-Of-Memory condition leading to a node warmstart 7.8.0.2 System Monitoring
HU01432 S3 Node warmstart due to an accounting issue within the cache component 7.8.0.2 Cache
IT18752 S3 When rmmdisk is used with "force" the validation process is supplied with incorrect parameters triggering a node warmstart 7.8.0.2 Command Line Interface
FLASH-21920 S4 CLI and GUI don't get updated with the correct flash module firmware version after flash module replacement 7.8.0.2 Graphical User Interface, Command Line Interface
HU01382 S1 Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access 7.8.0.1 Distributed RAID
HU00906 S1 When a compressed volume mirror copy is taken offline, write response times to the primary copy may reach prohibitively high levels leading to a loss of access to that volume 7.8.0.0 Compression, Volume Mirroring
HU01021 & HU01157 S1 A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes 7.8.0.0 Backend Storage
HU01193 S1 A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting 7.8.0.0 Distributed RAID
HU01267 S1 An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting 7.8.0.0 Global Mirror With Change Volumes
HU01320 S1 A rare timing condition can cause hung I/O leading to warmstarts on both nodes in an I/O group. Probability can be increased in the presence of failing drives 7.8.0.0 Hosts
HU01340 S1 A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade 7.8.0.0 System Update
HU01392 S1 Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery 7.8.0.0 FlashCopy
HU01455 S1 VMware hosts with ATS enabled can see LUN disconnects to volumes when GMCV is used 7.8.0.0 Global Mirror With Change Volumes
HU01635 S1 A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes 7.8.0.0 Hosts
HU01783 S1 Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550 7.8.0.0 Distributed RAID
HU01831 S1 Cluster-wide warmstarts may occur when the SAN delivers a FDISC frame with an invalid WWPN 7.8.0.0 Reliability Availability Serviceability
IT14917 S1 Node warmstarts due to a timing window in the cache component 7.8.0.0 Cache
HU00831 S2 Single node warmstart due to hung I/O caused by cache deadlock 7.8.0.0 Cache
HU01177 S2 A small timing window issue exists where a node warmstart or power failure can lead to repeated warmstarts of that node until a node rescue is performed 7.8.0.0 Reliability Availability Serviceability
HU01223 S2 The handling of a rebooted node's return to the cluster can occasionally become delayed resulting in a stoppage of inter cluster relationships 7.8.0.0 Metro Mirror
HU01254 S2 A fluctuation of input AC power can cause a 584 error on a node 7.8.0.0 Reliability Availability Serviceability
HU01347 S2 During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts 7.8.0.0 Thin Provisioning
HU01379 S2 Resource leak in the handling of Read Intensive drives leads to offline volumes 7.8.0.0
HU01381 S2 A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state 7.8.0.0 FlashCopy
HU01400 S2 When upgrading a V9000 with external 12F SAS enclosures to 7.7.1 the upgrade will stall 7.8.0.0 System Update
HU01516 S2 When node configuration data exceeds 8K in size some user defined settings may not be stored permanently resulting in node warmstarts 7.8.0.0 Reliability Availability Serviceability
IT16012 S2 Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance 7.8.0.0
IT17564 S2 All nodes in an I/O group may warmstart when a DRAID array experiences drive failures 7.8.0.0 Distributed RAID
FLASH-18607 S3 Call Home heartbeats are not being sent to Service Center 7.8.0.0
HU01098 S3 Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk 7.8.0.0 Backend Storage
HU01213 S3 The LDAP password is visible in the auditlog 7.8.0.0
HU01228 S3 Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries 7.8.0.0 Reliability Availability Serviceability
HU01230 S3 A host aborting an outstanding logout command can lead to a single node warmstart 7.8.0.0
HU01247 S3 When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result 7.8.0.0 FlashCopy
HU01264 S3 Node warmstart due to an issue in the compression optimisation process 7.8.0.0 Compression
HU01269 S3 A rare timing conflict between two processes may lead to a node warmstart 7.8.0.0
HU01304 S3 SSH authentication fails if multiple SSH keys are configured on the client 7.8.0.0
HU01323 S3 Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart 7.8.0.0 Volume Mirroring
HU01370 S3 lsfabric command may not list all logins when it is used with parameters 7.8.0.0 Command Line Interface
HU01374 S3 Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function, resulting in a node warmstart 7.8.0.0 Global Mirror
HU01399 S3 For certain config nodes the CLI Help commands may not work 7.8.0.0 Command Line Interface
HU01405 S3 SSD drives with vendor ID IBM-C050 and IBM-C051 are showing up as not being supported 7.8.0.0
IT18086 S3 When a volume is moved between I/O groups a node may warmstart 7.8.0.0

4. Supported upgrade paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.

5. Useful Links

Description | Link
Support Website | IBM Knowledge Center
IBM FlashSystem Fix Central | V9000
Updating the system | IBM Knowledge Center
IBM Redbooks | Redbooks
Contacts | IBM Planetwide