Release Note for V9000 Family Block Storage Products


This release note applies to V9000 family block storage systems. It is the release note for the 7.6 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.6.0.0 and 7.6.1.8. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 20 August 2020.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs and Flashes Resolved
  4. Supported Upgrade Paths
  5. Useful Links
Note: Detailed build version numbers are included in the Update Matrices, in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.6.0.0 release:

2. Known Issues and Restrictions

Each restriction below is followed by the release in which it was introduced.

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

Introduced: 7.4.0.0

If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product.

Introduced: 7.1.0.0

Windows 2008 host paths may become unavailable following a node replacement procedure.

Refer to this flash for more information on this restriction.

Introduced: 6.4.0.0

Intra-System Global Mirror is not supported.

Refer to this flash for more information on this restriction.

Introduced: 6.1.0.0

Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0.

Refer to this flash for more information.

Introduced: n/a

If an update stalls or fails, contact IBM Support for further assistance.

Introduced: n/a

The following restrictions were valid for previous PTFs, but have now been lifted:

iSCSI hosts cannot be connected to systems running V7.6.

Introduced: 7.6.0.0

Systems using HyperSwap cannot be upgraded to V7.6.

Prior to upgrading, any HyperSwap configuration needed to be removed, and then recreated after the upgrade (see the sketch at the end of this section).

Introduced: 7.6.0.0
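
Before this restriction was lifted, upgrade planning needed a check for any remaining HyperSwap configuration. The following minimal Python sketch is illustrative only and is not from the original document: it assumes SSH access to the cluster CLI, and it assumes that HyperSwap (active-active) relationships are reported by lsrcrelationship with a copy_type value of "activeactive"; verify the column name and value on your code level before relying on it.

    import subprocess

    def hyperswap_relationships(cluster_ip: str) -> list:
        """Return the names of relationships that look like HyperSwap
        (active-active). The copy_type column and value are assumptions."""
        result = subprocess.run(
            ["ssh", "superuser@" + cluster_ip, "lsrcrelationship", "-delim", ":"],
            capture_output=True, text=True, check=True,
        )
        lines = result.stdout.splitlines()
        if not lines:
            return []
        header = lines[0].split(":")
        rows = (dict(zip(header, line.split(":"))) for line in lines[1:])
        return [row["name"] for row in rows if row.get("copy_type") == "activeactive"]

    # On affected PTFs, only proceed with the upgrade once this list is empty.
    # print(hyperswap_relationships("192.0.2.10"))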

3. Issues Resolved

This release contains all of the fixes included in the 7.5.1.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs/Flashes, or both. Consult both tables below to understand the complete set of fixes included in the release.
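
For scripted checks, the "Resolved in" levels in the tables below can be compared against a system's running code level. This is a minimal sketch, not from the original document, assuming a simple numeric comparison of dotted PTF levels within the 7.6 stream covered by this release note:

    def parse_level(level: str) -> tuple:
        """Convert a dotted PTF level such as '7.6.1.8' into a comparable tuple."""
        return tuple(int(part) for part in level.split("."))

    def includes_fix(running_level: str, resolved_in: str) -> bool:
        """A fix is included when the running level is at or above the
        'Resolved in' level (valid within the 7.6 PTF stream only)."""
        return parse_level(running_level) >= parse_level(resolved_in)

    # Example: CVE-2017-5647 is resolved in 7.6.1.8 (see the table below).
    print(includes_fix("7.6.1.8", "7.6.1.8"))  # True
    print(includes_fix("7.6.1.7", "7.6.1.8"))  # False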

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for Additional Information Resolved in
CVE-2017-5647 ssg1S1010892 7.6.1.8
CVE-2017-5638 ssg1S1010113 7.6.1.8
CVE-2016-2183 ssg1S1010205 7.6.1.8
CVE-2016-5546 ssg1S1010205 7.6.1.8
CVE-2016-5547 ssg1S1010205 7.6.1.8
CVE-2016-5548 ssg1S1010205 7.6.1.8
CVE-2016-5549 ssg1S1010205 7.6.1.8
CVE-2016-6796 ssg1S1010114 7.6.1.7
CVE-2016-6816 ssg1S1010114 7.6.1.7
CVE-2016-6817 ssg1S1010114 7.6.1.7
CVE-2016-2177 ssg1S1010115 7.6.1.7
CVE-2016-2178 ssg1S1010115 7.6.1.7
CVE-2016-2183 ssg1S1010115 7.6.1.7
CVE-2016-6302 ssg1S1010115 7.6.1.7
CVE-2016-6304 ssg1S1010115 7.6.1.7
CVE-2016-6306 ssg1S1010115 7.6.1.7
CVE-2016-5696 ssg1S1010116 7.6.1.7
CVE-2016-2834 ssg1S1010117 7.6.1.7
CVE-2016-5285 ssg1S1010117 7.6.1.7
CVE-2016-8635 ssg1S1010117 7.6.1.7
CVE-2016-5385 ssg1S1009581 7.6.1.6
CVE-2016-5386 ssg1S1009581 7.6.1.6
CVE-2016-5387 ssg1S1009581 7.6.1.6
CVE-2016-5388 ssg1S1009581 7.6.1.6
CVE-2016-4461 ssg1S1010883 7.6.1.5
CVE-2016-1978 ssg1S1009280 7.6.1.5
CVE-2016-2107 ssg1S1009281 7.6.1.5
CVE-2016-2108 ssg1S1009281 7.6.1.5
CVE-2016-4430 ssg1S1009282 7.6.1.5
CVE-2016-4431 ssg1S1009282 7.6.1.5
CVE-2016-4433 ssg1S1009282 7.6.1.5
CVE-2016-4436 ssg1S1009282 7.6.1.5
CVE-2016-3092 ssg1S1009284 7.6.1.5
CVE-2017-6056 ssg1S1010022 7.6.1.3
CVE-2016-0475 ssg1S1005709 7.6.1.1
CVE-2015-1782 ssg1S1005710 7.6.1.1
CVE-2015-7181 ssg1S1005668 7.6.0.4
CVE-2015-7182 ssg1S1005668 7.6.0.4
CVE-2015-7183 ssg1S1005668 7.6.0.4
CVE-2015-3194 ssg1S1005669 7.6.0.4
CVE-2015-4872 ssg1S1005672 7.6.0.4
CVE-2015-2730 ssg1S1005576 7.6.0.0
CVE-2015-5209 ssg1S1005577 7.6.0.0
CVE-2015-3238 ssg1S1005581 7.6.0.0
CVE-2015-7575 ssg1S1005583 7.6.0.0

3.2 APARs and Flashes Resolved

Reference Severity Description Resolved in Feature Tags
HU01479 S1 HIPER (Highly Pervasive): The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed, resulting in offline MDisks 7.6.1.8 Distributed RAID
HU01505 S1 HIPER (Highly Pervasive): A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity 7.6.1.8 Backend Storage, RAID
HU01490 S1 When attempting to add/remove multiple IQNs to/from a host, the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups 7.6.1.8 iSCSI
HU01528 S1 Both nodes may warmstart due to Sendmail throttling 7.6.1.8
HU01549 S1 During a system upgrade Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes 7.6.1.8 iSCSI, System Update
HU01572 S1 SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access 7.6.1.8 iSCSI
HU01480 S2 Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI 7.6.1.8 Graphical User Interface, Command Line Interface
HU01506 S2 Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts 7.6.1.8 Volume Mirroring
HU01569 S2 When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes 7.6.1.8 Compression
IT19726 S2 Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command 7.6.1.8 Hosts
HU01228 S3 Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries 7.6.1.8 Reliability Availability Serviceability
HU01332 S3 Performance monitor and Spectrum Control show zero CPU utilisation for compression 7.6.1.8 System Monitoring
HU01391 S3 Storwize systems may experience a warmstart due to an uncorrectable error in the firmware 7.6.1.8 Drives
FLASH-21880 S1 HIPER (Highly Pervasive): After both a rebuild read failure and a data reconstruction failure, a SCSI read should fail 7.6.1.7
HU01447 S1 HIPER (Highly Pervasive): The management of FlashCopy grains during a restore process can miss some IOs 7.6.1.7 FlashCopy
FLASH-12295 S1 Continuous and repeated loss of AC power on a PSU may, in rare cases, result in the report of a critical temperature fault. Using the provided cable securing mechanisms is highly recommended to prevent this issue 7.6.1.7 Reliability Availability Serviceability
HU01193 S1 A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting 7.6.1.7 Distributed RAID
HU01225 & HU01330 & HU01412 S1 Node warmstarts due to inconsistencies arising from the way cache interacts with compression 7.6.1.7 Compression, Cache
HU01410 S1 An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state 7.6.1.7 FlashCopy
HU01499 S1 When an offline volume copy comes back online, under rare conditions, the flushing process can cause the cache to enter an invalid state, delaying I/O, and resulting in node warmstarts 7.6.1.7 Volume Mirroring, Cache
HU01783 S1 Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple T2 recoveries putting all nodes in service state with error 564 and/or 550 7.6.1.7 Distributed RAID
HU00762 S2 Due to an issue in the cache component, nodes within an I/O group are not able to form a caching pair and are serving I/O through a single node 7.6.1.7 Reliability Availability Serviceability
HU01262 S2 Cached data for a HyperSwap volume may only be destaged from a single node in an I/O group 7.6.1.7 HyperSwap
HU01409 S2 Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address from changing in the event of a failover 7.6.1.7 Reliability Availability Serviceability
IT14917 S2 Node warmstarts due to a timing window in the cache component. For more details, refer to the following Flash 7.6.1.7 Cache
FLASH-17306 S3 An array with no spare did not report as degraded when a flash module was pulled 7.6.1.7 Reliability Availability Serviceability
FLASH-21857 S3 Internal error found after upgrade 7.6.1.7 System Update
FLASH-22005 S3 Internal error encountered after the enclosure hit an out of memory error 7.6.1.7
FLASH-22143 S3 Improve stats performance to prevent SNMP walk connection failures 7.6.1.7
HU00831 S3 Single node warmstart due to hung I/O caused by cache deadlock. 7.6.1.7 Cache
HU01022 S3 Fibre channel adapter encountered a bit parity error resulting in a node warmstart 7.6.1.7 Hosts
HU01247 S3 When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result 7.6.1.7 FlashCopy
HU01399 S3 For certain config nodes the CLI Help commands may not work 7.6.1.7 Command Line Interface
HU01432 S3 Node warmstart due to an accounting issue within the cache component 7.6.1.7 Cache
FLASH-21920 S4 CLI and GUI don't get updated with the correct flash module firmware version after flash module replacement 7.6.1.7 Graphical User Interface, Command Line Interface
HU01109 S2 Multiple nodes can experience a lease expiry when a FC port is having communications issues 7.6.1.6
HU01219 S2 Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware 7.6.1.6
HU01221 S2 Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware 7.6.1.6
HU01226 S2 Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access 7.6.1.6 Global Mirror
HU01245 S2 Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart 7.6.1.6 Global Mirror With Change Volumes
IT16012 S2 Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance 7.6.1.6
HU01050 S3 DRAID rebuild incorrectly reports event code 988300 7.6.1.6 Distributed RAID
HU01063 S3 3PAR controllers do not support OTUR commands resulting in device port exclusions 7.6.1.6 Backend Storage
HU01187 S3 Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times 7.6.1.6
HU01234 S3 After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in the CLI 7.6.1.6 iSCSI
HU01258 S3 A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration 7.6.1.6 Compression
HU01353 S3 CLI allows the input of carriage return characters into certain fields after cluster creation resulting in invalid cluster VPD and failed node adds 7.6.1.6 Command Line Interface
FLASH-17825 S2 Internal error results when there are more than 8 storage enclosures 7.6.1.5
HU00271 S2 An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts 7.6.1.5 Global Mirror
HU01140 S2 Easy Tier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance 7.6.1.5 Easy Tier
HU01141 S2 Node warmstarts, possibly due to a network problem, when a CLI "mkippartnership" is issued. This may lead to loss of the configuration node, requiring a configuration data recovery operation 7.6.1.5 IP Replication
HU01182 S2 Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command 7.6.1.5
HU01183 S2 When removing multiple MDisks, a configuration data recovery operation may occur 7.6.1.5
HU01210 S2 A small number of systems have broken or disabled TPMs. For these systems, the generation of a new master key may fail preventing the system joining a cluster 7.6.1.5
HU01223 S2 The handling of a rebooted node's return to the cluster can occasionally become delayed resulting in a stoppage of inter cluster relationships 7.6.1.5 Metro Mirror
IT16337 S2 Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart 7.6.1.5
FLASH-17813 S3 CLI output for the "lsenclosurebattery" command erroneously reports that the replacement battery is online immediately after being installed 7.6.1.5 Command Line Interface
FLASH-18607 S3 Storage enclosures email state shows "disabled" following a CCU 7.6.1.5 System Update
FLASH-19616 S3 The GUI notification engine hangs causing issues with DMPs 7.6.1.5 GUI Fix Procedure
HU01017 S3 The result of CLI commands are sometimes not promptly presented in the GUI 7.6.1.5 Graphical User Interface
HU01074 S3 An unresponsive "testemail" command, possibly due to a congested network, may result in a single node warmstart 7.6.1.5 System Monitoring
HU01089 S3 The "svcconfig backup" CLI command fails when an I/O group name contains a hyphen 7.6.1.5
HU01110 S3 SVC supports SSH connections using RC4 based ciphers 7.6.1.5
HU01194 S3 A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing 7.6.1.5 VVols
HU01198 S3 Running the "comprestimator svctask analyzevdiskbysystem" command may cause the configuration node to warmstart 7.6.1.5 Comprestimator
HU01212 S3 GUI displays an incorrect time zone description for Moscow 7.6.1.5 Graphical User Interface
HU01214 S3 GUI and snap missing Easy Tier heat map information 7.6.1.5 Graphical User Interface, Support Data Collection
FLASH-19408 S4 A snap only collects SSD "smartctl" output from the configuration node 7.6.1.5 Support Data Collection
FLASH-19886 S4 The GUI System page does not display offline battery info properly 7.6.1.5 Graphical User Interface
FLASH-9798 S5 The CLI command "lsdrive drive_id" output does not reflect an updated "firmware_level" field after upgrade 7.6.1.5
FLASH-16264 S5 The PSU DMP with error code 1298 and event ID 085007 shows that the event is not fixed when it is marked as fixed 7.6.1.5 GUI Fix Procedure
FLASH-19255 S5 CCU Stalled with internal errors 7.6.1.5 System Update
FLASH-17998 S1 Internal error during error handling causes loss of access. 7.6.1.4
FLASH-17894 S1 Just a Bunch Of Flash (JBOF) errors incorrectly shows an XBAR fail causing loss of system access. 7.6.1.4
FLASH-17921 & FLASH-16402 S1 Incorrect device discovery during CCU can cause access and data loss. 7.6.1.4 System Update
HU00990 S1 A node warmstart on a cluster, with Global Mirror (GM) secondary volumes, can also result in a delayed response to hosts that are performing I/O to the GM primary volumes. 7.6.1.4 Global Mirror
HU01181 S1 Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details, refer to the following Flash 7.6.1.4 Compression
FLASH-17718 & FLASH-17948 S2 Internal error encountered with "Assert File /build/tms/lodestone15B/160225_0020/src/user/ic/icee_smi.c Line 889 Info (null)." 7.6.1.4
FLASH-17874 & FLASH-17873 S2 Prevent base SVC install packages from running on V9000. 7.6.1.4
FLASH-17957 & FLASH-15652 S2 After High Temp Shutdown of system, Array did not come back online. 7.6.1.4 Reliability Availability Serviceability
HU01060 S2 Prior warmstarts, perhaps due to a hardware error, can induce a "dormant state" within the FlashCopy code that may result in further warmstarts. 7.6.1.4 FlashCopy
HU01165 S2 Node warmstarts leading to both nodes going into 564 error state. 7.6.1.4 Thin Provisioning
HU01180 S2 When creating a snapshot on an ESX host, using VVols, a data recovery may occur. 7.6.1.4 Hosts, VVols
FLASH-16718 S3 For earlier 7.6 versions, the Power Interposer Board (PIB) cannot be replaced in the field without the array being removed on the enclosure. 7.6.1.4 Reliability Availability Serviceability
FLASH-17500 & FLASH-18051 S3 Internode communication issue causes CCU to stall. 7.6.1.4 System Update
FLASH-17633 S3 Rare warmstart encountered. 7.6.1.4
FLASH-17812 S3 Battery failure Directed Maintenance Procedure (DMP) for error code 1114 indicates "Unable to proceed". 7.6.1.4
FLASH-17856 S3 DMP for error 1114 for battery fault does not wait for a low charge battery FRU to charge. 7.6.1.4
FLASH-17887 & FLASH-15761 S3 Repeated CRC errors between interface and XBAR can fail flash module. 7.6.1.4
FLASH-18097 S3 Emulex HBA update for SVC nodes to handle multiple illegal frames. 7.6.1.4
FLASH-17494 & FLASH-10604 S3 Call home backlog held up newer call home event from reaching Service Center due to processing on the SVC nodes. 7.6.1.4
FLASH-17859 S3 Battery "percent_charge" stays at old value if battery is removed or goes offline. 7.6.1.4
FLASH-18220 S3 Cluster created with an 8x4 configuration reports degraded path with the 8th node included. 7.6.1.4
HU01046 S3 Storage pool free capacity may be incorrectly reported by the CLI. 7.6.1.4
HU01072 S3 In certain configurations, throttling too much may result in dropped I/O (Input/Output), which can lead to a single node warmstart. 7.6.1.4
HU01096 S3 Batteries may be seen to continuously recondition. 7.6.1.4
HU01104 S3 When using GMCV relationships, if a node in an I/O group loses communication with its partner, it may warmstart. 7.6.1.4 Global Mirror With Change Volumes
HU01143 S3 Where nodes are missing configuration files, some services will be prevented from starting. 7.6.1.4
HU01144 S3 Single node warmstart on the configuration node due to GUI contention. 7.6.1.4 Graphical User Interface
IT15366 S3 CLI command "lsportsas" may show unexpected port numbering. 7.6.1.4
FLASH-16113 & FLASH-16943 S4 Recover system links to blank page. 7.6.1.4 Graphical User Interface
FLASH-17833 & FLASH-16135 S4 PLIC & PLTX has issues and requires enhancements. 7.6.1.4
FLASH-16878 S4 Available capacity value is wrong. 7.6.1.4
FLASH-18145 S4 The Update GUI panel gives the example for an SVC update package rather than a V9000 update package. 7.6.1.4 Graphical User Interface
FLASH-17886 & FLASH-17857 S5 Reads of Invalid pages in JBOF are not logged. 7.6.1.4
FLASH-16313 S1 Authentication bypass using HTTP verb tampering is possible. 7.6.1.3
FLASH-17149 S1 PSoC issues eventually lead to both canisters going into service state. 7.6.1.3
FLASH-17656 S1 Degraded components are included in the system thermal algorithm. 7.6.1.3
FLASH-17135 S1 Issues result when the same call home manager processes run simultaneously. 7.6.1.3
FLASH-17650 S1 Improve internal Flash checking to prevent access loss. 7.6.1.3
FLASH-17418 S2 Improve error handling of unresponsive flash chip. 7.6.1.3
FLASH-17732 S2 A canister node goes into service state 574 after a battery is degraded. 7.6.1.3
FLASH-17478 S2 Upgrade failed with the message "Unable to communicate with Systemmgr." 7.6.1.3
FLASH-16051 S2 Fix interface error reporting. 7.6.1.3
FLASH-16787 S2 Systems created in 7.4.1.x are at risk of quorum devices having incorrect short lease expiry timeout and could lease expire on node failover. 7.6.1.3
HU01016 & HU01088 S2 Node warmstarts can occur when a port scan is received on port 1260. 7.6.1.3
HU01090 S2 Dual node warmstart due to issues with the call home process. 7.6.1.3
HU01091 S2 An issue with the CAW lock processing, under high SCSI-2 reservation workloads, may cause node warmstarts. 7.6.1.3 Hosts
FLASH-17324 S3 Call home configuration cannot complete if network infrastructure is not ready. 7.6.1.3
FLASH-17528 S3 Double allocation of memory without the necessary free space leads to memory allocation failure. 7.6.1.3
HU01028 S3 Processing of "lsnodebootdrive" output may adversely impact management GUI performance. 7.6.1.3 Graphical User Interface
HU01030 S3 Incremental FlashCopy® always requires a full copy. 7.6.1.3 FlashCopy
HU01042 S3 Single node warmstart due to 16Gb HBA firmware behaviour. 7.6.1.3
HU01052 S3 GUI operation with large numbers of volumes may adversely impact performance. 7.6.1.3
HU01064 S3 Management GUI incorrectly displays FC mappings that are part of GMCV relationships. 7.6.1.3 Graphical User Interface
HU01076 S3 Where hosts share volumes using a particular reservation method, if the maximum number of reservations is exceeded, this may result in a single node warmstart. 7.6.1.3 Hosts
HU01080 S3 Single node warmstart due to an I/O timeout in cache. 7.6.1.3
HU01081 S3 When removing multiple nodes from a cluster, a remaining node may warmstart. 7.6.1.3
HU01087 S3 With a partnership stopped at the remote site, the GUI stop button at the local site will be disabled. 7.6.1.3 Global Mirror, Metro Mirror
HU01092 S3 Systems which have undergone particular upgrade paths may be blocked from upgrading to version 7.6. 7.6.1.3 System Update
HU01094 S3 Single node warmstart due to rare resource locking contention. 7.6.1.3 Hosts
IT14922 S3 A memory issue, related to the e-mail feature, may cause nodes to warmstart or go offline. 7.6.1.3
FLASH-16473 S4 The Firmware Versions section of system report is missing management FPGA information. 7.6.1.3
FLASH-6171 S4 Canister nodes should reboot in certain error conditions instead of requiring customer action. 7.6.1.3
HU01051 S1 Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster. 7.6.1.1 Global Mirror
FLASH-16474 S2 Degraded components are still used in the thermal algorithm. 7.6.1.1
HU01027 S2 Single node warmstart, or unresponsive GUI, when creating compressed volumes. 7.6.1.1 Compression
HU01062 S2 A data recovery operation may occur when max replication delay is used and remote copy I/O is delayed. 7.6.1.1 Global Mirror, Global Mirror With Change Volumes
HU01067 S2 Node warmstarts in two I/O groups when using HyperSwap® on 7.6 releases. 7.6.1.1
HU01086 S2 SVC reports incorrect SCSI TPGS data in an 8 node cluster causing host multi-pathing software to receive errors which may result in host outages. 7.6.1.1 Hosts
HU01090 S2 Dual node warmstart due to issue with call home process. 7.6.1.1
FLASH-15719 S3 In certain cases, "chencryption" and "lsdrive" commands will restart the cluster. 7.6.1.1
HU01073 S3 SVC CG8 nodes have internal SSDs but these are not displayed in the "internal storage" page. 7.6.1.1
FLASH-13735 S5 The GUI for powering off a SVC node produces a misleading message. 7.6.1.1 Graphical User Interface
FLASH-15498 S5 The GUI system view is missing some colons for Power Supply information. 7.6.1.1 Graphical User Interface
FLASH-16111 S1 Node failover during upgrade of replacement module can cause all modules to be reconfigured resulting in data loss. 7.6.0.4
FLASH-15745 S2 Inquiry command after LUN reset incorrectly returned Unit Attention. 7.6.0.4
FLASH-15671 S2 An MDisk goes offline when writing to the end of memory with a fully allocated array while a rebuild is in progress. 7.6.0.4
FLASH-15360 S2 Fault the interface when an FPGA buffer allocates or frees twice. 7.6.0.4
FLASH-15448 S2 Marking the event "Array mdisk is not protected by sufficient spares" as fixed should only fix the event in a system with three flash modules. 7.6.0.4
FLASH-15204 S2 DMP does not properly format Flash Module after replacement. 7.6.0.4
FLASH-15287 S2 A flashcard goes unresponsive when array certify is taking corrective action due to an error reporting issue. 7.6.0.4
FLASH-16210 S2 A DMP attempted to format the incorrect drive when fixing error code 1690. 7.6.0.4
HU01027 S2 Single node warmstart, or unresponsive GUI, when creating compressed volumes. 7.6.0.4 Compression
HU01056 S2 Both nodes in the same I/O group warmstart when using VVols. 7.6.0.4 VVols
HU01069 S2 Node warmstart after upgrade to v7.6. 7.6.0.4 System Update
FLASH-15975 S3 Enable update to 7.6 firmware from 7.5.1.2 and 7.4.1.3 releases. 7.6.0.4 System Update
FLASH-15866 S3 Node 574 error on reboot. 7.6.0.4
FLASH-15861 S3 A rare scenario finds sequential fail logic to be too aggressive. 7.6.0.4
FLASH-15488 S3 Both nodes warmstart during power up. 7.6.0.4
FLASH-15326 S3 Interface improvements. 7.6.0.4
FLASH-15254 S3 Improve signal margin on interface to/from RAID controller links. 7.6.0.4
FLASH-15188 S3 Sending test email wizard fails with "CMMVC5709E [-1] is not a supported parameter." 7.6.0.4 System Monitoring
HU01029 S3 Where a boot drive has been replaced with a new unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP, or logs into the node via the service GUI. Additionally, where the node is the configuration node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI. 7.6.0.4
FLASH-15992 S4 Internal service errors found upon system set up. 7.6.0.4
FLASH-16122 S4 The DMP for event 085012, error code 1067 has identification and description errors. 7.6.0.4
FLASH-15468 S4 The wrong product is given as example for the test utility and update files in the GUI. 7.6.0.4 Graphical User Interface
FLASH-15123 S5 The field "Description" on the GUI easy setup has an inconsistent name on different panels. 7.6.0.4 Graphical User Interface
FLASH-15672 S1 Data stalls are possible with a fully allocated array while rebuild is in progress. 7.6.0.3
HU01034 S3 Single node warmstart stalls upgrade for systems presenting storage to large VMware configurations using SCSI CAW. 7.6.0.3 System Update
HU01032 S2 Batteries going on and offline can take node offline. 7.6.0.2
HU00926 & HU00989 S2 Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts. 7.6.0.2
FLASH-14454 S3 Potential I/O stall for more than 30 seconds during upgrade. 7.6.0.2
FLASH-15172 S4 Pools and I/O groups do not appear in the drop down when creating a HyperSwap volume if a long site name is configured. 7.6.0.2
FLASH-15328 S4 CLI help files were not translated in the 7.6.0.1 release. 7.6.0.2
FLASH-13706 S1 Potential undetected data corruption may occur due to a low probability race condition. The race condition has been observed on a system with a specific workload that is doing 1 to 2 GB/s of read operations with 250 MB/s of write operations. The write operations were less than 4K in size. 7.6.0.1
HU00733 S1 Stopping a remote copy with -access resulted in a node warmstart after a recoveryvdiskbysystem command. 7.6.0.1 Global Mirror
HU00749 S1 Multiple node warmstarts in I/O group after starting Remote Copy. 7.6.0.1 Global Mirror
HU00757 S1 Multiple node warmstarts when removing a Global Mirror relationship with secondary volume that has been offline. 7.6.0.1 Global Mirror
HU00819 S1 Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues. 7.6.0.1 Global Mirror
HU01004 S1 Multiple node warmstarts when space efficient volumes are running out of capacity. 7.6.0.1 Cache
FLASH-13779 S2 Repeated interface panics causes an incorrect failure. 7.6.0.1
HU00740 S2 Increased CPU utilisation on nodes from Easy Tier processing. 7.6.0.1 Easy Tier
HU00823 S2 Node warmstart due to inconsistent Easy Tier status when Easy Tier is disabled on all managed disk groups. 7.6.0.1 Easy Tier
HU00827 S2 Both nodes in a single I/O group of a multi I/O group system can warmstart due to misallocation of volume stats entries. 7.6.0.1
HU00838 S2 FlashCopy volume offline due to a cache flush issue. 7.6.0.1 Global Mirror With Change Volumes
HU00900 S2 SVC FC driver warmstarts when it receives an unsupported but valid FC command. 7.6.0.1 Hosts
HU00908 S2 Battery can charge too quickly on reconditioning and take node offline. 7.6.0.1
HU00909 S2 Single node warmstart when removing an MDisk group that had Easy Tier activated. 7.6.0.1 Easy Tier
HU00915 S2 Loss of access to data when removing volumes associated with a GMCV relationship. 7.6.0.1 Global Mirror With Change Volumes
HU00991 S2 Performance impact on read pre-fetch workloads due to cache pre-fetch changes. 7.6.0.1
HU00999 S2 FlashCopy volumes may go offline during an upgrade. 7.6.0.1 FlashCopy
HU01001 S2 CCU checker causes both nodes to warmstart. 7.6.0.1 System Update
HU01002 S2 16Gb HBA causes multiple node warmstarts when unexpected FC frame content received. 7.6.0.1 Reliability Availability Serviceability
HU01003 S2 Increase in workload may cause nodes to warmstart. 7.6.0.1 Cache
HU01005 S2 Unable to remove ghost MDisks. 7.6.0.1 Backend Storage
HU01434 S2 A node port can become excluded, when its login status changes, leading to a load imbalance across available local ports. 7.6.0.1 Backend Storage
HU00935 S3 A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time. 7.6.0.1 FlashCopy
FLASH-12079 S3 A node timeout causes the Flash to fail. 7.6.0.1
FLASH-13052 S3 FC interface timeout issue results in a temporary error. 7.6.0.1
FLASH-14926 S3 Unexpected node warmstart. 7.6.0.1
FLASH-13201 S3 Failed encryption validation causes a VM timeout. 7.6.0.1
FLASH-12463 S3 Flash failure due to Gateway to node CRC errors. 7.6.0.1
FLASH-11546 S3 Flash card failures are a result of an unexpected power off. 7.6.0.1
FLASH-13325 S3 Mitigate flashcard encryption error. 7.6.0.1
FLASH-12384 S3 Performance improvements. 7.6.0.1
HU00470 S3 Single node warmstart on login attempt with incorrect password issued. 7.6.0.1
HU00536 S3 When stopping a GMCV relationship, the clean-up process at the secondary site can hang to the point of causing a primary node warmstart. 7.6.0.1 FlashCopy
HU00756 S3 SVC reports invalid counters causing TPC to skip sample periods. 7.6.0.1 System Monitoring
HU00794 S3 Hang up of a GM I/O stream can affect MM I/O in another remote copy stream. 7.6.0.1 Global Mirror, Metro Mirror
HU00828 S3 FlashCopy performance degraded. 7.6.0.1 FlashCopy
HU00899 S3 Node warmstart observed when 16 Gb FC or 10 Gb FCoE adapter detects heavy network congestion. 7.6.0.1 Hosts
HU00903 S3 Emulex firmware paused causes single node warmstart. 7.6.0.1
HU00923 S3 Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters. 7.6.0.1 Reliability Availability Serviceability
HU00973 S3 Single node warmstart when concurrently creating new volume host mappings. 7.6.0.1 Hosts
HU00998 S3 Support for Fujitsu Eternus DX100 S3 controller. 7.6.0.1 Backend Storage
HU01000 S3 SNMP and Call home stops working when node reboots. 7.6.0.1 System Monitoring
HU01006 S3 Volume hosted on Hitachi controllers show high latency due to high I/O concurrency. 7.6.0.1 Backend Storage
HU01007 S3 When a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, due to an issue within FlashCopy, the other node in that I/O group may also warmstart. 7.6.0.1 FlashCopy, Global Mirror With Change Volumes
HU01008 S3 Single node warmstart during code upgrade. 7.6.0.1 System Update
IC85931 S3 When copying statistics files between nodes, the absence of the exact file requested presents event ID 980440. 7.6.0.1
IT10251 S3 Freeze time update delayed after reduction of cycle period. 7.6.0.1 Global Mirror With Change Volumes
HU00732 S3 Single node warmstart due to hung RC recovery caused by pinned write I/Os on incorrect queue. 7.6.0.1
HU00975 S3 Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks. 7.6.0.1 FlashCopy
FLASH-13754 S4 The upgrade utility does not report a failed drive. 7.6.0.1 System Update
FLASH-14716 S4 Service manager panic experienced during upgrade. 7.6.0.1 System Update
FLASH-13091 S4 Properly cabled fixed configurations always raise a false "Minimum cabling rules not met" alert. 7.6.0.1
FLASH-12906 S4 Stalled upgrade reports an upgrade failure error even after the upgrade completes successfully. 7.6.0.1 System Update
FLASH-14952 S4 Incomplete "chenclosurecanister" command can cause the nodes to warmstart. 7.6.0.1
FLASH-13654 S4 A node warmstart is experienced during upgrade. 7.6.0.1 System Update
FLASH-13576 S5 A mandatory parameter is not included in the "ping" CLI help. 7.6.0.1

4. Supported Upgrade Paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.
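
When automating upgrade planning, the supported starting levels can be encoded as data and checked before applying an update package. The Python sketch below is illustrative only: the SUPPORTED_FROM entries are placeholders loosely based on FLASH-15975 above ("Enable update to 7.6 firmware from 7.5.1.2 and 7.4.1.3 releases"), and the authoritative matrix is the Cross Reference page.

    # Placeholder data only; consult the Concurrent Compatibility and Code
    # Cross Reference for Spectrum Virtualize page for the real matrix.
    SUPPORTED_FROM = {
        "7.6": ["7.5.1.2", "7.4.1.3"],  # loosely based on FLASH-15975 above
    }

    def upgrade_supported(current_level: str, target_stream: str) -> bool:
        """True when the current code level is a recorded starting point
        for the target stream (placeholder data)."""
        return current_level in SUPPORTED_FROM.get(target_stream, [])

    print(upgrade_supported("7.5.1.2", "7.6"))  # True with the placeholder data
    print(upgrade_supported("7.3.0.0", "7.6"))  # False: check the matrix first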

5. Useful Links

Support Website: IBM Knowledge Center
IBM FlashSystem Fix Central: V9000
Updating the system: IBM Knowledge Center
IBM Redbooks: Redbooks
Contacts: IBM Planetwide