Release Note for V9000 Family Block Storage Products


This release note applies to the V9000 family of block storage products. It is the release note for the 8.3 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.3.1.0 and 8.3.1.10. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 8th December 2023.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Supported upgrade paths
  5. Useful Links

1. New Features

The following new features have been introduced in the 8.3.1 release:

The following new feature has been introduced in the 8.3.1.2 release:

The following new features have been introduced in the 8.3.1.3 release:

2. Known Issues and Restrictions

Each entry below gives the details of the issue or restriction and the release in which it was introduced.

Due to security improvements, when a customer upgrades to v8.3.1 for the first time, event 3330 ("The superuser password must be changed") will be seen in the Event Log.

The prior superuser password will continue to be accepted until it is changed. Customers can delay changing the superuser password until it is convenient.

The action of changing the password will enable the improved security. If the customer wishes, the superuser password can be set to the same password.
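
As an illustrative sketch only (the exact syntax can vary by code level), the superuser password can be changed from the CLI with the chuser command, for example:

  svctask chuser -password <new_password> superuser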

Introduced: 8.3.1.0

Customers using Spectrum Control v5.3.2, or earlier, may notice some discrepancies between the attributes displayed in Spectrum Control and the related attributes shown in the Spectrum Virtualize GUI.

This issue will be resolved by a future release of Spectrum Control.

Introduced: 8.3.1.0

Under some circumstances, the "Monitoring > System" GUI screen may not load completely.

This is a known issue that will be resolved in a future PTF.

Introduced: 8.3.1.0

Customers using iSER-attached hosts with Mellanox 25G adapters should be aware that IPv6 sessions will not fail over, for example during a cluster upgrade.

This is a known issue that may be lifted in a future PTF.

Introduced: 8.3.0.0

Systems, with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced: n/a

If an update stalls or fails, contact IBM Support for further assistance.

Introduced: n/a
The following restrictions were valid but have now been lifted:

Recurring node warmstarts on systems with DRP that have been upgraded to 8.3.1.7 or 8.3.1.8.

This issue has been resolved by APAR HU02485.

Introduced: 8.3.1.9

Systems with 92-drive expansion enclosures can experience fan failures after the software upgrade. This is because system firmware has been updated to detect fans that are not operating correctly.


Introduced: 8.3.1.4

There is an issue where the Spectrum Virtualize REST API output for action commands is in the standard CLI format and not JSON. This will result in failures of the IBM Ansible modules when creating objects.
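
The sketch below is illustrative only, assuming the standard REST endpoint on port 7443 and placeholder values; it shows the affected interaction, where the body returned for an action command is CLI-formatted text rather than JSON, which the IBM Ansible modules cannot parse.

  # Authenticate to obtain a token (placeholder credentials)
  curl -k -X POST -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: <password>' https://<cluster_ip>:7443/rest/auth

  # Invoke an action command (required parameters omitted); on affected levels the
  # response body is returned in CLI format instead of JSON
  curl -k -X POST -H 'X-Auth-Token: <token>' https://<cluster_ip>:7443/rest/<action_command>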

This issue has been resolved in PTF v8.3.1.6.

Introduced: 8.3.1.3

Due to an issue in IP Replication, customers using this feature should not upgrade to v8.3.1.3 or later.

This issue has been resolved by APAR HU02340 in PTF v8.3.1.4.

Introduced: 8.3.1.3

Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256.

This issue has been resolved by APARs HU02303 & HU02305 in PTF v8.3.1.3.

Introduced: 8.3.1.0

When upgrading from v8.2.1, or earlier, to v8.3.0, or later, the CLI and GUI may incorrectly show all hosts offline.

Using the CLI svcinfo lshost <host id number> command for individual hosts will show the host is actually online.
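
For example (illustrative, using a hypothetical host id of 0):

  svcinfo lshost 0

The detailed view will report the host as online even though the summary listing shows it offline.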

This issue has been resolved by APAR HU02281 in PTF v8.3.1.3.

Introduced: 8.3.1.0

Customers using a Cisco Fibre Channel network running NX-OS v8.3(2), or earlier, must not upgrade to Spectrum Virtualize v8.3.1 or later.

Please refer to the Flash for details.

This issue has been resolved by APAR HU02182 in PTF v8.3.1.2.

Introduced: 8.3.1.0

Customers with systems running v8.3.0.0, or earlier, using deduplication cannot upgrade to v8.3.0.1, or later, due to APAR HU02162.

This issue has been resolved by APAR HU02162 in PTF v8.3.1.3.

Introduced: 8.3.0.1

Validation in the Upload Support Package feature will reject the new case number format in the PMR field.

This issue has been resolved by APAR HU02392.

Introduced: 7.8.1.0

3. Issues Resolved

This release contains all of the fixes included in the 8.2.1.4 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier | Link for additional information | Resolved in
CVE-2023-2597 7065011 8.3.1.10
CVE-2023-43042 7064976 8.3.1.10
CVE-2022-39167 6622025 8.3.1.9
CVE-2018-25032 6622021 8.3.1.8
CVE-2022-0778 6622017 8.3.1.7
CVE-2021-35603 6622019 8.3.1.7
CVE-2021-35550 6622019 8.3.1.7
CVE-2021-38969 6584337 8.3.1.7
CVE-2021-29873 6497111 8.3.1.6
CVE-2020-2781 6445063 8.3.1.3
CVE-2020-13935 6445063 8.3.1.3
CVE-2020-14577 6445063 8.3.1.3
CVE-2020-14578 6445063 8.3.1.3
CVE-2020-14579 6445063 8.3.1.3
CVE-2020-4686 6260199 8.3.1.2
CVE-2019-5544 6250889 8.3.1.0
CVE-2019-2964 6250887 8.3.1.0
CVE-2019-2989 6250887 8.3.1.0
CVE-2018-12404 6250885 8.3.1.0

3.2 APARs and Flashes Resolved

Reference | Severity | Description | Resolved in | Feature Tags
HU02467 S1 When one node disappears from the cluster the surviving node can be unable to achieve quorum allegiance in a timely manner causing it to lease expire 8.3.1.9 Quorum
HU02471 S1 After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue 8.3.1.9 FlashCopy, Global Mirror with Change Volumes
HU02561 S1 If there are a high number of FC mappings sharing the same target, the internal array that is used to track the FC mapping is mishandled, thereby causing it to overrun. This will cause a cluster wide warmstart to occur 8.3.1.9 FlashCopy
IT41088 S1 Systems with low memory that have a large number of RAID arrays that are resyncing can cause a system to run out of RAID rebuild control blocks 8.3.1.9 RAID
HU02010 S2 A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance 8.3.1.9 RAID
HU02485 S2 Recurring node warmstarts on systems with DRP that have been upgraded to 8.3.1.7 or 8.3.1.8 8.3.1.9 Data Reduction Pools, System Update
IT41835 S2 A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type 8.3.1.9 Drives
HU02306 S3 An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline 8.3.1.9 Hosts
HU02364 S3 False 989001 Managed Disk Group space warnings can be generated 8.3.1.9 System Monitoring
HU02367 S3 An issue with how RAID handles drive failures may lead to a node warmstart 8.3.1.9 RAID
HU02372 S3 Host SAS port 4 is missing from the GUI view on some systems. 8.3.1.9 GUI, SAS
HU02391 S3 An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server 8.3.1.9 Graphical User Interface
HU02443 S3 An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart 8.3.1.9 RAID
HU02453 S3 It may not be possible to connect to GUI or CLI without a restart of the Tomcat server 8.3.1.9 Command Line Interface, Graphical User Interface
HU02474 S3 An SFP failure can cause a node warmstart 8.3.1.9 Reliability Availability Serviceability
HU02499 S3 A pop-up with the message 'The server was unable to process the request' may occur due to an invalid timestamp in the file used to provide the pop-up reminder 8.3.1.9 GUI
HU02564 S3 The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct 8.3.1.9 Distributed RAID
HU02593 S3 NVMe drive is incorrectly reporting end of life due to flash degradation 8.3.1.9 Drives
HU02409 S1 If rmhost command, with -force, is executed for a MS Windows server then an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive 8.3.1.7 iSCSI, Hosts
HU02410 S1 A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery 8.3.1.7 Hot Spare Node
HU02455 S1 After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery 8.3.1.7 3-Site
HU02343 S2 For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts 8.3.1.7 Backend Storage
HU02466 S2 An issue in the handling of drive failures can result in multiple node warmstarts 8.3.1.7 RAID
HU01209 S3 It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart 8.3.1.7 Storage Virtualisation
HU02433 S3 When a BIOS upgrade occurs excessive tracefile entries can be generated 8.3.1.7 System Update
HU02451 S3 An incorrect IP Quorum lease extension setting can lead to a node warmstart 8.3.1.7 IP Quorum
IT33996 S3 An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart 8.3.1.7 RAID
HU02296 S1 HIPER: The zero page functionality can become corrupt causing a volume to be initialised with non-zero data 8.3.1.6 Storage Virtualisation
HU02327 S1 HIPER: Using addvdiskcopy in conjunction with expandvdisk with format may result in the original volume being overwritten by the new copy, producing blank copies.  For more details refer to the following Flash  8.3.1.6 Volume Mirroring
HU02384 S1 HIPER: In AC3 systems an inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access 8.3.1.6 Reliability Availability Serviceability
HU02400 S1 HIPER: A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area 8.3.1.6 Storage Virtualisation
HU02418 S1 HIPER: During a DRAID array rebuild data can be written to an incorrect location.  For more details refer to the following Flash  8.3.1.6 Distributed RAID, RAID
DT112601 S1 Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery 8.3.1.6 Storage Virtualisation
HU02226 S1 Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster 8.3.1.6 Data Reduction Pools
HU02342 S1 Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state 8.3.1.6 RAID
HU02393 S1 Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group 8.3.1.6 Storage Virtualisation
HU02397 S1 A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline 8.3.1.6 Data Reduction Pools
HU02401 S1 EasyTier can move extents between identical mdisks until one runs out of space 8.3.1.6 EasyTier
HU02406 S1 An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access.  For more details refer to the following Flash  8.3.1.6 Interoperability
HU02414 S1 Under specific sequence and timing of circumstances the garbage collection process can timeout and take a pool offline temporarily 8.3.1.6 Data Reduction Pools
HU02429 S1 System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI 8.3.1.6 System Monitoring
HU02326 S2 Delays in passing messages between nodes in an I/O group can adversely impact write performance 8.3.1.6 Performance
HU02345 S2 When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance 8.3.1.6 Metro Mirror, HyperSwap
HU02362 S2 When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted 8.3.1.6 RAID
HU02376 S2 FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes 8.3.1.6 FlashCopy
HU02377 S2 A race condition in DRP may stop IO being processed leading to timeouts 8.3.1.6 Data Reduction Pools
HU02422 S2 GUI performance can be degraded when displaying large numbers of volumes or other objects 8.3.1.6 Graphical User Interface
IT36792 S2 EasyTier can select a default performance profile for a drive which could cause too much hot data to be moved to lower tiers 8.3.1.6 EasyTier
IT38015 S2 During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts 8.3.1.6 RAID
HU02331 S3 Due to a threshold issue an error code 3400 may appear too often in the event log 8.3.1.6 Compression
HU02332 & HU02336 S3 When an I/O is received, from a host, with invalid or inconsistent SCSI data but a good checksum it may cause a node warmstart 8.3.1.6 Hosts
HU02366 S3 Slow internal resource reclamation by the RAID component can cause a node warmstart 8.3.1.6 RAID
HU02373 S3 Following a Data Reduction Pool recovery an internal flag setting may trigger random 1920 errors in the Event Log 8.3.1.6 Data Reduction Pools
HU02375 S3 An issue in how the GUI handles volume data can adversely impact its responsiveness 8.3.1.6 Graphical User Interface
HU02399 S3 Boot drives may be reported as having invalid state by the GUI, even though they are online 8.3.1.6 Graphical User Interface
HU02419 S3 During creation of a drive FRU id the resulting unique number can contain a space character which can lead to CLI commands, that return this value, presenting it as a truncated string 8.3.1.6 Drives, Command Line Interface
HU02424 S3 Frequent GUI refreshing adversely impacts usability on some screens 8.3.1.6 Graphical User Interface
HU02360 S2 Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to the following Troubleshooting guide. 8.3.1.5 System Monitoring
HU02261 S1 HIPER: A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag.  For more details refer to the following Flash  8.3.1.4 Data Reduction Pools
HU02338 S1 HIPER: An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image 8.3.1.4 FlashCopy
HU02340 S1 HIPER: High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster 8.3.1.4 IP Replication
HU02282 S1 After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline 8.3.1.4 Cache
HU02315 S1 Failover for VMware iSER hosts may pause I/O for more than 120 seconds 8.3.1.4 Hosts
HU02321 S1 Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries 8.3.1.4 iSCSI
HU02322 S1 A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data 8.3.1.4 Data Reduction Pools
HU02323 S1 Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data 8.3.1.4 Data Reduction Pools
HU02153 S2 Fabric or host issues can cause aborted IOs to block the port throttle queue leading to adverse performance that is cleared by a node warmstart 8.3.1.4 Hosts
HU02227 S2 Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline 8.3.1.4 Compression
HU02311 S2 An issue in volume copy flushing may lead to higher than expected write cache delays 8.3.1.4 Cache
HU02317 S2 A DRAID expansion can stall shortly after it is initiated 8.3.1.4 Distributed RAID
HU02095 S3 The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI 8.3.1.4 Graphical User Interface
HU02280 S3 Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown 8.3.1.4 System Monitoring
HU02292 & HU02308 S3 The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart 8.3.1.4 Global Mirror
HU02277 S1 HIPER (Highly Pervasive): RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss.  For more details refer to the following  Flash  8.3.1.3 RAID
HU02058 S1 Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery 8.3.1.3 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU02162 S1 When a node warmstart occurs during an upgrade from v8.3.0.0, or earlier, to 8.3.0.1, or later,  with dedup enabled it can lead to repeated node warmstarts across the cluster necessitating a Tier 3 recovery 8.3.1.3 Data Reduction Pools
HU02180 S1 When a svctask restorefcmap command is run on a VVol that is the target of another FlashCopy mapping both nodes in an I/O group may warmstart 8.3.1.3 vVols
HU02184 S1 When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command.  This command is not supported and may cause multiple node asserts, possibly cluster-wide 8.3.1.3 Backend Storage
HU02196 & HU02253 S1 A particular sequence of internode messaging delays can lead to a cluster wide lease expiry 8.3.1.3 Reliability Availability Serviceability
HU02210 S1 There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume 8.3.1.3 HyperSwap
HU02262 S1 Entering the CLI "applydrivesoftware -cancel" command may result in cluster-wide warmstarts 8.3.1.3 Drives
HU02266 S1 An issue in auto-expand can cause expansion to fail and the volume to be taken offline 8.3.1.3 Thin Provisioning
HU02295 S1 When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery 8.3.1.3 System Update
HU02390 S1 A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes 8.3.1.3 REST API
HU02156 S2 Global Mirror environments may experience more frequent 1920 events due to writedone message queuing 8.3.1.3 Global Mirror
HU02164 S2 An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted 8.3.1.3 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU02194 S2 Password reset via USB drive does not work as expected and the user is not able to log in to the Management or Service Assistant GUI with the new password 8.3.1.3 Reliability Availability Serviceability
HU02201 & HU02221 S2 Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors 8.3.1.3 Drives
HU02248 S2 After upgrade from v8.3.0, or earlier, to v8.3.1, or later, the system may be unable to perform LDAP authentication 8.3.1.3 LDAP
II14767 S2 An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency 8.3.1.3 Cache
HU02208 S3 An issue with the handling of files by quorum can lead to a node warmstart 8.3.1.3 Quorum
HU02142 S3 It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing 8.3.1.3 Distributed RAID
HU02241 S3 IP Replication can fail to create IP partnerships via the secondary cluster management IP 8.3.1.3 IP Replication
HU02244 S3 False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log 8.3.1.3 System Monitoring
HU02251 S3 A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence 8.3.1.3 iSCSI, Hosts
HU02281 S3 When upgrading from v8.2.1, or earlier, to v8.3.0, or later, the CLI and GUI may incorrectly  show all hosts offline. Checks from the host perspective will show them to be online 8.3.1.3 Hosts, System Update
HU02303 & HU02305 S3 Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume  and the ignored volumes have an id greater than 256 8.3.1.3 Hosts
HU02358 S3 An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart 8.3.1.3 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
IT32338 S3 Testing LDAP Authentication fails if username & password are supplied 8.3.1.3 LDAP
HU02182 S1 HIPER (Highly Pervasive): Cisco MDS switches with old firmware may refuse port logins leading to a loss of access.  For more details refer to the following Flash 8.3.1.2 System Update, Hosts, Backend Storage
HU02212 S1 HIPER (Highly Pervasive): Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy.  For more details refer to the following Flash  8.3.1.2 Global Mirror with Change Volumes, HyperSwap
HU02234 S1 HIPER (Highly Pervasive): An issue in HyperSwap Read Passthrough can cause multiple node warmstarts with the possibility of a loss of access to data 8.3.1.2 HyperSwap
HU02237 S1 HIPER (Highly Pervasive): Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption.  For more details refer to the following Flash  8.3.1.2 RAID
HU02238 S1 HIPER (Highly Pervasive): Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations.  For more details refer to the following Flash  8.3.1.2 FlashCopy, Global Mirror, Metro Mirror
HU01968 & HU02215 S1 An upgrade may fail due to corrupt hardened data in a node 8.3.1.2 System Update
HU02106 S1 Multiple node warmstarts, in quick succession, can cause the partner node to lease expire 8.3.1.2 Quorum, IP Quorum
HU02135 S1 Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery 8.3.1.2 iSCSI
HU02154 S1 If a node is rebooted, when remote support is enabled, then all other nodes will warmstart 8.3.1.2 Support Remote Assist
HU02202 S1 During a migratevdisk operation, if Mdisk tiers in the target pool do not match those in the source pool then a Tier 2 recovery may occur 8.3.1.2 EasyTier
HU02207 S1 If hosts send more concurrent iSCSI commands than a node can handle then it may enter a service state (error 578) 8.3.1.2 iSCSI
HU02216 S1 When migrating or deleting a Change Volume of a RC relationship the system might be exposed to a Tier 2 (Automatic Cluster Restart) recovery. When deleting the Change Volumes, the T2 will re-occur which will place the nodes into a 564 state. The migration of the Change Volume will trigger a T2 and recover. For more details refer to the following Flash  8.3.1.2 Global Mirror with Change Volumes
HU02222 S1 Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a Hyperswap volume, then there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to the following Flash  8.3.1.2 Global Mirror with Change Volumes
HU02242 S1 An iSCSI IP address, with a gateway argument of 0.0.0.0, is not properly assigned to each ethernet port and any previously set iSCSI IP address may be retained 8.3.1.2 iSCSI
IT32631 S1 Whilst upgrading the firmware for multiple drives an issue in the firmware checking can initiate a Tier 2 recovery 8.3.1.2 Drives
HU02128 S2 Deduplication volume lookup can over utilise resources causing an adverse performance impact 8.3.1.2 Data Reduction Pools, Deduplication
HU02168 S2 In the event of unexpected power loss a node may not save system data 8.3.1.2 Reliability Availability Serviceability
HU02203 S2 When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure 8.3.1.2 Drives
HU02204 S2 After a Tier 2 recovery a node may fail to rejoin the cluster 8.3.1.2 Reliability Availability Serviceability
HU01931 S3 Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node 8.3.1.2 Performance
HU02015 S3 Some read-intensive SSDs are incorrectly reporting wear rate thresholds generating unnecessary errors in the Event Log 8.3.1.2 Drives
HU02091 S3 Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log 8.3.1.2 Licensing
HU02137 S3 An issue with support for target resets in Nimble Storage controllers may cause a node warmstart 8.3.1.2 Backend Storage
HU02175 S3 A GUI issue can cause drive counts to be inconsistent and crash browsers 8.3.1.2 Graphical User Interface
HU02178 S3 IP Quorum hosts may not be shown in lsquorum command output 8.3.1.2 IP Quorum
HU02224 S3 When the RAID component fails to free up memory rapidly enough for I/O processing there can be a single node warmstart 8.3.1.2 RAID
HU02341 S3 Cloud Callhome can become disabled due to an internal issue. A related error may not be recorded in the event log 8.3.1.2 System Monitoring
IT32440 S3 Under heavy I/O workload the processing of deduplicated I/O may cause a single node warmstart 8.3.1.2 Deduplication
IT32519 S3 Changing an LDAP user's password, in the directory, whilst this user is logged in to the GUI of a Spectrum Virtualize system may result in an account lockout in the directory, depending on the account lockout policy configured for the directory. Existing CLI logins via SSH are not affected 8.3.1.2 LDAP
HU01894 S1 HIPER (Highly Pervasive): After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts, may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data 8.3.1.0 Hosts
HU02075 S1 HIPER (Highly Pervasive): A FlashCopy snapshot, sourced from the target of an Incremental FlashCopy map, can sometimes, temporarily, present incorrect data to the host 8.3.1.0 FlashCopy
HU02104 S1 HIPER (Highly Pervasive): An issue in the RAID component, in the presence of very high I/O workload and the exhaustion of cache resources, can see a deadlock condition occurring which prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six minute period, to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to the following Flash  8.3.1.0 RAID
HU02141 S1 HIPER (Highly Pervasive): An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log.  For more details refer to the following Flash  8.3.1.0 Global Mirror
HU02205 S1 HIPER (Highly Pervasive): Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship 8.3.1.0 FlashCopy, Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01967 S1 When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart 8.3.1.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01970 S1 When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted, with -force, then all nodes may repeatedly warmstart 8.3.1.0 Global Mirror with Change Volumes
HU02017 S1 Unstable inter-site links may cause a system-wide lease expiry leaving all nodes in a service state - one with error 564 and others with error 551 8.3.1.0 Reliability Availability Serviceability
HU02054 S1 The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery 8.3.1.0 System Monitoring
HU02063 S1 HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB 8.3.1.0 HyperSwap
HU02065 S1 Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline 8.3.1.0 Data Reduction Pools
HU02066 S1 If, during large (>8KB) reads from a host, a medium error is encountered, on backend storage, then there may be node warmstarts, with the possibility of a loss of access to data 8.3.1.0 Data Reduction Pools
HU02108 S1 Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data 8.3.1.0 Data Reduction Pools
HU02109 S1 Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers 8.3.1.0 Backend Storage, SCSI Unmap
HU02115 S1 Attempting to upgrade all drive firmware, with an inadequate drive package, may lead to multiple node warmstarts, with the possibility of a loss of access to data 8.3.1.0 Drives
HU02138 S1 An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool 8.3.1.0 Data Reduction Pools
HU02141 S1 An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log 8.3.1.0 Global Mirror
HU02152 S1 Due to an issue in RAID there may be I/O timeouts, leading to node warmstarts, with the possibility of a loss of access to data 8.3.1.0 RAID
HU02197 S1 Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery 8.3.1.0 FlashCopy
IT29867 S1 If a change volume, for a remote copy relationship, in a consistency group, runs out of space whilst properties, of the consistency group, are being changed then a Tier 2 recovery may occur 8.3.1.0 Global Mirror with Change Volumes
IT31113 S1 After a manual power off and on, of a system, both nodes, in an I/O group, may repeatedly assert into a service state 8.3.1.0 RAID
IT31300 S1 When a snap collection reads the status of PCI devices a CPU can be stalled leading to a cluster-wide lease expiry 8.3.1.0 Support Data Collection
HU01890 S2 FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a 'copying' state 8.3.1.0 Global Mirror with Change Volumes
HU01923 S2 An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts 8.3.1.0 Global Mirror
HU01964 S2 An issue in the cache component may limit I/O throughput 8.3.1.0 Cache
HU02037 S2 A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped 8.3.1.0 FlashCopy
HU02132 S2 Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts 8.3.1.0 Thin Provisioning
HU02143 S2 The performance profile, for some enterprise tier drives, may not correctly match the drive's capabilities, leading to that tier being overdriven 8.3.1.0 EasyTier
HU02169 S2 After a Tier 3 recovery, different nodes may report different UIDs for a subset of volumes 8.3.1.0 Hosts
HU02206 S2 Garbage collection can operate at inappropriate times, generating inefficient backend workload, adversely affecting flash drive write endurance and overloading nearline drives 8.3.1.0 Data Reduction Pools
HU01746 S3 Adding a volume copy may deactivate any associated MDisk throttling 8.3.1.0 Throttling
HU01796 S3 On AC3 systems the Battery Status LED may not illuminate 8.3.1.0 System Monitoring
HU01891 S3 An issue in DRAID grain process scheduling can lead to a duplicate entry condition that is cleared by a node warmstart 8.3.1.0 Distributed RAID
HU01943 S3 Stopping a GMCV relationship with the -access flag may result in more processing than is required 8.3.1.0 Global Mirror with Change Volumes
HU01953 S3 Following a Data Reduction Pool recovery, in some circumstances, it may not be possible to create new volumes, via the GUI, due to an incorrect value being returned from the lsmdiskgrp command 8.3.1.0 Graphical User Interface
HU02021 S3 Disabling garbage collection may cause a node warmstart 8.3.1.0 Data Reduction Pools
HU02023 S3 An issue with the processing of FlashCopy map commands may result in a single node warmstart 8.3.1.0 Command Line Interface, FlashCopy
HU02026 S3 A timing window issue in the processing of FlashCopy status listing commands can cause a node warmstart 8.3.1.0 Command Line Interface, FlashCopy
HU02040 S3 VPD contains the incorrect FRU part number for the SAS adapter 8.3.1.0 Reliability Availability Serviceability
HU02048 S3 An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart 8.3.1.0 Hosts
HU02052 S3 During an upgrade an issue, with buffer handling, in Data Reduction Pools can lead to a node warmstart 8.3.1.0 Data Reduction Pools
HU02062 S3 An issue, with node index numbers for I/O groups, when using 32Gb HBAs may result in host ports, incorrectly, being reported offline 8.3.1.0 Hosts
HU02085 S3 Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios 8.3.1.0 Global Mirror
HU02102 S3 Excessive processing time required for FlashCopy bitmap operations, associated with large (>20TB) Global Mirror change volumes, may lead to a node warmstart 8.3.1.0 Global Mirror with Change Volumes
HU02111 S3 An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart 8.3.1.0 Data Reduction Pools
HU02126 S3 There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node 8.3.1.0 Command Line Interface
HU02146 S3 An issue in inter-node message handling may cause a node warmstart 8.3.1.0 Reliability Availability Serviceability
HU02157 S3 Issuing a mkdistributedarray command may result in a node warmstart 8.3.1.0 Distributed RAID
HU02173 S3 During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition. 8.3.1.0 Reliability Availability Serviceability
HU02183 S3 An issue in the way inter-node communication is handled can lead to a node warmstart 8.3.1.0 Reliability Availability Serviceability
HU02190 S3 Error 1046 not triggering a Call Home even though it is a hardware fault 8.3.1.0 System Monitoring
HU02214 S3 Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart 8.3.1.0 Data Reduction Pools
HU02285 S3 Single node warmstart due to cache resource allocation issue  8.3.1.0 Cache
IT21896 S3 Where encryption keys have been lost it will not be possible to remove an empty MDisk group 8.3.1.0 Encryption
IT30306 S3 A timing issue in callhome function initialisation may cause a node warmstart 8.3.1.0 System Monitoring

4. Supported upgrade paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.


5. Useful Links

Description | Link
Support Website | IBM Knowledge Center
IBM FlashSystem Fix Central | V9000
Updating the system | IBM Knowledge Center
IBM Redbooks | Redbooks
Contacts | IBM Planetwide