Release Note for SAN Volume Controller and Storwize Family Block Storage Products


This is the release note for the 7.5.0 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.5.0.0 and 7.5.0.14. This document will be updated with additional information whenever a new PTF is released.

This document was last updated on 20 August 2020.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.5.0.0 release:

The following new features have been introduced in the 7.5.0.4 release:

2. Known Issues and Restrictions

Note: For clarity, the term "node" will be used to refer to a SAN Volume Controller node or Storwize system node canister.
Details Introduced
During upgrade, node failover does not bring up the normal alert message requiring a refresh of the GUI. Customers will need to manually refresh the GUI upon upgrade to v7.5.0.14.

This is a temporary restriction that will be lifted in a future PTF.

7.5.0.14
v7.5.0.14 is only for installation on SVC 8A4/8G4 nodes and Storwize V3500/V3700 systems with 4GB of RAM. 7.5.0.14
v7.5.0.4 is only for installation on Storwize V3500 systems.

It should be noted that V3500 systems with 4GB of memory will need to upgrade to v7.5.0.4 or a later v7.5.0 PTF before attempting to increase memory to 8GB.

7.5.0.4
Systems with more than 32 WWPNs in a single host object must reconfigure the host before upgrading to v7.5.0.

Refer to this Technote for more information on this restriction

7.5.0.0
Software update to v7.5.0 may fail if the system date is set to any date prior to December 2014, with the following message:

CMMVC6347E The specific update package cannot be installed on this hardware level

n/a
Systems updating from a release prior to v7.4.0.0 to v7.4.0.0 or later will be required to run an additional update step (if the nodes contain 8GB or more RAM). The update will complete in the following order:
  1. Phase one is a standard update, taking approximately 30 minutes per node plus a 30 minute wait half way through the update (e.g. 4.5 hours for a system with 8 nodes or 4 control enclosures);
  2. Once the standard update has completed, the system will then wait for the user to validate that multi-pathing has recovered successfully. The system will log an event in the event log indicating that the update still needs to complete an extra step;
  3. The user should follow the Fix Procedure for the event to allow the second phase to start;
  4. In phase two, each node in the system is then warm started one at a time. This will take approximately 2 minutes per node plus an additional 30 minutes wait half way through. (e.g. 45 minutes for a system with 8 nodes or 4 control enclosures). The warm start on each node will initiate a host multi-path driver failover/failback;
  5. Once the second phase of the update has completed, the system will perform firmware updates to any enclosures attached to the system. On V7000 Generation 1 systems, if the battery firmware also needs to be upgraded, then the upgrade may remain in the enclosures_stalled state for up to seven hours; once this period has elapsed, the upgrade will continue;
  6. When the firmware update has updated all of the enclosure components, the update is complete.
The GUI will provide step-by-step progress of this update, or it can be monitored using the new lsupdate command.
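The timing estimates in the steps above are simple per-node arithmetic. As an illustrative sketch only (the per-node and mid-update wait times are the approximate figures quoted above, not measured values), each phase's duration can be estimated as:

```shell
# Illustrative arithmetic only: estimate phased-update duration from the
# approximate per-node timings quoted above.
phase_minutes() {
  nodes=$1; per_node=$2; mid_wait=$3
  echo $(( nodes * per_node + mid_wait ))
}

# Phase one: ~30 minutes per node plus a 30 minute mid-update wait.
p1=$(phase_minutes 8 30 30)   # 8 nodes -> 270 minutes (4.5 hours)
# Phase two: ~2 minutes per node plus a 30 minute mid-update wait.
p2=$(phase_minutes 8 2 30)    # 8 nodes -> 46 minutes (roughly 45 minutes)
echo "phase1=${p1}m phase2=${p2}m"
```

For the 8-node example this gives 270 minutes (4.5 hours) for phase one and 46 minutes for phase two, matching the figures quoted above.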
7.4.0.0
Sometime after systems running v7.3 or earlier are upgraded to v7.4 or later, the SSL certificate used by the system to present the GUI to client browsers may expire. As a consequence, it will not be possible to access the GUI until a new SSL certificate has been generated on the system.

A new SSL certificate can be generated using the following CLI command:

svctask chsystem -regensslcert
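Before regenerating, an administrator can confirm whether the certificate has actually expired. The following is a sketch using the standard openssl tool; a locally generated self-signed certificate stands in for the system certificate, which on a live system would instead be fetched with "openssl s_client -connect <cluster-ip>:443".

```shell
# Sketch: check whether an SSL certificate has expired using openssl.
# A locally generated self-signed certificate stands in for the one the
# system presents to browsers.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-cluster" -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null

# -checkend 0 exits non-zero if the certificate is already expired.
if openssl x509 -in /tmp/cert.pem -noout -checkend 0 >/dev/null; then
  echo "certificate still valid"
else
  echo "certificate expired - regenerate with: svctask chsystem -regensslcert"
fi
```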

7.4.0.0
Systems running v6.4.0 or earlier will have to perform multiple updates to install release v7.5.0. Please see the appropriate update matrix for your product to determine the correct set of updates required. n/a
Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue. 7.4.0.0
Starting with v7.4.0.0, the $IFS special bash variable ("Internal Field Separator") is read-only. In previous releases, authenticated SSH users could change the $IFS variable to help parse Command Line Interface (CLI) output which might contain spaces in object names. To avoid potential security issues, an attempt to change $IFS will now fail with the message "rbash: IFS: readonly variable". Scripts which use $IFS will need to be refactored -- for example, they could process the CLI output on a host system, instead of doing so on the CLI itself. Alternatively, they could use the read -d option to parse CLI output with a particular field separator. 7.4.0.0
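The host-side approach can be sketched as follows. This is a minimal illustration, not the product's own tooling: the sample lines and volume names are invented, and on a real host the output would come from the CLI over SSH (e.g. lsvdisk -delim :).

```shell
# Sketch: parse colon-delimited CLI output on a host system, where $IFS
# can be set freely. The sample lines below are invented stand-ins for
# output such as "ssh superuser@cluster lsvdisk -delim :".
sample_output() {
  printf '%s\n' \
    '0:my volume 1:online' \
    '1:my volume 2:offline'
}

# Setting IFS only for the read builtin avoids changing it globally.
sample_output | while IFS=: read -r id name status; do
  echo "volume ${id} (${name}) is ${status}"
done
```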
Using the software update test utility before installing an update is mandatory. Failure to run the utility successfully will result in the update failing with the following message:

CMMVC5996E The specific upgrade package cannot be installed over the current version

7.3.0.4
Updates from a release prior to v7.3.0 to v7.3.0 or later will disable the cache when the update starts; the cache will remain disabled until the update completes, at which time it will be automatically re-enabled. 7.3.0.0
Global Mirror relationships must be stopped when upgrading to v7.4.0 or later.

Refer to this flash for more information on this restriction

7.2.0.0
An automatic update may stall if the Enhanced Stretched System function is configured on a system with exactly four nodes and non-mirrored VDisks. It is therefore recommended to update such systems using the manual update procedure documented in the Knowledge Center.

This does not apply to conventional Stretched Systems or Enhanced Stretched Systems with two, six or eight nodes.

This also does not apply to Enhanced Stretched Systems that solely contain mirrored VDisks with a copy in both site 1 and site 2.

7.2.0.0
If using IP replication, please review the set of restrictions published in the Configuration Limits and Restrictions document for your product. 7.1.0.0
Windows 2008 host paths may become unavailable following a Storwize V7000 node canister or SAN Volume Controller node replacement procedure

Refer to this flash for more information on this restriction

6.4.0.0
Intra-System Global Mirror not supported

Refer to this flash for more information on this restriction

6.1.0.0
Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information

n/a
If an update stalls or fails, contact IBM Support for further assistance. n/a
The following restrictions, valid for previous PTFs, have now been lifted:
V7000 Gen2 systems should not be upgraded to v7.5.0.12. 7.5.0.12
Systems running v7.5.0.7 will not be able to upgrade to v7.6.0.4 or earlier. This restriction will be lifted by a future v7.6 PTF. 7.5.0.7
Systems with 2,600 MDisks or more in a single cluster are not supported with v7.5.0.0 or v7.5.0.1. 7.5.0.0
For systems using IP replication and running v7.4 or later: SVC/Storwize software is designed to throttle back the replication throughput if packet loss is seen on an IP link used for replication. After the throughput has been reduced, it will not be increased again. 7.4.0.0

3. Issues Resolved

This release contains all of the fixes included in the 7.4.0.4 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2018-1433 ssg1S1012263 7.5.0.14
CVE-2018-1434 ssg1S1012263 7.5.0.14
CVE-2018-1438 ssg1S1012263 7.5.0.14
CVE-2018-1461 ssg1S1012263 7.5.0.14
CVE-2018-1462 ssg1S1012263 7.5.0.14
CVE-2018-1463 ssg1S1012263 7.5.0.14
CVE-2018-1464 ssg1S1012263 7.5.0.14
CVE-2018-1465 ssg1S1012263 7.5.0.14
CVE-2018-1466 ssg1S1012263 7.5.0.14
CVE-2017-5638 ssg1S1010113 7.5.0.13
CVE-2018-11776 ibm10741137 7.5.0.13
CVE-2017-5647 ssg1S1010892 7.5.0.12
CVE-2016-2183 ssg1S1010205 7.5.0.12
CVE-2016-5546 ssg1S1010205 7.5.0.12
CVE-2016-5547 ssg1S1010205 7.5.0.12
CVE-2016-5548 ssg1S1010205 7.5.0.12
CVE-2016-5549 ssg1S1010205 7.5.0.12
CVE-2016-6563 ssg1S1009325 7.5.0.12
CVE-2016-6564 ssg1S1009325 7.5.0.12
CVE-2016-5385 ssg1S1009581 7.5.0.12
CVE-2016-5386 ssg1S1009581 7.5.0.12
CVE-2016-5387 ssg1S1009581 7.5.0.12
CVE-2016-5388 ssg1S1009581 7.5.0.12
CVE-2016-2177 ssg1S1010115 7.5.0.12
CVE-2016-2178 ssg1S1010115 7.5.0.12
CVE-2016-2183 ssg1S1010115 7.5.0.12
CVE-2016-6302 ssg1S1010115 7.5.0.12
CVE-2016-6304 ssg1S1010115 7.5.0.12
CVE-2016-6306 ssg1S1010115 7.5.0.12
CVE-2016-5696 ssg1S1010116 7.5.0.12
CVE-2016-2834 ssg1S1010117 7.5.0.12
CVE-2016-5285 ssg1S1010117 7.5.0.12
CVE-2016-8635 ssg1S1010117 7.5.0.12
CVE-2016-4461 ssg1S1010883 7.5.0.10
CVE-2016-4430 ssg1S1009282 7.5.0.10
CVE-2016-4431 ssg1S1009282 7.5.0.10
CVE-2016-4433 ssg1S1009282 7.5.0.10
CVE-2016-4436 ssg1S1009282 7.5.0.10
CVE-2016-3092 ssg1S1009284 7.5.0.10
CVE-2016-1978 ssg1S1009280 7.5.0.9
CVE-2016-2107 ssg1S1009281 7.5.0.9
CVE-2016-2108 ssg1S1009281 7.5.0.9
CVE-2016-1938 ssg1S1010118 7.5.0.9
CVE-2016-9074 ssg1S1010118 7.5.0.9
CVE-2017-6056 ssg1S1010022 7.5.0.8
CVE-2015-5174 ssg1S1005846 7.5.0.8
CVE-2015-5345 ssg1S1005846 7.5.0.8
CVE-2015-5346 ssg1S1005846 7.5.0.8
CVE-2015-5351 ssg1S1005846 7.5.0.8
CVE-2016-0706 ssg1S1005846 7.5.0.8
CVE-2016-0714 ssg1S1005846 7.5.0.8
CVE-2016-0763 ssg1S1005846 7.5.0.8
CVE-2016-0705 ssg1S1005848 7.5.0.8
CVE-2016-0797 ssg1S1005848 7.5.0.8
CVE-2016-0785 ssg1S1005849 7.5.0.8
CVE-2016-2162 ssg1S1005849 7.5.0.8
CVE-2015-7181 ssg1S1005668 7.5.0.7
CVE-2015-7182 ssg1S1005668 7.5.0.7
CVE-2015-7183 ssg1S1005668 7.5.0.7
CVE-2015-4872 ssg1S1005672 7.5.0.7
CVE-2015-3194 ssg1S1005669 7.5.0.7
CVE-2016-0475 ssg1S1005709 7.5.0.7
CVE-2015-2730 ssg1S1005576 7.5.0.5
CVE-2015-5209 ssg1S1005577 7.5.0.5
CVE-2015-3238 ssg1S1005581 7.5.0.5
CVE-2015-2613 ssg1S1005435 7.5.0.3
CVE-2015-2601 ssg1S1005435 7.5.0.3
CVE-2015-2625 ssg1S1005435 7.5.0.3
CVE-2015-1931 ssg1S1005435 7.5.0.3
CVE-2015-1789 ssg1S1005434 7.5.0.3
CVE-2015-1791 ssg1S1005434 7.5.0.3
CVE-2015-1788 ssg1S1005434 7.5.0.3
CVE-2015-1831 ssg1S1005335 7.5.0.2
CVE-2015-0488 ssg1S1005334 7.5.0.2
CVE-2015-2808 ssg1S1005334 7.5.0.2
CVE-2015-1916 ssg1S1005334 7.5.0.2
CVE-2015-0204 ssg1S1005334 7.5.0.2
CVE-2014-0227 ssg1S1005302 7.5.0.0
CVE-2014-0230 ssg1S1005302 7.5.0.0
CVE-2015-0286 ssg1S1005303 7.5.0.0
CVE-2015-4000 ssg1S1005274 7.5.0.0
CVE-2015-7575 ssg1S1005583 7.5.0.0

3.2 APARs Resolved

APAR Affected Products Severity Description Resolved in Feature Tags
HU01767 All Critical Reads of 4K/8K blocks from an array can, under exceptional circumstances, return invalid data. For more details refer to this Flash (show details)
Symptom Loss of Access to Data
Environment Systems running v7.8.0 or earlier
Trigger None
Workaround None
7.5.0.14 RAID, Thin Provisioning
HU01745 All Suggested testssl.sh identifies Logjam (CVE-2015-4000), fixed in v7.5.0.0, as a vulnerability (show details)
Symptom None
Environment Systems running v7.5.0 or earlier
Trigger Run testssl.sh v2.9.5 or later
Workaround None
7.5.0.14
HU00762 All High Importance Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node (show details)
Symptom Loss of Redundancy
Environment Systems running v7.3 or later
Trigger None
Workaround None
7.5.0.13 Cache
HU01498 All Suggested GUI may be exposed to CVE-2017-5638 (see Section 3.1) 7.5.0.13 Security
IT10470 V5000, V3700, V3500 Suggested Noisy/high speed fan (show details)
Symptom None
Environment V3000 & V5000 systems
Trigger None
Workaround Temporarily fixed using the "chenclosurecanister" CLI command, e.g. to reset canister 1 of enclosure 3 enter "chenclosurecanister -reset -canister 1 3". Note: The command should only be used on one node canister at a time, with a pause of 30 minutes between resets of node canisters in the same control enclosure. Customers running v7.5 code levels should be aware of HU00996
7.5.0.13
HU01555 V3700 High Importance The system may generate duplicate WWPNs (show details)
Symptom Configuration
Environment Storwize V3700 systems
Trigger None
Workaround None
7.5.0.12 Hosts
HU01391 & HU01581 V7000, V5000, V3700, V3500 Suggested Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware (show details)
Symptom Single Node Warmstart
Environment Storwize systems
Trigger None
Workaround None
7.5.0.12 Drives
HU00733 All HIPER Stop with access results in node warmstarts after a recovervdiskbysystem command (show details)
Symptom Multiple Node Warmstarts
Environment All
Trigger Stopping RC with -access. Remove nodes from I/O group. Run a recovervdisk command.
Workaround Avoid the use of recovervdisk when any remote copy is stopped with -access
7.5.0.11 Global Mirror
HU01245 All High Importance Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart (show details)
Symptom Multiple Node Warmstarts
Environment Systems using Global Mirror with Change Volumes
Trigger Add/remove a volume copy from a primary change volume
Workaround Avoid adding or removing primary change volumes while the relationship is in use
7.5.0.11 Global Mirror with Change Volumes
HU01353 All Suggested CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds (show details)
Symptom Configuration
Environment All systems
Trigger After cluster creation use the CLI to enter a carriage return in a command that allows free text in an argument
Workaround Do not insert a carriage return character into text being entered via CLI
7.5.0.11 Command Line Interface
HU00719 V3700, V3500 High Importance After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery (show details)
Symptom Multiple Node Warmstarts
Environment Storwize V3500 & V3700 systems
Trigger Power outage
Workaround None
7.5.0.10
HU01082 V7000, V5000, V3700, V3500 High Importance A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline (show details)
Symptom Loss of Access to Data
Environment Storwize systems
Trigger None
Workaround None
7.5.0.10 Hosts
HU01185 All High Importance iSCSI target closes connection when there is a mismatch in sequence number (show details)
Symptom Loss of Access to Data
Environment Systems with iSCSI connected hosts
Trigger Out of sequence I/O
Workaround None
7.5.0.10 iSCSI
IT16337 SVC, V7000, V5000 High Importance Hardware offloading in 16Gb FC adapters has introduced a deadlock condition that causes many driver commands to time out, leading to a node warmstart. For more details refer to this Flash (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.4 or later using 16Gb HBAs
Trigger None
Workaround None
7.5.0.10
HU01000 All Suggested SNMP and Call Home stop working when a node reboots and the Ethernet link is down (show details)
Symptom None
Environment All systems
Trigger Node service IP offline prior to reboot
Workaround Use "chclusterip" command to manually set the config
7.5.0.10 System Monitoring
HU01212 All Suggested GUI displays an incorrect timezone description for Moscow (show details)
Symptom None
Environment All systems
Trigger None
Workaround Use CLI to track
7.5.0.10 Graphical User Interface
HU01227 All Suggested High volumes of events may cause the email notifications to become stalled (show details)
Symptom None
Environment Systems running v7.6 or later
Trigger More than 15 events per second
Workaround None
7.5.0.10 System Monitoring
HU01258 SVC Suggested A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration (show details)
Symptom None
Environment SVC systems running v7.4 or later in a stretched cluster configuration
Trigger Site/node failover
Workaround None
7.5.0.10 Compression
HU01447 All HIPER The management of FlashCopy grains during a restore process can miss some IOs (show details)
Symptom Data Integrity Loss
Environment Systems running v7.5 or later using FlashCopy
Trigger None
Workaround None
7.5.0.9 FlashCopy
HU00271 All High Importance An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems using GM
Trigger None
Workaround None
7.5.0.9 Global Mirror
HU00915 SVC, V7000, V5000, V3700 High Importance Loss of access to data when removing volumes associated with a GMCV relationship (show details)
Symptom Loss of Access to Data
Environment Systems using GMCV
Trigger Deleting a volume in a GMCV relationship, if the volume and the RC controlled FlashCopy map in this relationship have the same ID
Workaround Use chrcrelationship -nomasterchange or -noauxchange <REL> before removing volumes
7.5.0.9 Global Mirror with Change Volumes
HU01062 All High Importance Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed (show details)
Symptom Loss of Access to Data
Environment Systems running v7.4 or later using Global Mirror
Trigger None
Workaround Set max_replication_delay value to 0
7.5.0.9 Global Mirror
HU01140 All High Importance EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance (show details)
Symptom Performance
Environment Systems running v7.3 or later using EasyTier
Trigger None
Workaround Add Enterprise-class drives to the MDisk or MDisk group that is experiencing unbalanced workloads
7.5.0.9 EasyTier
HU01141 All High Importance Node warmstart (possibly due to a network problem) when a CLI mkippartnership is issued. This may lead to loss of the config node requiring a Tier 2 recovery (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or later using IP Replication
Trigger Enter mkippartnership CLI command
Workaround Ensure partner cluster IP can be pinged before issuing a mkippartnership CLI command
7.5.0.9 IP Replication
HU00649 V3700, V3500 Suggested In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but most likely set by DHCP during boot (show details)
Symptom None
Environment All V3500 and V3700 systems
Trigger None
Workaround None
7.5.0.9
HU00909 All Suggested Single node warmstart may occur when removing an MDisk group that was using EasyTier (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later using EasyTier
Trigger Removing an MDisk group
Workaround Disable EasyTier on the MDisk group before removing it
7.5.0.9 EasyTier
HU00928 V7000 Suggested For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to become failed (show details)
Symptom None
Environment Storwize V7000 Gen 1 systems running v7.3 or later with large configurations
Trigger None
Workaround None
7.5.0.9 Drives
HU01024 V7000, V5000, V3700, V3500 Suggested A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later
Trigger None
Workaround None
7.5.0.9
HU01042 SVC, V7000, V5000 Suggested Single node warmstart due to 16Gb HBA firmware behaviour (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later using 16Gb HBAs
Trigger None
Workaround None
7.5.0.9
HU01064 SVC, V7000, V5000, V3700 Suggested Management GUI incorrectly displays FC mappings that are part of GMCV relationships (show details)
Symptom None
Environment Systems running v7.4 or later
Trigger Node warmstart
Workaround None
7.5.0.9 Graphical User Interface
HU01074 All Suggested An unresponsive testemail command (possible due to a congested network) may result in a single node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later
Trigger None
Workaround Avoid changes to the email notification feature
7.5.0.9 System Monitoring
HU01097 V5000, V3700, V3500 Suggested For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid (show details)
Symptom None
Environment Systems running v7.4 or later
Trigger None
Workaround None
7.5.0.9 Support Data Collection
HU01110 All Suggested Spectrum Virtualize supports SSH connections using RC4 based ciphers (show details)
Symptom None
Environment Systems running v7.5 or later
Trigger None
Workaround None
7.5.0.9
HU01144 V7000 Suggested Single node warmstart on the config node due to GUI contention (show details)
Symptom Single Node Warmstart
Environment V7000 Gen 2 systems running v7.5 or later
Trigger None
Workaround Disable GUI
7.5.0.9 Graphical User Interface
HU01240 All Suggested For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time (show details)
Symptom None
Environment Systems running v7.3 or later
Trigger No write I/O for >120 seconds
Workaround Ensure relevant volume receives at least one write I/O per 120 second interval
7.5.0.9
HU00990 All HIPER A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes (show details)
Symptom Loss of Access to Data
Environment Systems running v7.4 or later using Global Mirror
Trigger Node warmstart on the secondary cluster
Workaround Use GMCV
7.5.0.8 Global Mirror
HU01053 All Critical An issue in the drive automanage process during a replacement may result in a Tier 2 recovery (show details)
Symptom Loss of Access to Data
Environment Systems running v7.5 or later
Trigger Replacing a drive
Workaround Avoid the drive automanage process. Remove the failed drive and unmanage it, changing its use. Once it no longer appears in lsdrive, the new drive may be inserted and managed manually
7.5.0.8 Drives
HU01060 All High Importance Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or later using FlashCopy
Trigger Prior node warmstarts
Workaround None
7.5.0.8 FlashCopy
HU01067 SVC, V7000, V5000 High Importance In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or later using HyperSwap
Trigger None
Workaround If possible, change host settings so I/O is directed only to a single volume copy (i.e. a single I/O group) for each volume
7.5.0.8 HyperSwap
IT14917 All High Importance Node warmstarts due to a timing window in the cache component. For more details refer to this Flash (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.4 or later
Trigger None
Workaround None
7.5.0.8 Cache
HU00831 (reverted) All Suggested The fix for this APAR has been reverted in light of issues with it. The APAR will be re-applied in a future PTF 7.5.0.8 Cache
HU00927 All Suggested Single node warmstart may occur while fast formatting a volume (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5
Trigger Create a new volume using fast format
Workaround None
7.5.0.8
HU00935 All Suggested A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time (show details)
Symptom Single Node Warmstart
Environment Systems running v7.3 or later using FlashCopy
Trigger None
Workaround None
7.5.0.8 FlashCopy
HU01019 All Suggested Customized grids view in GUI is not being returned after page refreshes (show details)
Symptom None
Environment Systems running v7.4 or later
Trigger Press F5 to refresh "volumes" (or "hosts") page with customized columns shown
Workaround Remove the URL characters after "#" and press enter to reload this page
7.5.0.8 Graphical User Interface
HU01028 SVC Suggested Processing of lsnodebootdrive output may adversely impact management GUI performance (show details)
Symptom None
Environment SVC systems running v7.5 and earlier
Trigger None
Workaround Restart tomcat service
7.5.0.8 Graphical User Interface
HU01030 All Suggested Incremental FlashCopy always requires a full copy (show details)
Symptom None
Environment Systems running v6.3 or later using FlashCopy
Trigger None
Workaround Remove the affected FC map from the consistency group and then add it back
7.5.0.8 FlashCopy
HU01046 SVC, V7000 Suggested Free capacity is tracked using a count of free extents. If a child pool is shrunk the counter can wrap causing incorrect free capacity to be reported (show details)
Symptom None
Environment SVC and V7000 systems running v7.5 or later
Trigger None
Workaround None
7.5.0.8 Storage Virtualisation
HU01052 SVC Suggested GUI operation with large numbers of volumes may adversely impact performance (show details)
Symptom Performance
Environment SVC systems with CG8 model nodes or older running v7.4 or later
Trigger Configurations with very high numbers of volumes
Workaround Disable GUI
7.5.0.8 Performance
HU01059 SVC, V7000, V5000, V3700 Suggested When a tier in a storage pool runs out of free extents EasyTier can adversely affect performance (show details)
Symptom Performance
Environment Systems running v7.5 or later with EasyTier enabled
Trigger A tier in a storage pool runs out of free extents
Workaround Ensure tiers do not run out of capacity
7.5.0.8 EasyTier
HU01070 All Suggested Increased preparation delay when FlashCopy Manager initiates a backup. This does not impact the performance of the associated data transfer. (show details)
Symptom None
Environment Systems running v7.3 or later using FlashCopy Manager
Trigger Initiate a backup with FlashCopy Manager
Workaround None
7.5.0.8 FlashCopy
HU01072 All Suggested In certain configurations throttling too much may result in dropped IOs, which can lead to a single node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5 or later using Throttling
Trigger None
Workaround Disable throttling using chvdisk
7.5.0.8 Throttling
HU01080 All Suggested Single node warmstart due to an I/O timeout in cache (show details)
Symptom Single Node Warmstart
Environment Systems running v7.3 or later
Trigger None
Workaround None
7.5.0.8 Cache
HU01081 All Suggested When removing multiple nodes from a cluster a remaining node may warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5 or later presenting volumes to VMware hosts
Trigger Issue two rmnode commands in rapid sequence
Workaround Pause between rmnode commands
7.5.0.8
HU01087 SVC, V7000, V5000, V3700 Suggested With a partnership stopped at the remote site the stop button, in the GUI, at the local site will be disabled (show details)
Symptom None
Environment Systems running v7.3 or later using remote copy partnerships
Trigger Stop partnership on remote cluster
Workaround Use CLI to stop the partnership
7.5.0.8 Global Mirror, Metro Mirror
HU01094 All Suggested Single node warmstart due to rare resource locking contention (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5 or later presenting volumes to VMware hosts
Trigger None
Workaround None
7.5.0.8 Hosts
HU01096 SVC, V7000 Suggested Batteries may be seen to continuously recondition (show details)
Symptom None
Environment DH8 & V7000 systems
Trigger None
Workaround Replace battery
7.5.0.8
IC85931 All Suggested When the user is copying iostats files between nodes, the automatic clean up process may occasionally result in a failure message (ID 980440) in the event log (show details)
Symptom None
Environment Systems monitored by bespoke performance tools
Trigger Copying iostats files between nodes
Workaround None
7.5.0.8
HU00819 All HIPER Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues (show details)
Symptom Loss of Access to Data
Environment Systems running v7.4 or later using Global Mirror
Trigger Connectivity issues between the primary and secondary cluster
Workaround Use a lower link tolerance setting to prevent host impact on the primary
7.5.0.7 Global Mirror
HU01051 All HIPER Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster (show details)
Symptom Performance
Environment Systems running v7.4 or later using a large number of Global Mirror relationships
Trigger None
Workaround None
7.5.0.7 Global Mirror
HU00891 All High Importance The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.3 or later using Thin Provisioning
Trigger None
Workaround None
7.5.0.7 Thin Provisioning
HU00923 SVC, V7000, V5000 High Importance Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters (show details)
Symptom Single Node Warmstart
Environment Systems using 16Gb FC adapters
Trigger When error conditions are present in the fibre channel network
Workaround None
7.5.0.7 Reliability Availability Serviceability
HU01056 All High Importance Both nodes in the same I/O group warmstart when using vVols (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.6 or later that are using vVols
Trigger VMware ESX generates an unsupported (i.e. non-zero) Select Report on Report LUNs
Workaround None
7.5.0.7 vVols
HU01058 All High Importance Multiple node warmstarts may occur when volumes that are part of FlashCopy maps go offline (e.g due to insufficient space) (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 using FlashCopy
Trigger FlashCopy target volumes go offline (e.g. SE target volumes run out of available space)
Workaround Prevent volumes, in FlashCopy maps, from going offline
7.5.0.7 FlashCopy
HU00756 SVC, V7000 Suggested Performance statistics BBCZ counter values reported incorrectly (show details)
Symptom None
Environment Systems using 16Gb FC adapters
Trigger None
Workaround None
7.5.0.7 System Monitoring
HU00831 (reverted in 7.5.0.8) All Suggested Single node warmstart due to hung I/O caused by cache deadlock (show details)
Symptom Single Node Warmstart
Environment Systems running v7.2 or later
Trigger Hosts sending many large block IOs that use many credits per I/O
Workaround None
7.5.0.7 Cache
HU00975 All Suggested Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5 or later using FlashCopy
Trigger None
Workaround None
7.5.0.7 FlashCopy
HU01029 SVC Suggested Where a boot drive on a DH8 node has been replaced with a new unformatted drive, the node may warmstart when a user logs in as superuser to the CLI via its service IP, or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when a user logs in as superuser to the cluster via the CLI or management GUI (show details)
Symptom Single Node Warmstart
Environment DH8 clusters running v7.3 or later
Trigger Replace the boot drive with a new, uninitialised drive, then attempt to log in to the service assistant as "superuser"
Workaround Send the necessary service commands to the node from another node in the cluster. Put the node into service state and resync the drive
7.5.0.7
HU01073 SVC Suggested SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page (show details)
Symptom None
Environment CG8 model systems running v7.5 or later
Trigger I/O group name contains '-'
Workaround Use "svcinfo lsdrive" to get internal SSD status.
7.5.0.7 System Monitoring
HU01033 All High Importance After upgrade to v7.5.0.5 both nodes warmstart (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5.0.5
Trigger Upgrading to v7.5.0.5
Workaround None
7.5.0.6 System Update
HU00922 All Critical Loss of access to data when moving volumes to another I/O group using the GUI (show details)
Symptom Loss of Access to Data
Environment Systems with multiple I/O groups
Trigger Moving volumes between I/O groups using the GUI
Workaround Use the CLI to move volumes between I/O groups
7.5.0.5 Graphical User Interface
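As a hedged sketch of the CLI workaround above (the volume and I/O group names are illustrative, not taken from this document):

```
# Move a volume to another I/O group from the CLI instead of the GUI.
# "myvolume" and "io_grp1" are example names; substitute your own.
svctask movevdisk -iogrp io_grp1 myvolume
```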
HU00980 SVC, V7000 Critical Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash (show details)
Symptom Loss of Access to Data
Environment Systems running v7.3 or later that are using compressed volumes
Trigger None
Workaround None
7.5.0.5 Compression
HU00740 All High Importance Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node (show details)
Symptom Performance
Environment Systems running v7.3 or later with EasyTier enabled that have many extents (>1 million)
Trigger High volume workloads
Workaround Disable EasyTier
7.5.0.5 EasyTier
HU00913 SVC, V7000, V5000, V3700 High Importance Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB (show details)
Symptom Multiple Node Warmstarts
Environment Systems using Metro Mirror, Global Mirror or Global Mirror with Change Volumes
Trigger I/O to a Metro Mirror or Global Mirror volume that is larger than 128TB
Workaround Keep volume size to less than 128TB when using replication services
7.5.0.5 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU00991 All High Importance Performance impact on read pre-fetch workloads (show details)
Symptom Performance
Environment Systems running v7.3 or later
Trigger Sequential read I/O workloads smaller than 32K
Workaround Tune application sequential read I/O to 32K or greater
7.5.0.5 Performance
II14778 All High Importance Reduced performance for volumes which have the configuration node as their preferred node, due to the GUI processing updates of volume attributes when a large number of changes are required (show details)
Symptom Performance
Environment Systems that have a large number of volumes (1000+)
Trigger Using the GUI when a large number of volumes have regularly changing attributes
Workaround Disable GUI
7.5.0.5 Performance
HU00890 V7000 Suggested Technician port inittool redirects to SAT GUI (show details)
Symptom Configuration
Environment OEM V7000 Gen 2 systems
Trigger Initialise cluster via the technician port
Workaround Use SAT GUI or USB stick to initialise the cluster
7.5.0.5 Graphical User Interface
HU00936 All Suggested During the volume repair process the compression engine restores a larger amount of data than required, leading to the volume going offline (show details)
Symptom Offline Volumes
Environment Systems running v7.3 or later that are using compressed volumes
Trigger None
Workaround None
7.5.0.5 Compression
HU00967 All Critical Multiple warmstarts due to FlashCopy background copy limitation putting both nodes in service state (show details)
Symptom Loss of Access to Data
Environment Systems running v7.5 or later using FlashCopy
Trigger FlashCopy
Workaround None
7.5.0.4 FlashCopy
HU00840 All High Importance Node warmstarts when Spectrum Virtualize iSCSI target receives garbled packets (show details)
Symptom Multiple Node Warmstarts
Environment Systems presenting volumes to VMware hosts
Trigger Corrupt iSCSI packets from host
Workaround Disable TSO (TCP segmentation offload) on the host
7.5.0.4 iSCSI
HU00499 SVC, V7000, V5000, V3700 HIPER Loss of access to data when a volume that is part of a Global Mirror Change Volumes relationship is removed with the force flag. (show details)
Symptom Loss of Access to Data
Environment Systems using Global Mirror with Change Volumes
Trigger Deleting a volume which is a secondary in a GMCV relationship and part of a consistency group.
Workaround Avoid using the 'rmvdisk -force' command when remote copy relationships are attached; instead use 'svctask chrcrelationship -noauxchange' followed by 'svctask rmvdisk'
7.5.0.3 Global Mirror with Change Volumes
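The workaround above can be sketched as the following CLI sequence (the relationship and volume names are illustrative, and the flag is assumed to be '-noauxchange' for detaching the auxiliary change volume):

```
# Detach the auxiliary change volume from the relationship first,
# then remove the volume without the -force flag.
svctask chrcrelationship -noauxchange gmcv_rel0
svctask rmvdisk aux_change_vol0
```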
HU00898 SVC, V7000 HIPER Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash (show details)
Symptom None
Environment Systems running v7.3 or later that are using compressed volumes
Trigger None
Workaround None
7.5.0.3 Compression
HU00809 SVC Critical Both nodes shutdown when power is lost to one node for more than 15 seconds (show details)
Symptom Loss of Access to Data
Environment SVC systems with CG8 model nodes or older that are running in Split Cluster configuration
Trigger AC power loss to one Node
Workaround None
7.5.0.3 Reliability, Availability and Serviceability
HU00841 V7000, V5000, V3700, V3500 Critical Multiple node warmstarts leading to loss of access to data when changing a volume throttle rate to a value of more than 10000 IOPs or 40MBps (show details)
Symptom Loss of Access to Data
Environment Systems running v7.5.0 or later
Trigger Changing the volume throttling attribute to a value of more than 10000 IOPs or 40MBps
Workaround Do not change volume throttle rate
7.5.0.3 Hosts
HU00904 SVC, V7000, V5000, V3700 Critical Multiple node warmstarts leading to loss of access to data when the link used for IP Replication experiences packet loss and the data transfer rate occasionally drops to zero (show details)
Symptom Loss of Access to Data
Environment Systems running v7.4.0.5 or v7.5.0.2 that are using IP Replication
Trigger When the IP link used for replication experiences packet loss and the data transfer rate occasionally drops to zero
Workaround None
7.5.0.3 IP Replication
HU00821 All High Importance Single node warmstart due to HBA firmware behaviour (show details)
Symptom Single Node Warmstart
Environment All
Trigger None
Workaround None
7.5.0.3
HU00828 All High Importance FlashCopies take a long time or do not complete when the background copy rate set is non-zero (show details)
Symptom Performance
Environment Systems running v7.5 or later using FlashCopy
Trigger FlashCopy operation when the background copy rate set is non-zero
Workaround Using the CLI, set all FlashCopy clean rates to 0 and then back to a non-zero value
7.5.0.3 FlashCopy
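A hedged sketch of the clean-rate toggle described in the workaround above (the map name and rate value are illustrative):

```
# Set the FlashCopy map's clean rate to 0, then back to a non-zero
# value; repeat for each FlashCopy map on the system.
svctask chfcmap -cleanrate 0 fcmap0
svctask chfcmap -cleanrate 50 fcmap0
```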
HU00832 V3700 High Importance Automatic licensed feature activation fails for 6099 machine type (show details)
Symptom Configuration
Environment V3700 systems with 6099 machine type
Trigger Automatic feature activation using the GUI
Workaround Use manual license activation (via GUI or CLI)
7.5.0.3 Graphical User Interface
HU00833 All High Importance Single node warmstart when the mkhost cli command is run without the -iogrp flag (show details)
Symptom Single Node Warmstart
Environment Systems running v7.5.0 or later that have previously run v6.3.0 or earlier
Trigger Running the mkhost CLI command without the '-iogrp' flag. Note: host objects created using the GUI will not cause this issue.
Workaround Run the mkhost CLI command with the '-iogrp' flag
7.5.0.3
HU00843 V7000 High Importance Single node warmstart when there is a high volume of ethernet traffic on link used for IP replication/iSCSI (show details)
Symptom Single Node Warmstart
Environment V7000 Generation 1 systems running v7.5.0 that are using IP Replication or iSCSI
Trigger Heavily used IP links
Workaround Reduce the amount of traffic on the link used for IP replication/iSCSI
7.5.0.3 IP Replication, iSCSI
HU00844 SVC, V5000, V3700 High Importance Multiple node warmstarts following installation of an additional SAS HIC (show details)
Symptom Multiple Node Warmstarts
Environment SVC systems with DH8 model nodes, and V5000 and V3700 systems, running v7.5.0 or later
Trigger When installing an additional SAS HIC
Workaround A workaround is possible, contact support for assistance
7.5.0.3 Reliability, Availability and Serviceability
HU00845 V3700 High Importance Trial licenses for licensed feature activation are not available (show details)
Symptom Configuration
Environment V3700 systems that were created on v7.4 or v7.5
Trigger None
Workaround None
7.5.0.3
HU00902 SVC, V7000, V5000, V3700 High Importance Starting a Global Mirror Relationship or Consistency Group fails after changing a relationship to not use Change Volumes (show details)
Symptom Configuration
Environment Systems using Global Mirror
Trigger This issue is triggered by changing relationships from Global Mirror with Change Volumes to simple Global Mirror (without Change Volumes) while the secondary volume is inconsistent (i.e. background copy is still in progress). This is achieved by setting the cycling mode of the relationship to "none".
Workaround Avoid setting the cycling mode to "none" if the secondary volume is inconsistent
7.5.0.3 Global Mirror
HU00905 All Suggested The serial number value displayed in the GUI node properties dialog is incorrect (show details)
Symptom None
Environment All
Trigger None
Workaround The correct node serial number can be displayed using the CLI command 'svcinfo lsnodevpd'
7.5.0.3 Graphical User Interface
HU00725 SVC, V7000, V5000, V3700 HIPER Loss of access to data when adding a Global Mirror Change Volume relationship to a consistency group on the primary site, when the secondary site does not have a secondary volume defined (show details)
Symptom Loss of Access to Data
Environment Systems using Global Mirror with Change Volumes
Trigger Adding a GMCV relationship to a consistency group on the primary site
Workaround Create a volume on the secondary cluster and associate the volume with the relationship before adding the relationship on the primary cluster
7.5.0.2 Global Mirror with Change Volumes
HU00816 SVC, V7000 HIPER Loss of access to data following upgrade to v7.5.0.0 or 7.5.0.1 when, i) the cluster has previously run release 6.1.0 or earlier at some point in its life span or ii) the cluster has 2,600 or more MDisks (show details)
Symptom Loss of Access to Data
Environment All
Trigger Upgrade to v7.5.0.0 or 7.5.0.1
Workaround IBM Support can provide tooling that will prevent this issue
7.5.0.2
HU00820 V7000, V5000, V3700, V3500 Critical Data integrity issue when using encrypted arrays. For more details refer to this Flash (show details)
Symptom None
Environment Systems running v7.4 or later which are using Encryption
Trigger Internal SAS network recovery actions
Workaround None
7.5.0.2 Backend Storage, Encryption
HU00745 SVC, V7000, V5000, V3700 High Importance IP Replication does not return to using full throughput following packet loss on IP link used for replication (show details)
Symptom Performance
Environment Systems using IP Replication
Trigger Packet loss on IP link
Workaround After the IP link has stabilised: stop the IP partnership; start the IP partnership; stop all remote copy relationships; then start all relationships
7.5.0.2 IP Replication
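The recovery sequence in the workaround above can be sketched with the following CLI commands (the partnership and relationship names are illustrative; repeat the relationship stop/start for each remote copy relationship):

```
# Once the IP link is stable, restart the partnership and then cycle
# each remote copy relationship.
svctask chpartnership -stop remote_cluster
svctask chpartnership -start remote_cluster
svctask stoprcrelationship rel0
svctask startrcrelationship rel0
```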
HU00825 V7000, V5000, V3700, V3500 High Importance Java exception error when using the Service Assistant GUI to complete an enclosure replacement procedure (show details)
Symptom Configuration
Environment All
Trigger When replacing an enclosure
Workaround Use the Service Assistant cli to complete enclosure replacement procedure
7.5.0.2 Graphical User Interface
HU00815 SVC, V7000, V5000, V3700 HIPER FlashCopy source and target volumes offline when FlashCopy maps are started. (show details)
Symptom Offline Volumes
Environment Systems using FlashCopy
Trigger A FlashCopy map is started, or a node is reset, following upgrade to v7.5.0.0. Please note, if the system is upgraded from v7.3 or earlier, the upgrade contains an additional "completion" step at the end that will warmstart each node; this can also trigger the issue.
Workaround Contact IBM Support for further details.
7.5.0.1 FlashCopy
HU00638 All Critical Multiple node warmstarts when there is high backend latency (show details)
Symptom Multiple Node Warmstarts
Environment All
Trigger None
Workaround None
7.5.0.0 Hosts
HU00671 V7000, V5000, V3700, V3500 Critical 1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash (show details)
Symptom None
Environment Systems running v7.3 or later using FlashCopy
Trigger Start and stop of FlashCopy mappings when there are multiple FlashCopies of the same source volume
Workaround None
7.5.0.0 Backend Storage, FlashCopy
HU00764 All Critical Loss of access to data due to persistent reserve host registration keys exceeding the current supported value of 256 (show details)
Symptom Loss of Access to Data
Environment Systems with clustered hosts using SCSI-3 persistent reservation
Trigger Number of host reservation keys exceeds 256
Workaround Reduce the number of volume to host mappings where possible
7.5.0.0 Hosts
HU00804 V7000, V5000, V3700, V3500 Critical Loss of access to data due to SAS recovery mechanism operating on both nodes in I/O group simultaneously (show details)
Symptom Loss of Access to Data
Environment All
Trigger None
Workaround None
7.5.0.0
HU00811 SVC, V7000, V5000 Critical Loss of access to data when SAN connectivity problems lead to a backend controller being detected as an incorrect type (show details)
Symptom Loss of Access to Data
Environment Systems virtualising external storage controllers
Trigger SAN connectivity problems
Workaround None
7.5.0.0 Backend Storage
HU00281 All High Importance Single node warmstart due to internal code exception (show details)
Symptom Single Node Warmstart
Environment All
Trigger None
Workaround None
7.5.0.0
HU00519 SVC, V7000, V5000, V3700 High Importance Node warmstart due to FlashCopy deadlock condition (show details)
Symptom Single Node Warmstart
Environment Systems using FlashCopy
Trigger Starting and stopping a FlashCopy mapping
Workaround None
7.5.0.0 FlashCopy
HU00644 All High Importance Multiple node warmstarts when node port receives duplicate frames during a specific I/O timing window (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.3 or later
Trigger Node receives duplicate Fibre Channel frames
Workaround Stop the host/SAN from sending duplicate frames
7.5.0.0
HU00659 SVC, V7000, V5000, V3700 High Importance Global Mirror with Change Volumes freeze time reported incorrectly (show details)
Symptom None
Environment Systems running v7.4 or later that are using Global Mirror with Change Volumes
Trigger None
Workaround None
7.5.0.0 Global Mirror with Change Volumes
HU00673 V7000, V5000, V3700, V3500 High Importance Drive slot is not recognised following drive auto manage procedure (show details)
Symptom Configuration
Environment Systems running v7.4 or later
Trigger Disk drive recovery action using 'drive auto manage' procedure
Workaround None
7.5.0.0 System Monitoring
HU00675 All High Importance Node warmstart following node start up/restart due to invalid CAW domain state (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later with VMware hosts using VAAI CAW feature
Trigger Following restart of a node
Workaround None
7.5.0.0
HU00726 SVC, V7000, V5000 High Importance Single node warmstart due to stuck I/O following offline MDisk group condition (show details)
Symptom Single Node Warmstart
Environment Systems running v7.3 or later that are virtualising external storage controllers
Trigger Offline MDisk group (when MDisks are from an external storage controller)
Workaround Prevent external conditions that can take an MDisk group offline
7.5.0.0 Storage Virtualisation
HU00735 All High Importance Host I/O statistics incorrectly including logically failed writes (show details)
Symptom None
Environment All
Trigger None
Workaround None
7.5.0.0 System Monitoring
HU00806 SVC, V7000 High Importance mkarray command fails when creating an encrypted array due to pending bitmap state (show details)
Symptom Configuration
Environment Systems running v7.4 or later using encryption
Trigger Creating an encrypted array
Workaround Resetting each node in the cluster will reset the bitmap state and allow the mkarray command to complete successfully
7.5.0.0 Encryption
HU00807 SVC, V7000, V5000, V3700 High Importance Increase in node cpu usage due to FlashCopy mappings with high cleaning rate (show details)
Symptom Performance
Environment Systems using FlashCopy
Trigger Large number of concurrent FlashCopy mappings with high cleaning rate
Workaround Lower the clean rate on FlashCopy mappings
7.5.0.0 FlashCopy
HU00468 V7000, V5000, V3700, V3500 Suggested Drive firmware task not removed from GUI running tasks display following completion of drive firmware update action (show details)
Symptom None
Environment Systems running v7.3 or later
Trigger Drive firmware update
Workaround Issue a new applydrivesoftware command to a drive that does not exist
7.5.0.0 Graphical User Interface
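A hedged sketch of the workaround above (the firmware file name is a placeholder, and 999 is assumed to be a drive ID that does not exist on the system):

```
# Issuing applydrivesoftware against a non-existent drive ID clears the
# stale running task left behind in the GUI.
svctask applydrivesoftware -file <firmware_package> -type firmware -drive 999
```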
HU00525 All Suggested Unable to manually mark monitoring events in the event log as fixed (show details)
Symptom None
Environment All
Trigger None
Workaround None required; monitoring events will be fixed automatically after 25 hours have passed
7.5.0.0 System Monitoring
HU00737 All Suggested GUI does not warn of lack of space condition when collecting a Snap, this results in some files missing from the Snap (show details)
Symptom None
Environment All
Trigger Collecting a Snap when insufficient space exists in /dumps directory
Workaround Using the CLI to collect a Snap will warn of the low space condition
7.5.0.0 Graphical User Interface
HU00805 V7000, V5000, V3700, V3500 Suggested Some SAS ports are displayed in hexadecimal values instead of decimal values in the performance statistics xml files (show details)
Symptom None
Environment All
Trigger None
Workaround None
7.5.0.0 System Monitoring
HU00808 All Suggested NTP trace logs not collected on configuration node (show details)
Symptom None
Environment All
Trigger None
Workaround None
7.5.0.0 Support Data Collection
IC92356 V7000 Suggested Improve the DMP for handling the 2500 event on V7000 systems using Unified storage (show details)
Symptom None
Environment V7000 systems using Unified storage
Trigger 2500 Event
Workaround None
7.5.0.0 Graphical User Interface

4. Useful Links

Description Link
Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Supported Drive Types and Firmware Levels
SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning