HU01479 |
All |
HIPER
|
The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed, resulting in offline MDisks
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using DRAID |
Trigger |
Reseat a failed drive |
Workaround |
Rather than reseating the drive, use the CLI or GUI to fail and then unfail it (see the command sketch after this entry) |
|
7.6.1.8 |
Distributed RAID |
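A minimal sketch of the HU01479 workaround using the CLI, assuming the failed drive has ID 5 (the chdrive -use values should be checked against your code level):
  svctask chdrive -use failed 5     # fail the drive in software instead of reseating it
  svctask chdrive -use candidate 5  # unfail it once the array has recovered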
HU01505 |
All |
HIPER
|
A non-redundant drive experiencing many errors can be taken offline, obstructing rebuild activity
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Drive with many errors |
Workaround |
Bring drive online |
|
7.6.1.8 |
Backend Storage, RAID |
HU01490 |
All |
Critical
|
When attempting to add/remove multiple IQNs to/from a host, the tables that record host-WWPN mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
- An addhostport command with iqn2 and iqn1 (where iqn1 is already recorded) is entered;
- This command attempts to add iqn2 but determines that iqn1 is a duplicate, and the CLI command fails;
- Later, whenever a login request from iqn2 is received, internal checking detects an inconsistency and warmstarts the node
|
Workaround |
Do not use multiple IQNs in iSCSI add/remove commands (see the sketch after this entry) |
|
7.6.1.8 |
iSCSI |
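For HU01490, adding ports one IQN at a time avoids the inconsistent mapping tables; a hedged sketch in which the host name and IQNs are placeholders:
  svctask addhostport -iscsiname iqn.1994-05.com.redhat:server1 host0
  svctask addhostport -iscsiname iqn.1994-05.com.redhat:server2 host0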
HU01549 |
All |
Critical
|
During a system upgrade, Hyper-V clustered hosts may experience a loss of access to any iSCSI-connected volumes
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or earlier with iSCSI-connected Hyper-V clustered hosts |
Trigger |
Upgrade to v7.6 or later |
Workaround |
None |
|
7.6.1.8 |
System Update, iSCSI |
HU01572 |
All |
Critical
|
SCSI-3 commands from unconfigured WWPNs may result in multiple warmstarts, leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
iSCSI |
HU01480 |
All |
High Importance
|
Under some circumstances the config node does not fail over properly when using IPv6, adversely affecting management access via the GUI and CLI
Symptom |
Configuration |
Environment |
Systems using IPv6 cluster addresses |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
Command Line Interface, Graphical User Interface |
HU01488 |
V7000, V5000 |
High Importance
|
SAS transport errors on an enclosure slot have the potential to affect an adjacent slot, leading to double drive failures
Symptom |
Loss of Redundancy |
Environment |
Storwize V5000 and V7000 systems running v7.7 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
Drives |
HU01503 |
All |
High Importance
|
When the 3PAR host type is set to legacy, the round-robin algorithm used to select the MDisk port for I/O submission to 3PAR controllers does not work correctly, and I/O may be submitted to fewer controller ports, adversely affecting performance
Symptom |
Performance |
Environment |
Systems running v7.5.0.10, v7.6.1.6 or v7.7.1.5 that are virtualising 3PAR storage subsystems |
Trigger |
Host persona set to 6 (legacy) on 3PAR controller |
Workaround |
Change the 3PAR host persona to 2 (ALUA) instead of 6 (legacy) (see the sketch after this entry) |
|
7.6.1.8 |
Backend Storage |
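The HU01503 persona change is made on the 3PAR array rather than on the Spectrum Virtualize cluster; a hedged sketch using the 3PAR InForm CLI, where the host name is a placeholder and the exact sethost syntax should be verified against your 3PAR release:
  sethost -persona 2 svc_cluster   # persona 2 = ALUA, replacing legacy persona 6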
HU01506 |
All |
High Importance
|
Creating a volume copy with the -autodelete option can cause a timer scheduling issue, leading to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
All systems |
Trigger |
None |
Workaround |
Do not use the -autodelete option when creating a volume copy (see the sketch after this entry) |
|
7.6.1.8 |
Volume Mirroring |
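A minimal sketch of the HU01506 workaround: create the copy without -autodelete and remove the original copy manually once synchronisation completes. Volume vdisk0, pool ID 1 and copy ID 0 are assumptions:
  svctask addvdiskcopy -mdiskgrp 1 vdisk0
  svcinfo lsvdisksyncprogress vdisk0   # wait for 100%
  svctask rmvdiskcopy -copy 0 vdisk0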
HU01569 |
SVC |
High Importance
|
When compression utilisation is high, the config node may exhibit longer I/O response times than non-config nodes
Symptom |
Performance |
Environment |
SVC systems using compression |
Trigger |
High compression workloads |
Workaround |
None |
|
7.6.1.8 |
Compression |
HU01609 & IT15343 |
All |
High Importance
|
When the system is busy, the compression component may be paged out of memory, resulting in latency that can lead to warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using compression |
Trigger |
None |
Workaround |
Reduce compression workload |
|
7.6.1.8 |
Compression |
IT19726 |
SVC |
High Importance
|
Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command
Symptom |
Multiple Node Warmstarts |
Environment |
SVC systems |
Trigger |
SAN congestion |
Workaround |
None |
|
7.6.1.8 |
Hosts |
HU01098 |
All |
Suggested
|
Some older backend controller code levels do not support C2 commands, resulting in 1370 entries in the Event Log for every detectmdisk
Symptom |
None |
Environment |
Systems running v7.6 or later attached to backend controllers running older code levels |
Trigger |
Issue CLI command detectmdisk |
Workaround |
None |
|
7.6.1.8 |
Backend Storage |
HU01228 |
All |
Suggested
|
Automatic T3 recovery may fail because the handling of quorum registration generates duplicate entries
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
Reliability Availability Serviceability |
HU01332 |
All |
Suggested
|
Performance monitor and Spectrum Control show zero CPU utilisation for compression
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
System Monitoring |
HU01391 & HU01581 |
V7000, V5000, V3700, V3500 |
Suggested
|
Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
Symptom |
Single Node Warmstart |
Environment |
Storwize systems |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
Drives |
HU01430 |
V7000, V5000, V3700, V3500 |
Suggested
|
Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Storwize Gen 1 systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.8 |
|
HU01193 |
All |
HIPER
|
A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
Drive failure |
Workaround |
None |
|
7.6.1.7 |
Distributed RAID |
HU01447 |
All |
HIPER
|
The management of FlashCopy grains during a restore process can miss some I/Os
Symptom |
Data Integrity Loss |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
FlashCopy |
HU01225 & HU01330 & HU01412 |
All |
Critical
|
Node warmstarts due to inconsistencies arising from the way cache interacts with compression
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.3 or later with compressed volumes |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Cache, Compression |
HU01410 |
SVC |
Critical
|
An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state
Symptom |
Loss of Access to Data |
Environment |
SVC systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
FlashCopy |
HU01499 |
All |
Critical
|
When an offline volume copy comes back online, under rare conditions, the flushing process can cause the cache to enter an invalid state, delaying I/O, and resulting in node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.3 or later using Volume Mirroring |
Trigger |
Offline volume copy comes online |
Workaround |
None |
|
7.6.1.7 |
Volume Mirroring, Cache |
HU01783 |
All |
Critical
|
Replacing a failed drive in a DRAID array with a smaller drive may result in multiple Tier 2 recoveries, putting all nodes in service state with error 564 and/or 550
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
Replacing a drive with one of less capacity |
Workaround |
Ensure replacement drives are the same capacity (see the sketch after this entry) |
|
7.6.1.7 |
Distributed RAID |
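Before replacing a DRAID member per HU01783, confirm the capacities match; a minimal sketch, assuming the outgoing drive is ID 7:
  svcinfo lsdrive 7   # note the capacity field and check the replacement drive reports the same value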
HU00762 |
All |
High Importance
|
Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Cache |
HU01254 |
SVC |
High Importance
|
A fluctuation of input AC power can cause a 584 error on a node
Symptom |
Loss of Redundancy |
Environment |
SVC CF8/CG8 systems |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Reliability Availability Serviceability |
HU01262 |
All |
High Importance
|
Cached data for a HyperSwap volume may only be destaged from a single node in an I/O group
Symptom |
Performance |
Environment |
Systems running v7.6 using HyperSwap |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
HyperSwap |
HU01402 |
V7000 |
High Importance
|
Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
Symptom |
Loss of Redundancy |
Environment |
V7000 Gen 1 systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Reliability Availability Serviceability |
HU01409 |
All |
High Importance
|
Cisco Nexus 3000 switches at v5.0(3) have a defect that prevents the config node IP address from changing in the event of a failover
Symptom |
Loss of Redundancy |
Environment |
Systems connected to Cisco Nexus 3000 switches |
Trigger |
Config node failover |
Workaround |
None |
|
7.6.1.7 |
Reliability Availability Serviceability |
IT14917 |
All |
High Importance
|
Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Cache |
HU00831 |
All |
Suggested
|
Single node warmstart due to hung I/O caused by cache deadlock
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.2 or later |
Trigger |
Hosts sending many large-block I/Os that use many credits per I/O |
Workaround |
None |
|
7.6.1.7 |
Cache |
HU01022 |
SVC, V7000 |
Suggested
|
A Fibre Channel adapter encountering a bit parity error can result in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Hosts |
HU01247 |
All |
Suggested
|
When a FlashCopy consistency group is stopped more than once in rapid succession, a node warmstart may result
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
Stop the same FlashCopy consistency group twice in rapid succession using -force on second attempt |
Workaround |
Avoid stopping the same FlashCopy consistency group more than once (see the sketch after this entry) |
|
7.6.1.7 |
FlashCopy |
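A hedged sketch of avoiding the HU01247 trigger by stopping a FlashCopy consistency group once and waiting for the state change instead of reissuing the command with -force, assuming group fccstgrp0:
  svctask stopfcconsistgrp fccstgrp0
  svcinfo lsfcconsistgrp fccstgrp0   # confirm the group reaches the stopped state before acting again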
HU01399 |
All |
Suggested
|
For certain config nodes, the CLI help commands may not work
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use Knowledge Center |
|
7.6.1.7 |
Command Line Interface |
HU01432 |
All |
Suggested
|
Node warmstart due to an accounting issue within the cache component
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.7 |
Cache |
HU00719 |
V3700, V3500 |
High Importance
|
After a power failure, both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
Storwize V3500 & V3700 systems |
Trigger |
Power outage |
Workaround |
None |
|
7.6.1.6 |
|
HU01109 |
SVC, V7000, V5000 |
High Importance
|
Multiple nodes can experience a lease expiry when an FC port is having communications issues
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.6 |
Reliability Availability Serviceability |
HU01221 |
SVC, V7000, V5000 |
High Importance
|
Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.6 |
|
HU01226 |
All |
High Importance
|
Changing max replication delay from the default to a small non-zero number can cause hung I/Os, leading to multiple node warmstarts and a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using Global Mirror |
Trigger |
Changing max replication delay to a small non-zero number |
Workaround |
Do not change max replication delay to below 30 seconds (see the sketch after this entry) |
|
7.6.1.6 |
Global Mirror |
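A minimal sketch of the HU01226 workaround, keeping max replication delay at zero (disabled) or at 30 seconds or more:
  svctask chsystem -maxreplicationdelay 30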
HU01245 |
All |
High Importance
|
Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Global Mirror with Change Volumes |
Trigger |
Add/remove a volume copy from a primary change volume |
Workaround |
Avoid adding or removing primary change volumes while the relationship is in use |
|
7.6.1.6 |
Global Mirror with Change Volumes |
IT16012 |
SVC |
High Importance
|
The internal node boot drive RAID scrub process, which runs at 1am every Sunday, can impact system performance
Symptom |
Performance |
Environment |
Systems running v7.3 or later |
Trigger |
Internal node boot drive RAID scrub process |
Workaround |
Try to avoid performing high I/O workloads (including copy services) at 1am on Sundays |
|
7.6.1.6 |
Performance |
HU01050 |
All |
Suggested
|
DRAID rebuild incorrectly reports event code 988300
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.6 |
Distributed RAID |
HU01063 |
SVC, V7000, V5000 |
Suggested
|
3PAR controllers do not support OTUR commands, resulting in device port exclusions
Symptom |
None |
Environment |
Systems virtualising 3PAR storage |
Trigger |
None |
Workaround |
None |
|
7.6.1.6 |
Backend Storage |
HU01187 |
All |
Suggested
|
Circumstances can arise where more than one array rebuild operation shares the same CPU core, resulting in extended completion times
Symptom |
Performance |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
Avoid R5 array configurations |
|
7.6.1.6 |
|
HU01219 |
SVC, V7000, V5000 |
Suggested
|
Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.6 |
|
HU01234 |
All |
Suggested
|
After upgrading to v7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI
Symptom |
None |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Upgrade to v7.6 or later from v7.5 or earlier |
Workaround |
None |
|
7.6.1.6 |
iSCSI |
HU01251 |
V7000, V5000, V3700, V3500 |
Suggested
|
When following the DMP for a 1685 event, if the option indicating that a drive reseat has already been attempted is selected, the process to replace the drive is not started
Symptom |
None |
Environment |
Storwize systems running v7.3 or later |
Trigger |
DMP for a 1685 event is run. Select "drive reseat has already been attempted" option |
Workaround |
Manually replace drive |
|
7.6.1.6 |
GUI Fix Procedure |
HU01258 |
SVC |
Suggested
|
A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration
Symptom |
None |
Environment |
SVC systems running v7.4 or later in a stretched cluster configuration |
Trigger |
Site/node failover |
Workaround |
None |
|
7.6.1.6 |
Compression |
HU01353 |
All |
Suggested
|
The CLI allows the input of carriage return characters into certain fields after cluster creation, resulting in invalid cluster VPD and failed node adds
Symptom |
Configuration |
Environment |
All systems |
Trigger |
After cluster creation use the CLI to enter a carriage return in a command that allows free text in an argument |
Workaround |
Do not insert a carriage return character into text being entered via CLI |
|
7.6.1.6 |
Command Line Interface |
HU00271 |
All |
High Importance
|
An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using GM |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
Global Mirror |
HU01078 |
SVC |
High Importance
|
When the rmnode command is run, it removes persistent reservation data to prevent a stuck reservation. The MS Windows and Hyper-V cluster design constantly monitors the reservation table and takes the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v7.3 or later with Microsoft Windows or Hyper-V clustered hosts |
Trigger |
rmnode or chnodehw commands |
Workaround |
None |
|
7.6.1.5 |
|
HU01082 |
V7000, V5000, V3700, V3500 |
High Importance
|
A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline
Symptom |
Loss of Access to Data |
Environment |
Storwize systems |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
Hosts |
HU01140 |
All |
High Importance
|
EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
Symptom |
Performance |
Environment |
Systems running v7.3 or later using EasyTier |
Trigger |
None |
Workaround |
Add Enterprise-class drives to the MDisk or MDisk group that is experiencing unbalanced workloads |
|
7.6.1.5 |
EasyTier |
HU01141 |
All |
High Importance
|
Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using IP Replication |
Trigger |
Enter mkippartnership CLI command |
Workaround |
Ensure the partner cluster IP can be pinged before issuing a mkippartnership CLI command (see the sketch after this entry) |
|
7.6.1.5 |
IP Replication |
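A hedged sketch of the HU01141 workaround, assuming partner cluster IP 192.0.2.10 and a 100 Mbit link; if your code level lacks the CLI ping command, verify reachability from the management network instead:
  svctask ping 192.0.2.10
  svctask mkippartnership -type ipv4 -clusterip 192.0.2.10 -linkbandwidthmbits 100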
HU01182 |
SVC, V7000, V5000 |
High Importance
|
Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
16G FC HBA receives a SCSI TUR command with Total XFER LEN > 0 |
Workaround |
None |
|
7.6.1.5 |
|
HU01183 |
SVC, V7000, V5000 |
High Importance
|
Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
|
HU01184 |
All |
High Importance
|
When removing multiple MDisks, node warmstarts may occur
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later |
Trigger |
Multiple rmmdisk commands issued in rapid succession |
Workaround |
Remove MDisks one at a time and let migration complete before proceeding to the next MDisk removal (see the sketch after this entry) |
|
7.6.1.5 |
|
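A minimal sketch of the HU01184 workaround, assuming MDisk mdisk5 in pool pool0:
  svctask rmmdisk -mdisk mdisk5 pool0
  svcinfo lsmigrate   # wait until no migrations are listed before removing the next MDisk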
HU01185 |
All |
High Importance
|
The iSCSI target closes the connection when there is a mismatch in sequence number
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Out of sequence I/O |
Workaround |
None |
|
7.6.1.5 |
iSCSI |
HU01210 |
SVC |
High Importance
|
A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail, preventing the system from joining a cluster
Symptom |
Loss of Redundancy |
Environment |
CG8 systems running v7.6.1 or later |
Trigger |
Broken or disabled TPM |
Workaround |
None |
|
7.6.1.5 |
|
HU01223 |
All |
High Importance
|
The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.3 or later using MetroMirror |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
Metro Mirror |
IT16337 |
SVC, V7000, V5000 |
High Importance
|
Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out, leading to a node warmstart. For more details refer to this Flash
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
|
HU00928 |
V7000 |
Suggested
|
For certain I/O patterns, a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to be failed
Symptom |
None |
Environment |
Storwize V7000 Gen 1 systems running v7.3 or later with large configurations |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
Drives |
HU01017 |
All |
Suggested
|
The results of CLI commands are sometimes not promptly presented in the GUI
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
In the GUI, navigate to "GUI Preferences" ("GUI Preferences->General" in v7.6 or later) and refresh the GUI cache. |
|
7.6.1.5 |
Graphical User Interface |
HU01024 |
V7000, V5000, V3700, V3500 |
Suggested
|
A single node warmstart may occur when the SAS firmware's ECC checking detects a single-bit error. The warmstart clears the error condition in the SAS chip
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
|
HU01074 |
All |
Suggested
|
An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
Avoid changes to the email notification feature |
|
7.6.1.5 |
System Monitoring |
HU01089 |
All |
Suggested
|
svcconfig backup fails when an I/O group name contains a hyphen
Symptom |
None |
Environment |
Systems running v7.4 or later |
Trigger |
I/O Group Name contains a hyphen character |
Workaround |
Amend the I/O group name to a string that does not contain hyphen, dot or white-space characters (see the sketch after this entry) |
|
7.6.1.5 |
|
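A minimal sketch of the HU01089 workaround, assuming the offending group is named io-grp0:
  svctask chiogrp -name iogrp0 io-grp0   # the new name contains no hyphen, dot or white-space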
HU01097 |
V5000, V3700, V3500 |
Suggested
|
For a small number of node warmstarts, the SAS registers retain incorrect values, rendering the debug information invalid
Symptom |
None |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
Support Data Collection |
HU01110 |
All |
Suggested
|
Spectrum Virtualize supports SSH connections using RC4-based ciphers
Symptom |
None |
Environment |
Systems running v7.5 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
|
HU01178 |
V7000 |
Suggested
|
Battery incorrectly reports zero percent charged
Symptom |
None |
Environment |
V7000 Gen 1 systems running v7.6.1 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
|
HU01194 |
All |
Suggested
|
A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later that are using vVols |
Trigger |
None |
Workaround |
None |
|
7.6.1.5 |
vVols |
HU01198 |
All |
Suggested
|
Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using Comprestimator |
Trigger |
Run svctask analyzevdiskbysystem |
Workaround |
Avoid using svctask analyzevdiskbysystem (see the sketch after this entry) |
|
7.6.1.5 |
Comprestimator |
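As an alternative to the system-wide HU01198 command, Comprestimator can be run per volume; a minimal sketch, assuming volume ID 3:
  svctask analyzevdisk 3
  svcinfo lsvdiskanalysis 3   # review the estimated savings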
HU01212 |
All |
Suggested
|
GUI displays an incorrect timezone description for Moscow
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use the CLI to check the timezone |
|
7.6.1.5 |
Graphical User Interface |
HU01214 |
All |
Suggested
|
The GUI and snap are missing EasyTier heatmap information
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Download the files individually via CLI |
|
7.6.1.5 |
Support Data Collection |
HU01227 |
All |
Suggested
|
High volumes of events may cause email notifications to stall
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
More than 15 events per second |
Workaround |
None |
|
7.6.1.5 |
System Monitoring |
HU01240 |
All |
Suggested
|
For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time
Symptom |
None |
Environment |
Systems running v7.3 or later |
Trigger |
No write I/O for >120 seconds |
Workaround |
Ensure relevant volume receives at least one write I/O per 120 second interval |
|
7.6.1.5 |
|
HU00990 |
All |
HIPER
|
A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.4 or later using Global Mirror |
Trigger |
Node warmstart on the secondary cluster |
Workaround |
Use GMCV |
|
7.6.1.4 |
Global Mirror |
HU01181 |
All |
HIPER
|
Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v6.4 or later that are using compressed volumes |
Trigger |
Compressed volumes larger than 96TiB |
Workaround |
Limit compressed volume capacity to 96TiB |
|
7.6.1.4 |
Compression |
HU01060 |
All |
High Importance
|
Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
Prior node warmstarts |
Workaround |
None |
|
7.6.1.4 |
FlashCopy |
HU01165 |
All |
High Importance
|
When an SE volume goes offline, both nodes may experience multiple warmstarts and go to service state
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later |
Trigger |
A space efficient volume going offline due to lack of free capacity |
Workaround |
None |
|
7.6.1.4 |
Thin Provisioning |
HU01180 |
All |
High Importance
|
When creating a snapshot on an ESX host, using vVols, a Tier 2 recovery may occur
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later |
Trigger |
Using the snapshot functionality in an ESX host on a vVol volume with insufficient FlashCopy bitmap capacity |
Workaround |
Ensure FlashCopy bitmap space is sufficient (use lsiogrp to determine and chiogrp to change; see the sketch after this entry) |
|
7.6.1.4 |
Hosts, vVols |
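A minimal sketch of the HU01180 workaround, assuming I/O group 0 and a target bitmap memory of 128 MB:
  svcinfo lsiogrp 0                          # check the FlashCopy memory fields
  svctask chiogrp -feature flash -size 128 0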
HU01046 |
SVC, V7000 |
Suggested
|
Free capacity is tracked using a count of free extents. If a child pool is shrunk, the counter can wrap, causing incorrect free capacity to be reported
Symptom |
None |
Environment |
SVC and V7000 systems running v7.5 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.4 |
Storage Virtualisation |
HU01072 |
All |
Suggested
|
In certain configurations, excessive throttling may result in dropped I/Os, which can lead to a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later using Throttling |
Trigger |
None |
Workaround |
Disable throttling using chvdisk |
|
7.6.1.4 |
Throttling |
HU01096 |
SVC, V7000 |
Suggested
|
Batteries may be seen to continuously recondition
Symptom |
None |
Environment |
DH8 & V7000 systems |
Trigger |
None |
Workaround |
Replace battery |
|
7.6.1.4 |
|
HU01104 |
SVC, V7000, V5000, V3700 |
Suggested
|
When using GMCV relationships, if a node in an I/O group loses communication with its partner, it may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Global Mirror with Change Volumes |
Trigger |
Node loses communication with partner |
Workaround |
Create a GM relationship using small volumes in each I/O group |
|
7.6.1.4 |
Global Mirror with Change Volumes |
HU01142 |
SVC, V7000, V5000 |
Suggested
|
Single node warmstart due to 16Gb HBA firmware receiving invalid FC frames
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
16G FC HBA receives invalid BA_ACC frame |
Workaround |
None |
|
7.6.1.4 |
|
HU01143 |
All |
Suggested
|
Where nodes are missing config files, some services will be prevented from starting
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Warmstart the config node |
|
7.6.1.4 |
|
HU01144 |
V7000 |
Suggested
|
Single node warmstart on the config node due to GUI contention
Symptom |
Single Node Warmstart |
Environment |
V7000 Gen 2 systems running v7.5 or later |
Trigger |
None |
Workaround |
Disable GUI |
|
7.6.1.4 |
Graphical User Interface |
IT15366 |
All |
Suggested
|
The CLI command lsportsas may show unexpected port numbering
Symptom |
None |
Environment |
Systems running v7.6.1 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.4 |
|
HU01118 |
V7000, V5000, V3700, V3500 |
HIPER
|
Due to a firmware issue, both nodes in a V7000 Gen 2 may be powered off
Symptom |
Loss of Access to Data |
Environment |
Storwize V3500, V3700, V5000 & V7000 Gen 2 systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.3 |
|
HU01053 |
All |
Critical
|
An issue in the drive automanage process during a replacement may result in a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or later |
Trigger |
Replacing a drive |
Workaround |
Avoid the drive automanage process: remove the failed drive and unmanage it, changing its use. Once it no longer appears in lsdrive, the new drive may be inserted and managed manually (see the sketch after this entry) |
|
7.6.1.3 |
Drives |
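A hedged sketch of the manual flow in the HU01053 workaround, assuming the failed drive has ID 4; the ID of the inserted drive is a placeholder:
  svctask chdrive -use unused 4      # take the failed drive out of automanagement
  svcinfo lsdrive                    # wait until drive 4 no longer appears
  svctask chdrive -use candidate <new_drive_id>   # then manage the new drive manually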
HU01016 & HU01088 |
All |
High Importance
|
Node warmstarts can occur when a port scan is received on port 1260
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later |
Trigger |
Invalid SSL connection to port 1260 |
Workaround |
Exclude port 1260 on systems from port scanning |
|
7.6.1.3 |
|
HU01090 |
All |
High Importance
|
Dual node warmstart due to an issue with the call home process
Symptom |
Loss of Access to Data |
Environment |
V9000 systems running v7.5 or later using call home |
Trigger |
None |
Workaround |
Disable call home |
|
7.6.1.3 |
|
HU01091 |
All |
High Importance
|
An issue with the CAW lock processing, under high SCSI-2 reservation workloads, may cause node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later presenting volumes to VMware hosts |
Trigger |
High SCSI-2 reservation workload |
Workaround |
Configure hosts to use VAAI reservations |
|
7.6.1.3 |
Hosts |
HU01112 |
SVC |
High Importance
|
When upgrading, the quorum lease times are not updated correctly, which may cause lease expiries on both nodes
Symptom |
Loss of Access to Data |
Environment |
V9000 systems running v7.6 or later |
Trigger |
Upgrade path 741x -> 751x -> 760 |
Workaround |
Update each quorum by running the chquorum command (see the sketch after this entry) |
|
7.6.1.3 |
|
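A hedged sketch of the HU01112 workaround, reassigning each quorum index reported by lsquorum; the MDisk ID is a placeholder:
  svcinfo lsquorum                       # note each quorum index and its current MDisk or drive
  svctask chquorum -mdisk <mdisk_id> 0   # repeat for quorum indexes 1 and 2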
IT14922 |
All |
High Importance
|
A memory issue, related to the email feature, may cause nodes to warmstart or go offline
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Stop using email temporarily |
|
7.6.1.3 |
|
HU01028 |
SVC |
Suggested
|
Processing of lsnodebootdrive output may adversely impact management GUI performance
Symptom |
None |
Environment |
SVC systems running v7.5 and earlier |
Trigger |
None |
Workaround |
Restart tomcat service |
|
7.6.1.3 |
Graphical User Interface |
HU01030 |
All |
Suggested
|
Incremental FlashCopy always requires a full copy
Symptom |
None |
Environment |
Systems running v6.3 or later using FlashCopy |
Trigger |
None |
Workaround |
Remove the affected FC map from the consistency group and then add it back |
|
7.6.1.3 |
FlashCopy |
HU01042 |
SVC, V7000, V5000 |
Suggested
|
Single node warmstart due to 16Gb HBA firmware behaviour
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.6.1.3 |
|
HU01052 |
SVC |
Suggested
|
GUI operation with large numbers of volumes may adversely impact performance
Symptom |
Performance |
Environment |
SVC systems with CG8 model nodes or older running v7.4 or later |
Trigger |
Configurations with very high numbers of volumes |
Workaround |
Disable GUI |
|
7.6.1.3 |
Performance |
HU01059 |
SVC, V7000, V5000, V3700 |
Suggested
|
When a tier in a storage pool runs out of free extents, EasyTier can adversely affect performance
Symptom |
Performance |
Environment |
Systems running v7.5 or later with EasyTier enabled |
Trigger |
A tier in a storage pool runs out of free extents |
Workaround |
Ensure tiers do not run out of capacity |
|
7.6.1.3 |
EasyTier |
HU01064 |
SVC, V7000, V5000, V3700 |
Suggested
|
Management GUI incorrectly displays FC mappings that are part of GMCV relationships
Symptom |
None |
Environment |
Systems running v7.4 or later |
Trigger |
Node warmstart |
Workaround |
None |
|
7.6.1.3 |
Graphical User Interface |
HU01076 |
All |
Suggested
|
Where hosts share volumes using a particular reservation method, if the maximum number of reservations is exceeded, this may result in a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later |
Trigger |
Exceed maximum number of SCSI reservations for a volume |
Workaround |
Verify all hosts sharing volumes are using the correct reservation method |
|
7.6.1.3 |
Hosts |
HU01080 |
All |
Suggested
|
Single node warmstart due to an I/O timeout in cache
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.3 |
Cache |
HU01081 |
All |
Suggested
|
When removing multiple nodes from a cluster, a remaining node may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later presenting volumes to VMware hosts |
Trigger |
Issue two rmnode commands in rapid sequence |
Workaround |
Pause between rmnode commands |
|
7.6.1.3 |
|
HU01087 |
SVC, V7000, V5000, V3700 |
Suggested
|
With a partnership stopped at the remote site, the stop button in the GUI at the local site will be disabled
Symptom |
None |
Environment |
Systems running v7.3 or later using remote copy partnerships |
Trigger |
Stop partnership on remote cluster |
Workaround |
Use the CLI to stop the partnership (see the sketch after this entry) |
|
7.6.1.3 |
Global Mirror, Metro Mirror |
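A minimal sketch of the HU01087 workaround, assuming the partnership is with a cluster named cluster_B:
  svctask chpartnership -stop cluster_B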
HU01092 |
All |
Suggested
|
Systems that have undergone particular upgrade paths may be blocked from upgrading to v7.6
Symptom |
None |
Environment |
Systems running v7.5 and earlier |
Trigger |
Upgrade to v7.6.0 or later |
Workaround |
None |
|
7.6.1.3 |
System Update |
HU01094 |
All |
Suggested
|
Single node warmstart due to rare resource locking contention
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later presenting volumes to VMware hosts |
Trigger |
None |
Workaround |
None |
|
7.6.1.3 |
Hosts |
HU01100 |
All |
Suggested
|
License information is not shown in the GUI after upgrading to v7.6.0.3
Symptom |
None |
Environment |
Systems running v7.6.0.3 or later |
Trigger |
Upgrade to v7.6.0.3 or later |
Workaround |
Access licence info using CLI |
|
7.6.1.3 |
|
HU01051 |
All |
HIPER
|
Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to the secondary cluster
Symptom |
Performance |
Environment |
Systems running v7.4 or later using a large number of Global Mirror relationships |
Trigger |
None |
Workaround |
None |
|
7.6.1.1 |
Global Mirror |
HU01103 |
SVC, V7000 |
HIPER
|
A specific drive type may insufficiently report media events, causing a delay in failure handling
Symptom |
Data Integrity Loss |
Environment |
SVC & V7000 Gen 1 systems using drive type HK230041S |
Trigger |
None |
Workaround |
None |
|
7.6.1.1 |
Backend Storage |
HU01062 |
All |
High Importance
|
Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.4 or later using Global Mirror |
Trigger |
None |
Workaround |
Set max_replication_delay value to 0 |
|
7.6.1.1 |
Global Mirror |
HU01067 |
SVC, V7000, V5000 |
High Importance
|
In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using HyperSwap |
Trigger |
None |
Workaround |
If possible, change host settings so I/O is directed only to a single volume copy (i.e. a single I/O group) for each volume |
|
7.6.1.1 |
HyperSwap |
HU01086 |
SVC |
High Importance
|
SVC reports incorrect SCSI TPGS data in an 8-node cluster, causing host multi-pathing software to receive errors, which may result in host outages
Symptom |
Loss of Access to Data |
Environment |
Clusters with 8 nodes running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.6.1.1 |
Hosts |
HU01073 |
SVC |
Suggested
|
SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page
Symptom |
None |
Environment |
CG8 model systems running v7.5 or later |
Trigger |
I/O group name contains '-' |
Workaround |
Use "svcinfo lsdrive" to get internal SSDs status. |
|
7.6.1.1 |
System Monitoring |
HU01023 |
All |
Suggested
|
Remote Copy services do not transfer data after upgrade to v7.6
Symptom |
None |
Environment |
Systems upgrading to v7.6 that will be using Remote Copy |
Trigger |
I/O group which has had no RC relationships exists. Cluster upgrade to v7.6.x. RC relationship created in the I/O group. |
Workaround |
Node reset in the secondary (aux) cluster removes the problem condition and, once corrected, the problem cannot return |
|
7.6.1.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01043 |
SVC |
Suggested
|
Long pause when upgrading
Symptom |
None |
Environment |
DH8 clusters upgrading to v7.5, or later, that are using compressed volumes |
Trigger |
None |
Workaround |
None |
|
7.6.1.0 |
Compression, System Update |
HU01027 |
SVC, V7000 |
High Importance
|
Single node warmstart, or unresponsive GUI, when creating compressed volumes
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later |
Trigger |
Create a compressed volume |
Workaround |
Do not use compressed volumes or disable the GUI |
|
7.6.0.4 |
Compression |
HU01056 |
All |
High Importance
|
Both nodes in the same I/O group warmstart when using vVols
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later that are using vVols |
Trigger |
VMware ESX generates an unsupported (i.e. non-zero) Select Report on Report LUNs |
Workaround |
None |
|
7.6.0.4 |
vVols |
HU01069 |
SVC |
High Importance
|
After upgrading from v7.5 or earlier to v7.6.0 or later, all nodes may warmstart at the same time, resulting in a Tier 2 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
DH8 systems running v7.5 or earlier |
Trigger |
Upgrading from v7.5 or earlier to v7.6 or later |
Workaround |
None |
|
7.6.0.4 |
System Update |
HU01029 |
SVC |
Suggested
|
Where a boot drive has been replaced with a new, unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP, or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI
Symptom |
Single Node Warmstart |
Environment |
DH8 clusters running v7.3 or later |
Trigger |
Replace boot drive with new, uninitialised drive. Attempt to login to service assistant as "superuser" |
Workaround |
Send the necessary service commands to the node from another node in the cluster. Put the node into service state and resync the drive |
|
7.6.0.4 |
|
HU01034 |
All |
Suggested
|
Single node warmstart stalls upgrade
Symptom |
Single Node Warmstart |
Environment |
Systems presenting storage to large VMware configurations using SCSI CAW |
Trigger |
Upgrading from v7.5 or earlier to v7.6 |
Workaround |
None |
|
7.6.0.3 |
System Update |
HU00980 |
SVC, V7000 |
Critical
|
Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.3 or later that are using compressed volumes |
Trigger |
None |
Workaround |
None |
|
7.6.0.2 |
Compression |
HU00926 & HU00989 |
V7000, V5000, V3700, V3500 |
High Importance
|
Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Storwize systems running v7.3 or later |
Trigger |
Drive replacement when there is no I/O to the corresponding array |
Workaround |
None |
|
7.6.0.2 |
|
HU01032 |
SVC |
High Importance
|
Batteries going on- and offline can take a node offline
Symptom |
Single Node Warmstart |
Environment |
DH8 systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
7.6.0.2 |
|
HU00899 |
All |
Suggested
|
Node warmstart observed when a 16G FC or 10G FCoE adapter detects heavy network congestion
Symptom |
Single Node Warmstart |
Environment |
Systems presenting to hosts using FCoE |
Trigger |
Severe network congestion on SAN |
Workaround |
Check SAN |
|
7.6.0.2 |
Hosts |
HU00936 |
All |
Suggested
|
During the volume repair process, the compression engine restores a larger amount of data than required, leading to the volume going offline
Symptom |
Offline Volumes |
Environment |
Systems running v7.3 or later that are using compressed volumes |
Trigger |
None |
Workaround |
None |
|
7.6.0.2 |
Compression |
HU00970 |
All |
High Importance
|
Node warmstart when upgrading to v7.6.0.0 with volumes using more than 65536 extents
Symptom |
Multiple Node Warmstarts |
Environment |
Systems upgrading to v7.6.0.0 |
Trigger |
Volumes with more than 65536 extents |
Workaround |
None |
|
7.6.0.1 |
|
HU00935 |
All |
Suggested
|
A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
7.6.0.1 |
FlashCopy |
HU00733 |
All |
HIPER
|
Stopping remote copy with -access results in node warmstarts after a recovervdiskbysystem command
Symptom |
Multiple Node Warmstarts |
Environment |
All |
Trigger |
Stopping RC with -access. Remove nodes from I/O group. Run a recovervdisk command. |
Workaround |
Avoid the use of recovervdisk when any remote copy is stopped with -access |
|
7.6.0.0 |
Global Mirror |
HU00749 |
All |
HIPER
|
Multiple node warmstarts in an I/O group after starting Remote Copy
Symptom |
Multiple Node Warmstarts |
Environment |
All |
Trigger |
Start RC |
Workaround |
None |
|
7.6.0.0 |
Global Mirror |
HU00757 |
All |
HIPER
|
Multiple node warmstarts when removing a Global Mirror relationship with a secondary volume that has been offline
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Global Mirror |
Trigger |
A Global Mirror relationship is removed after the secondary volume has gone offline in the past |
Workaround |
Prevent secondary volumes from going offline (e.g. due to running out of space) |
|
7.6.0.0 |
Global Mirror |
HU00819 |
All |
HIPER
|
Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.4 or later using Global Mirror |
Trigger |
Connectivity issues between the primary and secondary cluster |
Workaround |
Use a lower link tolerance setting to prevent host impact on the primary |
|
7.6.0.0 |
Global Mirror |
HU00992 |
V7000, V5000, V3700, V3500 |
HIPER
|
Multiple node warmstarts and offline MDisk group during an array resync process
Symptom |
Loss of Access to Data |
Environment |
Storwize systems |
Trigger |
Array resync |
Workaround |
None |
|
7.6.0.0 |
|
HU00996 |
V5000, V3700, V3500 |
HIPER
|
Tier 2 system recovery when running svctask chenclosure
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or later |
Trigger |
Run "chenclosurecanister -reset -canister" command |
Workaround |
Replace enclosure |
|
7.6.0.0 |
|
HU01004 |
All |
HIPER
|
Multiple node warmstarts when space-efficient volumes are running out of capacity
Symptom |
Multiple Node Warmstarts |
Environment |
All |
Trigger |
Space Efficient volumes running out of space |
Workaround |
Monitor space efficient volume capacity |
|
7.6.0.0 |
Cache |
HU00922 |
All |
Critical
|
Loss of access to data when moving volumes to another I/O group using the GUI
Symptom |
Loss of Access to Data |
Environment |
Systems with multiple I/O groups |
Trigger |
Moving volumes between I/O groups using the GUI |
Workaround |
Use the CLI to move volumes between I/O groups (see the sketch after this entry) |
|
7.6.0.0 |
Graphical User Interface |
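A hedged sketch of the HU00922 workaround on a system with non-disruptive volume move (v7.3 or later), assuming volume vdisk0 moves from I/O group 0 to 1:
  svctask addvdiskaccess -iogrp 1 vdisk0
  svctask movevdisk -iogrp 1 vdisk0
  svctask rmvdiskaccess -iogrp 0 vdisk0   # only after hosts have discovered paths to the new I/O group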
HU00967 |
All |
Critical
|
Multiple warmstarts due to a FlashCopy background copy limitation, putting both nodes in service state
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
FlashCopy |
Workaround |
None |
|
7.6.0.0 |
FlashCopy |
HU00740 |
All |
High Importance
|
Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node
Symptom |
Performance |
Environment |
Systems running v7.3 or later with EasyTier enabled that have many extents (>1 million) |
Trigger |
High volume workloads |
Workaround |
Disable EasyTier |
|
7.6.0.0 |
EasyTier |
HU00823 |
All |
High Importance
|
Node warmstart due to inconsistent EasyTier status when EasyTier is disabled on all managed disk groups
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later with EasyTier managed MDisk groups |
Trigger |
EasyTier is disabled on all managed disk groups in the cluster |
Workaround |
Leave EasyTier enabled on at least one managed disk group |
|
7.6.0.0 |
EasyTier |
HU00827 |
All |
High Importance
|
Both nodes in a single I/O group of a multi-I/O-group system can warmstart due to misallocation of volume stats entries
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with more than one I/O group |
Trigger |
Volume moves between I/O groups |
Workaround |
None |
|
7.6.0.0 |
|
HU00838 |
All |
High Importance
|
FlashCopy volume offline due to a cache flush issue
Symptom |
Offline Volumes |
Environment |
Systems using GMCV |
Trigger |
None |
Workaround |
Clear the pause suspended state before restarting GMCV |
|
7.6.0.0 |
Global Mirror with Change Volumes |
HU00840 |
All |
High Importance
|
Node warmstarts when the Spectrum Virtualize iSCSI target receives garbled packets
Symptom |
Multiple Node Warmstarts |
Environment |
Systems presenting volumes to VMware hosts |
Trigger |
Corrupt iSCSI packets from host |
Workaround |
Disable TSO |
|
7.6.0.0 |
iSCSI |
HU00900 |
All |
High Importance
|
The SVC FC driver warmstarts when it receives an unsupported but valid FC command
Symptom |
Multiple Node Warmstarts |
Environment |
Systems presenting to hosts that issue LIRR commands |
Trigger |
Host sends LIRR command |
Workaround |
Isolate hosts sending LIRR commands from SVC |
|
7.6.0.0 |
Hosts |
HU00908 |
V7000 |
High Importance
|
A battery can charge too quickly on reconditioning and take a node offline
Symptom |
Loss of Redundancy |
Environment |
V7000 Gen 2 systems |
Trigger |
Battery temperature exceeds 48 degC |
Workaround |
Reduce data centre ambient temperature |
|
7.6.0.0 |
|
HU00913 |
SVC, V7000, V5000, V3700 |
High Importance
|
Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Metro Mirror, Global Mirror or Global Mirror with Change Volumes |
Trigger |
I/O to a global mirror volume that is larger than 128TB |
Workaround |
Keep volume size to less than 128TB when using replication services |
|
7.6.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU00915 |
SVC, V7000, V5000, V3700 |
High Importance
|
Loss of access to data when removing volumes associated with a GMCV relationship
Symptom |
Loss of Access to Data |
Environment |
Systems using GMCV |
Trigger |
Deleting a volume in a GMCV relationship, if the volume and the RC controlled FlashCopy map in this relationship have the same ID |
Workaround |
Use chrcrelationship -nomasterchange or -noauxchange <REL> before removing volumes (see the sketch after this entry) |
|
7.6.0.0 |
Global Mirror with Change Volumes |
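A minimal sketch of the HU00915 workaround, assuming relationship rcrel0; use -noauxchange instead when the auxiliary volume is the one being removed:
  svctask chrcrelationship -nomasterchange rcrel0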
HU00923 |
SVC, V7000, V5000 |
High Importance
|
Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb FC adapters |
Trigger |
When error conditions are present in the fibre channel network |
Workaround |
None |
|
7.6.0.0 |
Reliability Availability Serviceability |
HU00982 |
SVC |
High Importance
|
Single node warmstart when a software update is attempted on some DH8 nodes
Symptom |
Single Node Warmstart |
Environment |
DH8 nodes containing a particular boot drive |
Trigger |
Software upgrade on a cluster containing DH8 nodes with model AL13SEB300 boot drive. Note that not all drives of this model exhibit this issue |
Workaround |
Use the manual software upgrade procedure, documented in the Knowledge Center, to upgrade to a fixed software version |
|
7.6.0.0 |
System Update |
HU00991 |
All |
High Importance
|
Performance impact on read pre-fetch workloads
Symptom |
Performance |
Environment |
Systems running v7.3 or later |
Trigger |
Sequential read I/O workloads smaller than 32K |
Workaround |
Tune application sequential read I/O to 32K or greater |
|
7.6.0.0 |
Performance |
HU00995 |
V5000, V3700, V3500 |
High Importance
|
Problems with delayed I/O cause multiple node warmstarts
Symptom |
Loss of Redundancy |
Environment |
Storwize systems |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00999 |
All |
High Importance
|
FlashCopy volumes may go offline during an upgrade
Symptom |
Offline Volumes |
Environment |
Systems using FlashCopy |
Trigger |
Upgrade |
Workaround |
Stop FC maps before upgrade |
|
7.6.0.0 |
FlashCopy |
HU01001 |
All |
High Importance
|
The CCU checker causes both nodes to warmstart
Symptom |
Multiple Node Warmstarts |
Environment |
All |
Trigger |
CCU Checker v15.7 |
Workaround |
Use latest version of CCU checker |
|
7.6.0.0 |
|
HU01002 |
SVC, V7000 |
High Importance
|
A 16Gb HBA causes multiple node warmstarts when unexpected FC frame content is received
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with 16Gb Fibre Channel HBAs |
Trigger |
Unexpected response received in FC frame from controller |
Workaround |
None |
|
7.6.0.0 |
Reliability Availability Serviceability |
HU01003 |
SVC |
High Importance
|
An extremely rapid increase in read IOs, on a single volume, can make it difficult for the cache component to free sufficient memory quickly enough to keep up, resulting in node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
DH8 model systems running v7.5 or later presenting storage from FlashSystem controllers |
Trigger |
An extremely rapid increase in read IOs on a single volume |
Workaround |
Disable the cache on volumes which are likely to exhibit this workload pattern |
|
7.6.0.0 |
Cache |
HU01005 |
SVC |
High Importance
|
Unable to remove ghost MDisks
Symptom |
None |
Environment |
SVC |
Trigger |
T3 recovery |
Workaround |
None |
|
7.6.0.0 |
Backend Storage |
HU01434 |
All |
High Importance
|
A node port can become excluded when its login status changes, leading to a load imbalance across available local ports
Symptom |
Loss of Redundancy |
Environment |
All systems |
Trigger |
Changes to port login status |
Workaround |
None |
|
7.6.0.0 |
Backend Storage |
II14778 |
All |
High Importance
|
Reduced performance for volumes that have the configuration node as their preferred node, due to the GUI processing volume attribute updates when a large number of changes is required
Symptom |
Performance |
Environment |
Systems that have a large number of volumes (1000+) |
Trigger |
Using the GUI when a large number of volumes have regularly changing attributes |
Workaround |
Disable GUI |
|
7.6.0.0 |
Performance |
HU00470 |
All |
Suggested
|
Single node warmstart when a login is attempted with an incorrect password
Symptom |
Single Node Warmstart |
Environment |
All |
Trigger |
Log in with incorrect encrypted password |
Workaround |
None |
|
7.6.0.0 |
|
HU00536 |
All |
Suggested
|
When stopping a GMCV relationship, the clean-up process at the secondary site can hang to the point of a primary node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
Long-running FlashCopy maps where the source is a GMCV secondary volume |
Workaround |
Avoid having long-running FlashCopy maps where the source is the GMCV secondary volume. |
|
7.6.0.0 |
FlashCopy |
HU00649 |
V3700, V3500 |
Suggested
|
In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but most likely set by DHCP during boot
Symptom |
None |
Environment |
All V3500 and V3700 systems |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00732 |
All |
Suggested
|
Single node warmstart due to stalled Remote Copy recovery as a result of pinned write I/Os on an incorrect queue
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.2 or later |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00746 |
V7000, V5000, V3700, V3500 |
Suggested
|
Single node warmstart during a synchronisation process of the RAID array
Symptom |
Single Node Warmstart |
Environment |
Systems with RAID arrays configured |
Trigger |
RAID array synchronisation process is initiated |
Workaround |
None |
|
7.6.0.0 |
|
HU00756 |
SVC, V7000 |
Suggested
|
Performance statistics BBCZ counter values reported incorrectly
Symptom |
None |
Environment |
Systems using 16GB FC adapters |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
System Monitoring |
HU00794 |
All |
Suggested
|
A hang-up of a GM I/O stream can affect MM I/O in another remote copy stream
Symptom |
None |
Environment |
Systems using MM with GM |
Trigger |
Increased workload on secondary cluster |
Workaround |
Do not use both GM and MM in the same I/O group, or use GMCV in place of GM |
|
7.6.0.0 |
Global Mirror, Metro Mirror |
HU00842 |
V7000 |
Suggested
|
Unable to clear bad blocks during an array resync process
Symptom |
None |
Environment |
V7000 systems |
Trigger |
Array resync |
Workaround |
Create volume using the -fmtdisk option |
|
7.6.0.0 |
|
HU00890 |
V7000 |
Suggested
|
Technician port inittool redirects to SAT GUI
Symptom |
Configuration |
Environment |
OEM V7000 Gen 2 systems |
Trigger |
Initialise cluster via the technician port |
Workaround |
Use SAT GUI or USB stick to initialise the cluster |
|
7.6.0.0 |
Graphical User Interface |
HU00903 |
SVC |
Suggested
|
An Emulex firmware pause causes a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
SVC |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00909 |
All |
Suggested
|
Single node warmstart may occur when removing an MDisk group that was using EasyTier
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later using EasyTier |
Trigger |
Removing an MDisk group |
Workaround |
Disable EasyTier on the MDisk group before removing it |
|
7.6.0.0 |
EasyTier |
HU00973 |
All |
Suggested
|
Single node warmstart when concurrently creating new volume host mappings
Symptom |
Single Node Warmstart |
Environment |
All |
Trigger |
Concurrent host mappings for different hosts |
Workaround |
Avoid creating new host mappings for different hosts at the same time |
|
7.6.0.0 |
Hosts |
HU00975 |
All |
Suggested
|
Single node warmstart due to a race-condition reordering of the background process that allocates I/O blocks
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
FlashCopy |
HU00993 |
V7000 |
Suggested
|
Event ID 1052 and ID 1032 entries in the event log are not being cleared
Symptom |
None |
Environment |
V7000 systems |
Trigger |
When replacing node canisters |
Workaround |
None |
|
7.6.0.0 |
|
HU00994 |
V7000, V5000, V3700, V3500 |
Suggested
|
Continual VPD updates
Symptom |
None |
Environment |
Storwize systems |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00997 |
V7000 |
Suggested
|
Single node warmstart on PCI events
Symptom |
Single Node Warmstart |
Environment |
V7000 systems |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
|
HU00998 |
SVC |
Suggested
|
Support for Fujitsu Eternus DX100 S3 controller
Symptom |
None |
Environment |
Systems presenting storage from Eternus DX100 S3 controllers |
Trigger |
None |
Workaround |
None |
|
7.6.0.0 |
Backend Storage |
HU01000 |
All |
Suggested
|
SNMP and Call Home stop working when a node reboots and the Ethernet link is down
Symptom |
None |
Environment |
All systems |
Trigger |
Node service IP offline prior to reboot |
Workaround |
Use "chclusterip" command to manually set the config |
|
7.6.0.0 |
System Monitoring |
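A hedged sketch of the HU01000 workaround, assuming port 1 and placeholder addresses:
  svctask chclusterip -port 1 -ip 192.0.2.5 -mask 255.255.255.0 -gw 192.0.2.1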
HU01006 |
SVC |
Suggested
|
Volumes hosted on Hitachi controllers show high latency due to high I/O concurrency
Symptom |
None |
Environment |
Systems presenting storage from Hitachi controllers |
Trigger |
SCSI initiator sends concurrent I/O that exceeds queue depth maximum of 32 |
Workaround |
Reduce queue depth per volume |
|
7.6.0.0 |
Backend Storage |
HU01007 |
All |
Suggested
|
When a node warmstart, caused by an issue within FlashCopy, occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later using GMCV |
Trigger |
FlashCopy mapping where source is a primary volume in a GMCV relationship |
Workaround |
Avoid creating FlashCopy mappings where the source is a primary volume in a GMCV relationship |
|
7.6.0.0 |
FlashCopy, Global Mirror with Change Volumes |
HU01008 |
SVC |
Suggested
|
Single node warmstart during code upgrade
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
Upgrade |
Workaround |
None |
|
7.6.0.0 |
System Update |
HU01009 |
V7000 |
Suggested
|
Continual increase in fan speeds after replacement
Symptom |
None |
Environment |
V7000 Gen 2 systems |
Trigger |
None |
Workaround |
Temporarily fixed using the "chenclosurecanister -reset -canister" CLI command. E.g. to reset canister 1 of enclosure 3, enter: chenclosurecanister -reset -canister 1 3. Note: the command should only be used on one node canister at a time, with a pause of 30 minutes between resets of node canisters in the same control enclosure. Customers running v7.5 code levels should be aware of HU00996 |
|
7.6.0.0 |
|
IC85931 |
All |
Suggested
|
When the user is copying iostats files between nodes, the automatic clean-up process may occasionally result in a failure message (ID 980440) in the event log
Symptom |
None |
Environment |
Systems monitored by bespoke performance tools |
Trigger |
Copying iostats files between nodes |
Workaround |
None |
|
7.6.0.0 |
|
IT10251 |
All |
Suggested
|
Freeze time update delayed after reduction of cycle period
Symptom |
None |
Environment |
Systems using GMCV |
Trigger |
Set cycle period to maximum and then set it to minimum |
Workaround |
Stop the relationship, wait 5 seconds and then start the relationship (see the sketch after this entry) |
|
7.6.0.0 |
Global Mirror with Change Volumes |
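A minimal sketch of the IT10251 workaround, assuming GMCV relationship rcrel0:
  svctask stoprcrelationship rcrel0
  (wait 5 seconds)
  svctask startrcrelationship rcrel0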
IT10470 |
V5000, V3700, V3500 |
Suggested
|
Noisy/high-speed fan
Symptom |
None |
Environment |
V3000 & V5000 systems |
Trigger |
None |
Workaround |
Temporarily fixed using the "chenclosurecanister -reset -canister" CLI command. E.g. to reset canister 1 of enclosure 3, enter: chenclosurecanister -reset -canister 1 3. Note: the command should only be used on one node canister at a time, with a pause of 30 minutes between resets of node canisters in the same control enclosure. Customers running v7.5 code levels should be aware of HU00996 |
|
7.6.0.0 |
|