HU01193 |
All |
HIPER
|
A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
Drive failure |
Workaround |
None |
|
7.7.0.5 |
Distributed RAID |
HU01447 |
All |
HIPER
|
The management of FlashCopy grains during a restore process can miss some IOs
Symptom |
Data Integrity Loss |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
FlashCopy |
HU01340 |
All |
Critical
|
A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or earlier |
Trigger |
Upgrade to v7.7.x |
Workaround |
None |
|
7.7.0.5 |
System Update |
HU01109 |
SVC, V7000, V5000 |
High Importance
|
Multiple nodes can experience a lease expiry when an FC port is having communication issues
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
Reliability Availability Serviceability |
HU01184 |
All |
High Importance
|
When removing multiple MDisks, node warmstarts may occur
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later |
Trigger |
Multiple rmmdisk commands issued in rapid succession |
Workaround |
Remove MDisks one at a time and let migration complete before proceeding to next MDisk removal |
|
7.7.0.5 |
|
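The workaround for HU01184 above can be followed from the CLI; this is a hedged sketch (the MDisk and pool names are hypothetical, and it assumes lsmigrate reports in-flight extent migrations):

```shell
# Remove a single MDisk, then poll until its extent migration completes
# before removing the next one; mdisk5/mdiskgrp0 are example names
svctask rmmdisk -mdisk mdisk5 mdiskgrp0

# lsmigrate prints one "migrate_type ..." stanza per active migration
while svcinfo lsmigrate | grep -q migrate_type; do
  sleep 60
done
```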
HU01185 |
All |
High Importance
|
The iSCSI target closes the connection when there is a sequence number mismatch
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Out of sequence I/O |
Workaround |
None |
|
7.7.0.5 |
iSCSI |
HU01221 |
SVC, V7000, V5000 |
High Importance
|
Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
|
HU01223 |
All |
High Importance
|
The handling of a rebooted node's return to the cluster can occasionally be delayed, resulting in a stoppage of inter-cluster relationships
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.3 or later using MetroMirror |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
Metro Mirror |
HU01226 |
All |
High Importance
|
Changing max replication delay from the default to a small non-zero number can cause hung IOs, leading to multiple node warmstarts and a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using Global Mirror |
Trigger |
Changing max replication delay to a small non-zero number |
Workaround |
Do not change max replication delay to below 30 seconds |
|
7.7.0.5 |
Global Mirror |
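Per the workaround for HU01226, keep the delay at or above 30 seconds if it is changed at all. A hedged sketch of the relevant CLI (the value shown is only the suggested floor, not a tuning recommendation):

```shell
# Keep max replication delay no lower than the suggested 30-second floor
# (a value of 0 disables the feature entirely)
svctask chsystem -maxreplicationdelay 30
```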
HU01402 |
V7000 |
High Importance
|
Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
Symptom |
Loss of Redundancy |
Environment |
V7000 Gen 1 systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
Reliability Availability Serviceability |
IT16012 |
SVC |
High Importance
|
Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance
Symptom |
Performance |
Environment |
Systems running v7.3 or later |
Trigger |
Internal node boot drive RAID scrub process |
Workaround |
Try to avoid performing high I/O workloads (including copy services) at 1am on Sundays |
|
7.7.0.5 |
Performance |
HU01017 |
All |
Suggested
|
The results of CLI commands are sometimes not promptly presented in the GUI
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
In the GUI, navigate to "GUI Preferences" ("GUI Preferences->General" in v7.6 or later) and refresh the GUI cache. |
|
7.7.0.5 |
Graphical User Interface |
HU01050 |
All |
Suggested
|
DRAID rebuild incorrectly reports event code 988300
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
Distributed RAID |
HU01187 |
All |
Suggested
|
Circumstances can arise where more than one array rebuild operation shares the same CPU core, resulting in extended completion times
Symptom |
Performance |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
Avoid R5 array configurations |
|
7.7.0.5 |
|
HU01198 |
All |
Suggested
|
Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using Comprestimator |
Trigger |
Run svctask analyzevdiskbysystem |
Workaround |
Avoid using svctask analyzevdiskbysystem |
|
7.7.0.5 |
Comprestimator |
HU01213 |
All |
Suggested
|
The LDAP password is visible in the auditlog
Symptom |
None |
Environment |
Systems running v7.7 or later |
Trigger |
Setup LDAP authentication |
Workaround |
None |
|
7.7.0.5 |
LDAP |
HU01214 |
All |
Suggested
|
GUI and snap missing EasyTier heatmap information
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Download the files individually via CLI |
|
7.7.0.5 |
Support Data Collection |
HU01219 |
SVC, V7000, V5000 |
Suggested
|
Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
|
HU01227 |
All |
Suggested
|
High volumes of events may cause the email notifications to become stalled
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
More than 15 events per second |
Workaround |
None |
|
7.7.0.5 |
System Monitoring |
HU01234 |
All |
Suggested
|
After upgrade to v7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI
Symptom |
None |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Upgrade to v7.6 or later from v7.5 or earlier |
Workaround |
None |
|
7.7.0.5 |
iSCSI |
HU01247 |
All |
Suggested
|
When a FlashCopy consistency group is stopped more than once in rapid succession, a node warmstart may result
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
Stop the same FlashCopy consistency group twice in rapid succession using -force on second attempt |
Workaround |
Avoid stopping the same flash copy consistency group more than once |
|
7.7.0.5 |
FlashCopy |
HU01269 |
All |
Suggested
|
A rare timing conflict between two processes may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
|
HU01292 |
All |
Suggested
|
Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7.0 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
FlashCopy |
HU01374 |
All |
Suggested
|
Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7.0 or later using Global Mirror |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
Global Mirror |
HU01399 |
All |
Suggested
|
For certain config nodes the CLI Help commands may not work
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use Knowledge Center |
|
7.7.0.5 |
Command Line Interface |
IT17302 |
V5000, V3700, V3500 |
Suggested
|
Unexpected 45034 1042 entries in the Event Log
Symptom |
None |
Environment |
Systems running v7.7 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.5 |
System Monitoring |
IT16337 |
SVC, V7000, V5000 |
High Importance
|
Hardware offloading in 16Gb FC adapters has introduced a deadlock condition that causes many driver commands to time out, leading to node warmstarts. For more details refer to this Flash
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.7.0.4 |
|
HU01258 |
SVC |
Suggested
|
A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration
Symptom |
None |
Environment |
SVC systems running v7.4 or later in a stretched cluster configuration |
Trigger |
Site/node failover |
Workaround |
None |
|
7.7.0.4 |
Compression |
IT17102 |
All |
Suggested
|
Where the maximum number of I/O requests for an FC port has been exceeded, if a SCSI command with an unsupported opcode is received from a host then the node may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.4 |
|
HU00271 |
All |
High Importance
|
An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using GM |
Trigger |
None |
Workaround |
None |
|
7.7.0.3 |
Global Mirror |
HU01140 |
All |
High Importance
|
EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
Symptom |
Performance |
Environment |
Systems running v7.3 or later using EasyTier |
Trigger |
None |
Workaround |
Add Enterprise-class drives to the MDisk or MDisk group that is experiencing unbalanced workloads |
|
7.7.0.3 |
EasyTier |
HU01141 |
All |
High Importance
|
A node warmstart (possibly due to a network problem) can occur when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using IP Replication |
Trigger |
Enter mkippartnership CLI command |
Workaround |
Ensure partner cluster IP can be pinged before issuing a mkippartnership CLI command |
|
7.7.0.3 |
IP Replication |
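Following the workaround for HU01141, confirm the partner cluster IP is reachable before creating the partnership. A sketch with a placeholder address and an example bandwidth value:

```shell
# Verify reachability of the (hypothetical) partner cluster IP first,
# e.g. from a management host, before creating the IP partnership
ping -c 3 192.0.2.10 || exit 1
svctask mkippartnership -type ipv4 -clusterip 192.0.2.10 \
  -linkbandwidthmbits 100
```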
HU01182 |
SVC, V7000, V5000 |
High Importance
|
Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
16G FC HBA receives a SCSI TUR command with Total XFER LEN > 0 |
Workaround |
None |
|
7.7.0.3 |
|
HU01183 |
SVC, V7000, V5000 |
High Importance
|
Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.4 or later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
7.7.0.3 |
|
HU01210 |
SVC |
High Importance
|
A small number of systems have broken or disabled TPMs. For these systems the generation of a new master key may fail, preventing the system from joining a cluster
Symptom |
Loss of Redundancy |
Environment |
CG8 systems running v7.6.1 or later |
Trigger |
Broken or disabled TPM |
Workaround |
None |
|
7.7.0.3 |
|
HU01024 |
V7000, V5000, V3700, V3500 |
Suggested
|
A single node warmstart may occur when the SAS firmware's ECC checking detects a single-bit error. The warmstart clears the error condition in the SAS chip
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.3 |
|
HU01097 |
V5000, V3700, V3500 |
Suggested
|
For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid
Symptom |
None |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.3 |
Support Data Collection |
HU01194 |
All |
Suggested
|
A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later that are using vVols |
Trigger |
None |
Workaround |
None |
|
7.7.0.3 |
vVols |
HU01212 |
All |
Suggested
|
GUI displays an incorrect timezone description for Moscow
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use CLI to track |
|
7.7.0.3 |
Graphical User Interface |
HU01208 |
All |
HIPER
|
After upgrading to v7.7 or later from v7.5 or earlier, creating a DRAID array and then resetting a node may cause repeated node warmstarts, which will require a Tier 3 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5.0.0 or earlier |
Trigger |
Code upgrade, create DRAID array and then reset nodes |
Workaround |
None |
|
7.7.0.2 |
Distributed RAID |
HU01192 |
V7000 |
Suggested
|
Some V7000 Gen 1 systems have an unexpected WWNN value, which can cause a single node warmstart when upgrading to v7.7
Symptom |
Single Node Warmstart |
Environment |
V7000 Gen 1 systems running v7.7 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.1 |
|
HU00990 |
All |
HIPER
|
A node warmstart on a cluster with Global Mirror secondary volumes can result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.4 or later using Global Mirror |
Trigger |
Node warmstart on the secondary cluster |
Workaround |
Use GMCV |
|
7.7.0.0 |
Global Mirror |
HU01181 |
All |
HIPER
|
Compressed volumes larger than 96 TiB may experience a loss of access. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v6.4 or later that are using compressed volumes |
Trigger |
Compressed volumes larger than 96TiB |
Workaround |
Limit compressed volume capacity to 96TiB |
|
7.7.0.0 |
Compression |
HU00301 |
SVC |
High Importance
|
A 4-node enhanced stretched cluster with non-mirrored volumes may get stuck in stalled_non_redundant during an upgrade
Symptom |
Loss of Redundancy |
Environment |
SVC systems with four nodes in an enhanced stretched cluster topology using non-mirrored volumes |
Trigger |
Upgrade |
Workaround |
Create two copies, one in each site, for all volumes |
|
7.7.0.0 |
System Update |
HU00447 |
All |
High Importance
|
A Link Reset on an 8Gbps Fibre Channel port causes fabric logout/login
Symptom |
Loss of Redundancy |
Environment |
Systems with 8Gbps FC ports |
Trigger |
Fibre Channel Link Reset |
Workaround |
None |
|
7.7.0.0 |
Reliability Availability Serviceability |
HU00719 |
V3700, V3500 |
High Importance
|
After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a Tier 3 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
Storwize V3500 & V3700 systems |
Trigger |
Power outage |
Workaround |
None |
|
7.7.0.0 |
|
HU00897 |
All |
High Importance
|
Spectrum Virtualize iSCSI target ignores maxrecvdatasegmentlength leading to host I/O error
Symptom |
Performance |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Storwize target sends more than 8192 bytes for an iSCSI PDU |
Workaround |
Use "iscsiadm modify target-param -p maxrecvdataseglen=32768" |
|
7.7.0.0 |
iSCSI |
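The workaround command for HU00897 is Solaris initiator syntax; a hedged example, with a placeholder target IQN that is not taken from the source:

```shell
# Raise the initiator's receive data segment length so PDUs larger than
# 8192 bytes from the Storwize target are accepted (Solaris initiator;
# the IQN below is a placeholder for the actual target name)
iscsiadm modify target-param -p maxrecvdataseglen=32768 \
  iqn.1986-03.com.ibm:2145.examplecluster.node1
```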
HU01039 |
All |
High Importance
|
When volumes that are still in a relationship are forcefully removed, a node may experience warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using HyperSwap |
Trigger |
Removing a volume in a HyperSwap topology using -force |
Workaround |
Remove the associated relationship before removing the volume |
|
7.7.0.0 |
|
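As the workaround for HU01039 notes, delete the relationship before deleting the volume rather than using -force. A sketch with hypothetical object names:

```shell
# Remove the Remote Copy relationship first, then the volume,
# instead of forcing removal of a volume still in a relationship;
# rcrel0 and vdisk0 are example names
svctask rmrcrelationship rcrel0
svctask rmvdisk vdisk0
```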
HU01060 |
All |
High Importance
|
Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using FlashCopy |
Trigger |
Prior node warmstarts |
Workaround |
None |
|
7.7.0.0 |
FlashCopy |
HU01069 |
SVC |
High Importance
|
After upgrade from v7.5 or earlier to v7.6.0 or later all nodes may warmstart at the same time resulting in a Tier 2 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
DH8 systems running v7.5 or earlier |
Trigger |
Upgrading from v7.5 or earlier to v7.6 or later |
Workaround |
None |
|
7.7.0.0 |
System Update |
HU01078 |
SVC |
High Importance
|
When the rmnode command is run it removes persistent reservation data to prevent a stuck reservation. Microsoft Windows and Hyper-V clusters constantly monitor the reservation table and take the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v7.3 or later with Microsoft Windows or Hyper-V clustered hosts |
Trigger |
rmnode or chnodehw commands |
Workaround |
None |
|
7.7.0.0 |
|
HU01082 |
V7000, V5000, V3700, V3500 |
High Importance
|
A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline
Symptom |
Loss of Access to Data |
Environment |
Storwize systems |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
Hosts |
HU01165 |
All |
High Importance
|
When an SE volume goes offline, both nodes may experience multiple warmstarts and go to service state
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later |
Trigger |
A space efficient volume going offline due to lack of free capacity |
Workaround |
None |
|
7.7.0.0 |
Thin Provisioning |
HU01180 |
All |
High Importance
|
When creating a snapshot on an ESX host using vVols, a Tier 2 recovery may occur
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later |
Trigger |
Using the snapshot functionality in an ESX host on a vVol volume with insufficient FlashCopy bitmap capacity |
Workaround |
Ensure FlashCopy bitmap space is sufficient (use lsiogrp to determine and chiogrp to change) |
|
7.7.0.0 |
Hosts, vVols |
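The workaround for HU01180 can be checked and applied from the CLI; a sketch (io_grp0 and the 40 MiB figure are illustrative examples, not a sizing recommendation):

```shell
# Inspect current FlashCopy bitmap memory for the I/O group...
svcinfo lsiogrp io_grp0
# ...and grow it if free bitmap space is insufficient for vVol snapshots
svctask chiogrp -feature flash -size 40 io_grp0
```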
HU01186 |
All |
High Importance
|
Volumes going offline briefly may disrupt the operation of Remote Copy, leading to a loss of access by hosts
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.4 or later using Global Mirror |
Trigger |
Volumes going offline |
Workaround |
None |
|
7.7.0.0 |
Global Mirror |
HU01188 |
SVC |
High Importance
|
Quorum lease times are not set correctly impacting system availability
Symptom |
Loss of Access to Data |
Environment |
DH8 systems running v7.5 or later |
Trigger |
None |
Workaround |
Activate each quorum using the chquorum command |
|
7.7.0.0 |
|
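The workaround for HU01188 re-activates each quorum device so its lease time is set correctly. One possible form of the sequence, assuming the default quorum indices 0-2; the exact ordering is an assumption of this sketch:

```shell
# Activate each quorum index in turn with chquorum;
# indices 0, 1 and 2 are the default quorum slots
for q in 0 1 2; do
  svctask chquorum -active $q
done
```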
HU01189 |
All |
High Importance
|
Improvement to DRAID dependency calculation when handling multiple drive failures
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later using DRAID |
Trigger |
Multiple drive failures |
Workaround |
None |
|
7.7.0.0 |
Distributed RAID |
HU01245 |
All |
High Importance
|
Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Global Mirror with Change Volumes |
Trigger |
Add/remove a volume copy from a primary change volume |
Workaround |
Avoid adding or removing primary change volumes while the relationship is in use |
|
7.7.0.0 |
Global Mirror with Change Volumes |
IT12088 |
V7000, V5000, V3700, V3500 |
High Importance
|
If a node encounters a SAS-related warmstart the node can remain in service with a 504/505 error, indicating that it was unable to pick up the necessary VPD to become active again
Symptom |
Loss of Redundancy |
Environment |
Storwize systems running v7.5 or later |
Trigger |
SAS-related node warmstart |
Workaround |
None |
|
7.7.0.0 |
|
HU00521 |
All |
Suggested
|
Remote Copy relationships may be stopped and lose synchronisation when a single node warmstart occurs at the secondary site
Symptom |
None |
Environment |
Systems running v7.2 or later using Global Mirror |
Trigger |
Single node warmstart at secondary site |
Workaround |
None |
|
7.7.0.0 |
Global Mirror |
HU00831 (reverted) |
All |
Suggested
|
This APAR was removed prior to v7.7.0 GA in light of issues with the fix. It will be implemented in a future PTF
|
7.7.0.0 |
Cache |
HU00886 |
All |
Suggested
|
Single node warmstart due to CLI startfcconsistgrp command timeout
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later using GM and FlashCopy |
Trigger |
None |
Workaround |
Where at least one source volume of a FlashCopy consistency group is a Global Mirror primary or secondary volume stop the corresponding relationship before starting the FlashCopy consistency group. Once FlashCopy consistency group is running the relationship can be restarted. |
|
7.7.0.0 |
Global Mirror, FlashCopy |
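The workaround sequence for HU00886 can be expressed as a CLI sketch (the relationship and consistency-group names are hypothetical):

```shell
# Stop the GM relationship whose volume is a FlashCopy source...
svctask stoprcrelationship gm_rel0
# ...start the FlashCopy consistency group...
svctask startfcconsistgrp -prep fc_cg0
# ...then restart the relationship once the group is running
svctask startrcrelationship gm_rel0
```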
HU00910 |
All |
Suggested
|
Handling of I/O to compressed volumes can result in a timeout condition that is resolved by a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems with compressed volumes |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
Compression |
HU00928 |
V7000 |
Suggested
|
For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to become failed
Symptom |
None |
Environment |
Storwize V7000 Gen 1 systems running v7.3 or later with large configurations |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
Drives |
HU01007 |
All |
Suggested
|
When, due to an issue within FlashCopy, a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.5 or later using GMCV |
Trigger |
FlashCopy mapping where source is a primary volume in a GMCV relationship |
Workaround |
Avoid creating FlashCopy mappings where the source is a primary volume in a GMCV relationship |
|
7.7.0.0 |
FlashCopy, Global Mirror with Change Volumes |
HU01074 |
All |
Suggested
|
An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later |
Trigger |
None |
Workaround |
Avoid changes to the email notification feature |
|
7.7.0.0 |
System Monitoring |
HU01075 |
All |
Suggested
|
Multiple node warmstarts can occur due to an unstable Remote Copy domain after an upgrade to v7.6.0
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later using Global Mirror |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
Global Mirror |
HU01089 |
All |
Suggested
|
svcconfig backup fails when an I/O group name contains a hyphen
Symptom |
None |
Environment |
Systems running v7.4 or later |
Trigger |
I/O Group Name contains a hyphen character |
Workaround |
Amend I/O Group Name to a string that does not contain hyphen, dot or white-space characters |
|
7.7.0.0 |
|
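Before renaming per the workaround for HU01089, it may help to screen the proposed name for the characters the workaround warns about. A locally runnable sketch; the name io_grp_prod is hypothetical:

```shell
# Screen a proposed I/O group name for hyphen, dot or space characters,
# which break svcconfig backup. If safe, rename with, e.g.:
#   svctask chiogrp -name io_grp_prod io_grp0
name="io_grp_prod"
case "$name" in
  *[-.\ ]*) echo "unsafe: pick a name without - . or spaces" ;;
  *)        echo "safe" ;;
esac
```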
HU01096 |
SVC, V7000 |
Suggested
|
Batteries may be seen to continuously recondition
Symptom |
None |
Environment |
DH8 & V7000 systems |
Trigger |
None |
Workaround |
Replace battery |
|
7.7.0.0 |
|
HU01104 |
SVC, V7000, V5000, V3700 |
Suggested
|
When using GMCV relationships, if a node in an I/O group loses communication with its partner, it may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Global Mirror with Change Volumes |
Trigger |
Node loses communication with partner |
Workaround |
Create a GM relationship using small volumes in each I/O group |
|
7.7.0.0 |
Global Mirror with Change Volumes |
HU01110 |
All |
Suggested
|
Spectrum Virtualize supports SSH connections using RC4-based ciphers
Symptom |
None |
Environment |
Systems running v7.5 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
|
HU01143 |
All |
Suggested
|
Where nodes are missing config files, some services will be prevented from starting
Symptom |
None |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Warmstart the config node |
|
7.7.0.0 |
|
HU01144 |
V7000 |
Suggested
|
Single node warmstart on the config node due to GUI contention
Symptom |
Single Node Warmstart |
Environment |
V7000 Gen 2 systems running v7.5 or later |
Trigger |
None |
Workaround |
Disable GUI |
|
7.7.0.0 |
Graphical User Interface |
HU01156 |
All |
Suggested
|
Single node warmstart due to an invalid FCoE frame from a HP-UX host
Symptom |
Single Node Warmstart |
Environment |
Systems presenting storage to HP-UX host via FCoE |
Trigger |
Unsupported Rx frame size (960 bytes) sent by initiator |
Workaround |
None |
|
7.7.0.0 |
Hosts |
HU01240 |
All |
Suggested
|
For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time
Symptom |
None |
Environment |
Systems running v7.3 or later |
Trigger |
No write I/O for >120 seconds |
Workaround |
Ensure relevant volume receives at least one write I/O per 120 second interval |
|
7.7.0.0 |
|
HU01274 |
All |
Suggested
|
DRAID lsarraysyncprogress command may appear to show array synchronisation stuck at 99%
Symptom |
None |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
Distributed RAID |
IT15366 |
All |
Suggested
|
CLI command lsportsas may show unexpected port numbering
Symptom |
None |
Environment |
Systems running v7.6.1 or later |
Trigger |
None |
Workaround |
None |
|
7.7.0.0 |
|