HU01706 |
All |
HIPER
|
Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash
Symptom |
Incorrect data read from volume |
Environment |
Systems running v7.7.1.7 or v7.8.1.3 |
Trigger |
See Flash |
Workaround |
None |
|
8.1.0.2 |
|
HU01665 |
All |
HIPER
|
In environments where backend controllers are busy, the creation of a new filesystem with default settings on a Linux host, under conditions of parallel workloads, can overwhelm the capabilities of the backend storage MDisk group and lead to warmstarts, due to hung I/O, on multiple nodes
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.1 or later |
Trigger |
mkfs with default settings |
Workaround |
Throttle unmap commands, using the offloaded I/O throttle, to prevent the system being overloaded prior to issuing mkfs with discard: mkthrottle -type offload -bandwidth xxxx (see https://www.ibm.com/support/knowledgecenter/en/STPVGU_7.8.1/com.ibm.storage.svc.console.781.doc/svc_mkthrottle.html). Alternatively, create the filesystem without the capability to discard (unmap): mkfs with the -E nodiscard flag (see the illustrative commands after this entry) |
|
8.1.0.1 |
Hosts |
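For reference, the two workaround options above might look like the following on a typical system; the bandwidth value, device path and choice of an ext4 filesystem are illustrative assumptions only:

  # Option 1: throttle offloaded I/O (unmap) before running mkfs with discard enabled
  svctask mkthrottle -type offload -bandwidth 200

  # Option 2: create the filesystem without issuing discards (unmap) at all
  mkfs.ext4 -E nodiscard /dev/sdb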
HU01670 |
All |
HIPER
|
Enabling RSA without a valid service IP address may cause multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using RSA |
Trigger |
Enabling RSA without a valid service IP address |
Workaround |
Configure an IPv4 service IP address for each node |
|
8.1.0.1 |
RSA |
IT22802 |
V7000, V5000 |
HIPER
|
A memory management issue in cache may cause multiple node warmstarts possibly leading to a loss of access and necessitating a Tier 3 recovery
Symptom |
Loss of Access to Data |
Environment |
Storwize V7000 Gen2 and V5030 systems with 32 GB of RAM |
Trigger |
None |
Workaround |
After upgrade to v8.1.0.0 perform node remove/add on each node in turn to prevent the problem occurring |
|
8.1.0.1 |
Cache |
IT23034 |
All |
Critical
|
With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart, necessitating a Tier 2 recovery
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with HyperSwap volumes and mirrored copies at a single site |
Trigger |
Using rmvolumecopy to remove a copy from a HyperSwap auxiliary volume |
Workaround |
None |
|
8.1.0.1 |
HyperSwap |
HU01700 |
All |
High Importance
|
If a thin-provisioned or compressed volume is deleted, and another volume is immediately created with the same real capacity, warmstarts may occur
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using space-efficient or compressed volumes |
Trigger |
Remove an SE or compressed volume then immediately create a volume of the same real capacity |
Workaround |
None |
|
8.1.0.1 |
Compression, Thin Provisioning |
HU01673 |
All |
Suggested
|
GUI rejects passwords that include special characters
Symptom |
None |
Environment |
Systems running v8.1.0.0 |
Trigger |
Enter password with special characters in the GUI |
Workaround |
Use passwords without special characters |
|
8.1.0.1 |
Graphical User Interface |
HU01239 & HU01255 & HU01586 |
All |
HIPER
|
The presence of a faulty SAN component can delay lease messages between nodes, leading to a cluster-wide lease expiry and a consequent loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems with 16Gb HBAs |
Trigger |
Faulty SAN hardware (adapter, SFP, switch) |
Workaround |
None |
|
8.1.0.0 |
Reliability Availability Serviceability |
HU01646 |
All |
HIPER
|
A new failure mechanism in the 16Gb HBA driver can, under certain circumstances, lead to a lease expiry of the entire cluster
Symptom |
Loss of Access to Data |
Environment |
Systems with 16Gb HBAs |
Trigger |
Faulty SAN hardware (adapter, SFP, switch) |
Workaround |
None |
|
8.1.0.0 |
Reliability Availability Serviceability |
HU01940 |
All |
HIPER
|
Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This only happens if the change falls within a small timing window, so the probability of the issue occurring is low
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Change of drive use |
Workaround |
None |
|
8.1.0.0 |
Drives |
HU01321 |
All |
Critical
|
Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Remote Copy |
Trigger |
Change the direction of a remote copy relationship |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01490 |
All |
Critical
|
When attempting to add/remove multiple IQNs to/from a host, the tables that record host-WWPN mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
- An addhostport command with iqn2 and iqn1 (where iqn1 is already recorded) is entered;
- This command attempts to add iqn2 but determines that iqn1 is a duplicate, and the CLI command fails;
- Later, whenever a login request from iqn2 is received, internal checking detects an inconsistency and warmstarts the node
|
Workaround |
Do not use multiple IQNs in iSCSI add/remove commands (see the illustrative commands after this entry) |
|
8.1.0.0 |
iSCSI |
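As a sketch of the workaround above, with example host and initiator names, each IQN is added in its own command rather than listing several IQNs at once:

  # add each iSCSI initiator to the host separately
  svctask addhostport -iscsiname iqn.1994-05.com.redhat:hosta host0
  svctask addhostport -iscsiname iqn.1994-05.com.redhat:hostb host0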
HU01509 |
All |
Critical
|
Where a drive is generating medium errors, an issue in the handling of array rebuilds can result in an MDisk group being repeatedly taken offline
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.6 or later |
Trigger |
Drive with medium errors |
Workaround |
None |
|
8.1.0.0 |
RAID |
HU01524 |
All |
Critical
|
When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk at the moment it shut down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
RAID |
HU01549 |
All |
Critical
|
During a system upgrade, Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.5 or earlier with iSCSI connected Hyper-V clustered hosts |
Trigger |
Upgrade to v7.6 or later |
Workaround |
None |
|
8.1.0.0 |
System Update, iSCSI |
HU01572 |
All |
Critical
|
SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
iSCSI |
HU01583 |
All |
Critical
|
Running mkhostcluster with duplicate host names or IDs in the seedfromhost argument will cause a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.7.1 or later |
Trigger |
Run mkhostcluster with duplicate host names or IDs in the seedfromhost argument |
Workaround |
Do not specify duplicate host names or IDs in the seedfromhost argument |
|
8.1.0.0 |
Host Cluster |
IC57642 |
All |
Critical
|
A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide
Symptom |
Loss of Access to Data |
Environment |
Systems with more than 2 nodes |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Reliability Availability Serviceability |
IT17919 |
All |
Critical
|
A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.2 or later using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01476 |
All |
High Importance
|
A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed
Symptom |
Loss of Redundancy |
Environment |
Systems using Remote Copy |
Trigger |
Rename remote copy relationship using GUI or CLI |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01481 |
All |
High Importance
|
A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or later using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
HyperSwap |
HU01506 |
All |
High Importance
|
Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
All systems |
Trigger |
None |
Workaround |
Do not use the -autodelete option when creating a volume copy |
|
8.1.0.0 |
Volume Mirroring |
HU01569 |
SVC |
High Importance
|
When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes
Symptom |
Performance |
Environment |
SVC systems using compression |
Trigger |
High compression workloads |
Workaround |
None |
|
8.1.0.0 |
Compression |
HU01579 |
All |
High Importance
|
In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive
Symptom |
Loss of Redundancy |
Environment |
Systems with drives of type HUSMM80xx0ASS20 |
Trigger |
Attempt to assign drive type as quorum |
Workaround |
Manually assign a different drive type as quorum |
|
8.1.0.0 |
Drives, Quorum |
HU01584 |
All |
High Importance
|
An issue in array indexing can cause a RAID array to go offline repeatedly
Symptom |
Offline Volumes |
Environment |
Systems running v7.6 or later |
Trigger |
None |
Workaround |
Avoid doing member exchanges |
|
8.1.0.0 |
RAID |
HU01610 |
All |
High Importance
|
The handling of the background copy backlog by FlashCopy can cause latency for other unrelated FlashCopy maps
Symptom |
Performance |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
Minimise the use of inter I/O group FlashCopy source and target volumes |
|
8.1.0.0 |
FlashCopy |
HU01614 |
All |
High Importance
|
After a node is upgraded, hosts defined as TPGS may have paths set to inactive
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.6 or earlier with host type TPGS |
Trigger |
Upgrade to v7.7 or later |
Workaround |
None |
|
8.1.0.0 |
Hosts |
HU01623 |
All |
High Importance
|
An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships
Symptom |
Performance |
Environment |
Systems running v7.1 or later using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01626 |
All |
High Importance
|
Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.8 or later |
Trigger |
Downgrade to v7.7.1 or earlier |
Workaround |
None |
|
8.1.0.0 |
System Update |
HU01630 |
All |
High Importance
|
When a system with FlashCopy mappings is upgraded, there may be multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or earlier using FlashCopy |
Trigger |
Upgrade to v7.6 or later |
Workaround |
None |
|
8.1.0.0 |
FlashCopy |
HU01636 |
V5000 |
High Importance
|
A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller
Symptom |
Performance |
Environment |
Systems presenting storage to hosts with N2225 adapters |
Trigger |
Host with N2225 adapters running Windows Server 2012R2 |
Workaround |
None |
|
8.1.0.0 |
Hosts |
HU01638 |
All |
High Importance
|
When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier, then nodes will warmstart and the upgrade will fail
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v7.5 or earlier |
Trigger |
Upgrade to v7.6 or later with a cluster in the same zone running v6.1 or earlier |
Workaround |
Unzone any cluster running v5.1 or earlier from the cluster being upgraded |
|
8.1.0.0 |
System Update |
HU01697 |
All |
High Importance
|
A timeout issue in RAID member management can lead to multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
RAID |
HU01346 |
All |
Suggested
|
An unexpected error 1036 may appear in the event log even though a canister was never physically removed
Symptom |
None |
Environment |
All Storwize Gen 2 or later systems |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
System Monitoring |
HU01385 |
All |
Suggested
|
A warmstart may occur if an rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or v7.7 that are using HyperSwap |
Trigger |
Issue an rmvolumecopy or rmrcrelationship command whilst hosts are still actively using the HyperSwap volume |
Workaround |
Do not remove a HyperSwap volume or relationship whilst hosts are still mapped to it (see the example commands after this entry) |
|
8.1.0.0 |
HyperSwap |
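A minimal sketch of the workaround above, assuming example volume and host names: confirm that no hosts are still mapped to the HyperSwap volume before removing a copy or relationship:

  # list any remaining host mappings for the volume
  svcinfo lsvdiskhostmap hs_vol0
  # if a mapping is still present, unmap the host first
  svctask rmvdiskhostmap -host host0 hs_vol0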
HU01396 |
All |
Suggested
|
HBA firmware resources can become exhausted resulting in node warmstarts
Symptom |
Single Node Warmstart |
Environment |
All systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Hosts |
HU01446 |
All |
Suggested
|
Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered, leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6.1 or later with VMware hosts using VAAI CAW feature |
Trigger |
None |
Workaround |
Avoid overloading the back-end |
|
8.1.0.0 |
Hosts |
HU01454 |
All |
Suggested
|
During an array rebuild, a quiesce operation can become stalled, leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7 or later |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
RAID, Distributed RAID |
HU01457 |
V7000 |
Suggested
|
In a hybrid V7000 cluster, where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI
Symptom |
None |
Environment |
Systems running v7.7.1 or later |
Trigger |
None |
Workaround |
Perform the required actions in the CLI |
|
8.1.0.0 |
Graphical User Interface |
HU01458 |
All |
Suggested
|
A node warmstart may occur when hosts submit writes to Remote Copy secondary volumes (which are in a read-only mode)
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
Host write I/O to a read-only secondary volume |
Workaround |
Stop hosts sending write I/O to Remote Copy secondary volumes |
|
8.1.0.0 |
Metro Mirror, Global Mirror, Global Mirror with Change Volumes |
HU01467 |
All |
Suggested
|
Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Increase the sampling interval |
|
8.1.0.0 |
System Monitoring |
HU01472 |
All |
Suggested
|
A locking issue in Global Mirror can cause a warmstart on the secondary cluster
Symptom |
Single Node Warmstart |
Environment |
Systems using Global Mirror |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Global Mirror |
HU01521 |
All |
Suggested
|
Remote Copy does not correctly handle STOP commands for relationships, which may lead to node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Metro Mirror, Global Mirror, Global Mirror with Change Volumes |
HU01522 |
All |
Suggested
|
A node warmstart may occur when a Fibre Channel frame is received with an unexpected value for host login type
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.4 or later |
Trigger |
Unexpected value for host login type in a Fibre Channel frame |
Workaround |
Ensure that hosts are fully supported from an interop perspective and configured correctly so that they do not send an unexpected "host login type" |
|
8.1.0.0 |
Hosts |
HU01545 |
All |
Suggested
|
A locking issue in the stats collection process may result in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
System Monitoring |
HU01550 |
All |
Suggested
|
Removing a volume with -force while it is still receiving I/O from a host may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 or later |
Trigger |
Using rmvdisk with -force |
Workaround |
Enable volume protection (see the example command after this entry) |
|
8.1.0.0 |
|
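As an illustration of the workaround above, volume protection is enabled system-wide; the 15-minute protection window below is an example value only. With this set, a volume that has received I/O within the window cannot be removed, even with -force:

  svctask chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15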
HU01554 |
All |
Suggested
|
A node warmstart may occur during a livedump collection
Symptom |
Single Node Warmstart |
Environment |
Systems with 8 GB of RAM |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Support Data Collection |
HU01556 |
All |
Suggested
|
The handling of memory pool usage by Remote Copy may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01563 |
V7000 |
Suggested
|
Where an IBM SONAS host ID is used it can, under rare circumstances, cause a warmstart
Symptom |
Single Node Warmstart |
Environment |
Unified configurations |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
|
HU01573 |
All |
Suggested
|
A node warmstart may occur due to a stats collection scheduling issue
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
System Monitoring |
HU01582 |
All |
Suggested
|
A compression issue in IP replication can result in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7 or later using IP Replication |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
IP Replication |
HU01615 |
All |
Suggested
|
A timing issue relating to process communication can result in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
|
HU01622 |
All |
Suggested
|
If a Dense Drawer enclosure is put into maintenance mode during an upgrade of the enclosure management firmware, then further upgrades to adjacent enclosures will be prevented
Symptom |
Configuration |
Environment |
Systems running v7.8 or later with Dense Drawer enclosures |
Trigger |
Putting a Dense Drawer enclosure in maintenance mode during an upgrade |
Workaround |
None |
|
8.1.0.0 |
System Update |
HU01631 |
All |
Suggested
|
A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later using EasyTier |
Trigger |
Pools in Balanced mode |
Workaround |
Create hybrid pools with multiple tiers of storage and set Easy Tier mode to Active (see the example command after this entry) |
|
8.1.0.0 |
EasyTier |
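For illustration, assuming a pool named pool0 that already contains more than one tier of storage, the workaround above could be applied as follows:

  # force Easy Tier into active mode for the pool rather than leaving it balanced
  svctask chmdiskgrp -easytier on pool0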
HU01653 |
All |
Suggested
|
An automatic Tier 3 recovery process may fail due to a RAID indexing issue
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Reliability Availability Serviceability, RAID |
HU01679 |
All |
Suggested
|
An issue in the RAID component can very occasionally cause a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7 or later |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
RAID |
HU01704 |
All |
Suggested
|
In systems using HyperSwap a rare timing window issue can result in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
HyperSwap |
HU01729 |
All |
Suggested
|
Remote Copy uses multiple streams to send data between clusters. During a stream disconnect, a node that is unable to progress may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.1.0.0 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
HU01736 |
SVC |
Suggested
|
A single node warmstart may occur when the topology setting of the cluster is changed
Symptom |
Single Node Warmstart |
Environment |
SVC systems using enhanced stretched cluster or HyperSwap |
Trigger |
Timing window between I/O requests and CLI command processing |
Workaround |
None |
|
8.1.0.0 |
HyperSwap |
IT19387 |
V7000, V5000 |
Suggested
|
When two Storwize I/O groups are connected to each other (via direct connect), 1550 errors will be logged and will reappear when marked as fixed
Symptom |
None |
Environment |
Storwize systems with direct connected I/O groups |
Trigger |
Direct connected I/O groups |
Workaround |
None |
|
8.1.0.0 |
System Monitoring |