HU01617 |
All |
HIPER
|
Due to a timing window issue, stopping a FlashCopy mapping with the -autodelete option may result in a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
Avoid stopping FlashCopy mappings with the -autodelete option |
|
8.1.3.6 |
FlashCopy |
HU01865 |
All |
HIPER
|
When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros)
Symptom |
Data Integrity Loss |
Environment |
Systems running v7.5 or later using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
HyperSwap |
HU01913 |
All |
HIPER
|
A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Distributed RAID |
HU01876 |
All |
Critical
|
Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, enabling NPIV can result in node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems, with NPIV enabled, attached to host ports that can act as SCSI initiators and targets |
Trigger |
Zone host initiator and target ports in with the target port WWPN then enable NPIV |
Workaround |
Unzone host or disable NPIV |
|
8.1.3.6 |
Backend Storage |
HU01887 |
All |
Critical
|
In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems using Host Clusters |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Command Line Interface, Host Cluster |
HU01888 & HU01997 |
All |
Critical
|
An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
FlashCopy |
HU01910 |
All |
Critical
|
When FlashCopy mappings are created with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems using FlashCopy mappings with a 64KB grain size |
Trigger |
None |
Workaround |
Select a grain size of 256KB when creating FlashCopy mappings (see the sketch after this entry) |
|
8.1.3.6 |
FlashCopy |
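The workaround for HU01910, as a minimal sketch (the volume names vol_src and vol_tgt are placeholders, not from the original entry): create new FlashCopy mappings with the larger grain size set explicitly.
  # create the mapping with a 256KB grain size rather than 64KB
  svctask mkfcmap -source vol_src -target vol_tgt -grainsize 256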
HU01928 |
All |
Critical
|
When two IOs attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
HU01957 |
All |
Critical
|
Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Data Reduction Pools |
Trigger |
Initiate system upgrade |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools, System Update |
HU02013 |
All |
Critical
|
A race condition between the extent invalidation and destruction in the garbage collection process may cause a node warmstart with the possibility of offline volumes
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
HU02025 |
All |
Critical
|
An issue with metadata handling, where a pool has been taken offline, may lead to an out-of-space condition in that pool, preventing its return to operation
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
IT25850 |
All |
Critical
|
I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Distributed RAID |
IT27460 |
All |
Critical
|
Lease expiry can occur between local nodes when the remote connection is lost, due to the mishandling of messaging credits
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
Use four ports for local node-to-node communications, on at least two separate fibre channel adapters per node, and set the port mask so that all four are usable. Use a different fibre channel adapter from those two adapters for remote port communications (see the sketch after this entry). If there are issues with the FCIP tunnel, temporarily block it until it is fixed. |
|
8.1.3.6 |
Reliability Availability Serviceability |
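A hedged sketch of the port-masking part of the IT27460 workaround (the port numbers are illustrative only; use lsportfc to identify the real port IDs and adapters on each node):
  # list FC ports to see which port IDs sit on which adapter
  lsportfc
  # restrict local node-to-node traffic to ports 1-4 (rightmost bit = port 1)
  chsystem -localfcportmask 1111
  # keep remote (partner) traffic on a different adapter, e.g. ports 5 and 6
  chsystem -partnerfcportmask 110000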
IT29040 |
All |
Critical
|
Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID with drives of 8TB or more |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Distributed RAID, RAID |
HU01507 |
All |
High Importance
|
Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy
Symptom |
Performance |
Environment |
All systems |
Trigger |
Create a volume with two compressed copies or add a space-efficient copy to a volume with an existing compressed copy |
Workaround |
Avoid: creating a new volume with two compressed copies; adding a SE volume copy to a volume that already possesses a compressed copy |
|
8.1.3.6 |
Volume Mirroring |
HU01761 |
All |
High Importance
|
Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.1 or later with two or more storage pools |
Trigger |
Run multiple addmdisk commands to more than one storage pool at the same time |
Workaround |
Pace addmdisk commands, issuing them to one storage pool at a time |
|
8.1.3.6 |
Backend Storage |
HU01886 |
All |
High Importance
|
The Unmap function can leave volume extents that have not been freed, preventing managed disk and pool removal
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
SCSI Unmap |
HU01972 |
All |
High Importance
|
When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended leading to multiple warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
All systems |
Trigger |
Delete an array member using "charraymember -used unused" command |
Workaround |
None |
|
8.1.3.6 |
Distributed RAID, RAID |
HU00744 |
All |
Suggested
|
Single node warmstart due to an accounting issue within the cache component
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.3 or later |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Cache |
HU01485 |
SVC |
Suggested
|
When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: to apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
Symptom |
None |
Environment |
SVC systems using SV1 model nodes |
Trigger |
Power up node with only one PSU powered. Power Fault LED is lit. Power up other PSU. Power Fault LED remains lit. |
Workaround |
Ensure both PSUs are powered before starting node |
|
8.1.3.6 |
System Monitoring |
HU01659 |
SVC |
Suggested
|
The Node Fault LED can be seen to flash in the absence of an error condition. Note: to apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
Symptom |
None |
Environment |
SVC systems using SV1 model nodes |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
System Monitoring |
HU01737 |
All |
Suggested
|
On the Update System screen, when running Test Only, if a valid code image is selected in the Run Update Test Utility dialog then clicking the Test button will initiate a system update
Symptom |
None |
Environment |
All systems |
Trigger |
Select a valid code image in the "Run Update Test Utility" dialog and click "Test" button |
Workaround |
Do not select a valid code image in the "Test utility" field of the "Run Update Test Utility" dialog |
|
8.1.3.6 |
System Update |
HU01857 |
All |
Suggested
|
Improved validation of user input in GUI
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Graphical User Interface |
HU01860 |
All |
Suggested
|
During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
HU01869 |
All |
Suggested
|
Volume copy deletion in a Data Reduction Pool, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar), may become stalled with the copy being left in deleting status
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
HU01915 & IT28654 |
All |
Suggested
|
Systems with encryption enabled that are using key servers to manage encryption keys may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
Symptom |
None |
Environment |
Systems with encryption enabled |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Encryption |
HU01916 |
All |
Suggested
|
The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly
Symptom |
None |
Environment |
Systems running v8.1 or later |
Trigger |
Upgrading from v8.1 or later |
Workaround |
lsmdisk can continue to be used to provide accurate reporting |
|
8.1.3.6 |
Command Line Interface, Graphical User Interface |
IT28433 |
All |
Suggested
|
Timing window issue in the Data Reduction Pool rehoming component can cause a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.6 |
Data Reduction Pools |
HU01918 |
All |
HIPER
|
Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3.4, v8.2.0.3 or v8.2.1.x using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.5 |
Data Reduction Pools |
HU01920 |
All |
Critical
|
An issue in the garbage collection process can cause node warmstarts and offline pools
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.5 |
Data Reduction Pools |
HU01492 & HU02024 |
SVC, V7000, V5000 |
HIPER
|
All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
Symptom |
Loss of Access to Data |
Environment |
Systems using 16Gb HBAs |
Trigger |
All ports used for inter-node communication are on the same FC adapter and a port on that adapter experiences congestion |
Workaround |
Separate inter-node traffic so that multiple adapters are used |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01873 |
All |
HIPER
|
Deleting a volume, in a Data Reduction Pool, while volume protection is enabled and when the volume was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
Delete volume in Data Reduction Pool while volume protection is enabled |
Workaround |
Either: disable volume protection; or remove host mappings before deleting a volume. If using scripts, modify them to unmap volumes before deletion (see the sketch after this entry). |
|
8.1.3.4 |
Data Reduction Pools |
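A minimal sketch of the second HU01873 workaround option (host and volume names are placeholders): unmap the volume before deleting it, so the delete never hits the affected path.
  # remove the host mapping first, then delete the volume
  svctask rmvdiskhostmap -host host0 vdisk0
  svctask rmvdisk vdisk0
Alternatively, the first option (disabling volume protection, e.g. with chsystem -vdiskprotectionenabled no) avoids the trigger at the cost of losing that safety check.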
HU01825 |
All |
Critical
|
Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems using HyperSwap |
Trigger |
A relationship running in one direction is added to a consistency group running in the other direction whilst one of the FlashCopy maps associated with the HyperSwap relationship is still stopping/cleaning |
Workaround |
Do not add a relationship to a consistency group if they are running in opposite directions (i.e. the Primary of the consistency group and the Primary of the relationship are on different sites). Do not add a relationship to a consistency group if the relationship still has one of its FlashCopy maps in the stopping state; the clean progress needs to reach 100 percent before the relationship can be safely added (see the sketch after this entry). |
|
8.1.3.4 |
FlashCopy |
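A hedged illustration of the HU01825 precaution above (the map, relationship and consistency group names are placeholders): check the change-volume FlashCopy map before moving the relationship.
  # confirm that clean_progress reports 100 for the HyperSwap change-volume map
  lsfcmap fcmap0
  # only then add the relationship to the consistency group
  chrcrelationship -consistgrp cg0 rcrel0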
HU01833 |
All |
Critical
|
If both nodes in an I/O group start up together, a timing window issue may occur that prevents them from running garbage collection, leading to the related Data Reduction Pool running out of space
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
Start both nodes in an I/O group at the same time |
Workaround |
Ensure nodes in an I/O group start one at a time |
|
8.1.3.4 |
Data Reduction Pools |
HU01855 |
All |
Critical
|
Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.2 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Data Reduction Pools |
HU01862 |
All |
Critical
|
When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
Remove a Data Reduction Pool with the -force option |
Workaround |
Do not use -force option when removing a Data Reduction Pool |
|
8.1.3.4 |
Data Reduction Pools |
HU01878 |
All |
Critical
|
During an upgrade from v7.8.1 or earlier to v8.1.3 or later, if an MDisk goes offline then at completion all volumes may go offline
Symptom |
Offline Volumes |
Environment |
Systems running v7.8.1 or earlier |
Trigger |
MDisk goes offline during an upgrade to v8.1.3 or later |
Workaround |
None |
|
8.1.3.4 |
System Update |
HU01885 |
All |
Critical
|
As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Data Reduction Pools |
HU02042 |
All |
Critical
|
An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
T3 recovery |
Workaround |
None |
|
8.1.3.4 |
Data Reduction Pools |
IT29853 |
V5000 |
Critical
|
After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Storwize V5000 Gen 2 systems with Gen 1 expansion enclosures |
Trigger |
Upgrade to v8.1.1 or later |
Workaround |
None |
|
8.1.3.4 |
System Update |
HU01661 |
All |
High Importance
|
A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.6 or later using remote copy |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
HyperSwap |
HU01733 |
All |
High Importance
|
Canister information, for the High Density Expansion Enclosure, may be incorrectly reported
Symptom |
Loss of Redundancy |
Environment |
Systems using the High Density Expansion Enclosure (92F) |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01797 |
All |
High Importance
|
Hitachi G1500 backend controllers may exhibit higher than expected latency
Symptom |
Performance |
Environment |
Systems with Hitachi G1500 backend controllers |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Backend Storage |
HU01824 |
All |
High Importance
|
Switching replication direction for HyperSwap relationships can lead to long I/O timeouts
Symptom |
Performance |
Environment |
Systems using HyperSwap |
Trigger |
Switch replication direction of a HyperSwap relationship |
Workaround |
None |
|
8.1.3.4 |
HyperSwap |
HU01839 |
All |
High Importance
|
Where a VMware host is served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, I/O performance for the volumes from the other controller will be adversely affected
Symptom |
Performance |
Environment |
Systems running v7.5 or later presenting volumes to VMware hosts, from more than one back-end controller |
Trigger |
Issue on back-end controller takes volumes offline |
Workaround |
None |
|
8.1.3.4 |
Hosts |
HU01842 |
All |
High Importance
|
Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed
Symptom |
Loss of Redundancy |
Environment |
Systems with Read-Intensive Drives |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Drives |
HU01846 |
SVC |
High Importance
|
A silent battery discharge condition will unexpectedly take an SVC node offline, putting it into a 572 service state
Symptom |
Loss of Redundancy |
Environment |
SVC systems using DH8 & SV1 model nodes |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01902 |
V7000, V5000 |
High Importance
|
During an upgrade, an issue with VPD migration can cause a timeout, leading to a stalled upgrade
Symptom |
Loss of Redundancy |
Environment |
Storwize systems |
Trigger |
Upgrade |
Workaround |
None |
|
8.1.3.4 |
System Update |
HU01907 |
SVC |
High Importance
|
An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error
Symptom |
Loss of Redundancy |
Environment |
SVC systems using SV1 model nodes |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01657 |
SVC, V7000, V5000 |
Suggested
|
The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01719 |
All |
Suggested
|
Node warmstart due to a parity error in the HBA driver firmware
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 and later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01760 |
All |
Suggested
|
FlashCopy map progress appears to be stuck at zero percent
Symptom |
None |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
FlashCopy |
HU01778 |
All |
Suggested
|
An issue in the HBA adapter is exposed where a switch port keeps the link active but does not respond to link resets, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
HU01786 |
All |
Suggested
|
An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
Symptom |
None |
Environment |
Systems running v7.7.1 or later with SSDs |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Drives |
HU01791 |
All |
Suggested
|
Using the chhost command will remove stored CHAP secrets
Symptom |
Configuration |
Environment |
Systems using iSCSI |
Trigger |
Run the "chhost -gui -name <host name> <host id>" command after configuring CHAP secret |
Workaround |
Set the CHAP secret whenever changing the host name (see the sketch after this entry) |
|
8.1.3.4 |
iSCSI |
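A small sketch of the HU01791 workaround (the host ID, new name and secret are placeholders): whenever the host is renamed, re-apply the CHAP secret straight afterwards.
  # rename the host
  svctask chhost -name newhostname 0
  # re-apply the CHAP secret, since the change removes the stored value
  svctask chhost -chapsecret mysecret 0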
HU01821 |
SVC |
Suggested
|
An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
Symptom |
None |
Environment |
Systems configured as a two-node enhanced stretched cluster that are using Data Reduction Pools |
Trigger |
Upgrade |
Workaround |
Revert cluster to standard topology and remove site settings from nodes and controllers for the duration of the upgrade |
|
8.1.3.4 |
Data Reduction Pools, System Update |
HU01849 |
All |
Suggested
|
An excessive number of SSH sessions may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
Initiate a large number of SSH sessions (e.g. one session every 5 seconds) |
Workaround |
Avoid initiating excessive numbers of SSH sessions |
|
8.1.3.4 |
System Monitoring |
HU02028 |
All |
Suggested
|
An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Global Mirror, Global Mirror with Change Volumes, Metro Mirror |
IT22591 |
All |
Suggested
|
An issue in the HBA adapter firmware may result in node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Reliability Availability Serviceability |
IT25457 |
All |
Suggested
|
Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
Try to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool |
Workaround |
Use svctask splitvdiskcopy to create a separate volume from the copy that should be deleted. This leaves the original volume with a single copy and creates a new volume from the copy that was split off. Then remove the newly created volume (see the sketch after this entry). |
|
8.1.3.4 |
Data Reduction Pools |
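The IT25457 workaround, expressed as a hedged CLI sketch (the volume name, copy ID and new name are placeholders; lsvdiskcopy shows which copy to split off):
  # split the unwanted copy into a new standalone volume
  svctask splitvdiskcopy -copy 1 -name vdisk0_split vdisk0
  # then delete the newly created volume
  svctask rmvdisk vdisk0_split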
IT26049 |
All |
Suggested
|
An issue with CPU scheduling may cause the GUI to respond slowly
Symptom |
None |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.1.3.4 |
Graphical User Interface |
HU01828 |
All |
HIPER
|
Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
Symptom |
Loss of Access to Data |
Environment |
Systems using deduplicated volume copies |
Trigger |
Deleting a deduplication volume copy |
Workaround |
Do not delete deduplicated volume copies |
|
8.1.3.3 |
Deduplication |
HU01847 |
All |
Critical
|
FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8.1 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.1.3.3 |
FlashCopy |
HU01850 |
All |
Critical
|
When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools with deduplicated volume copies |
Trigger |
Delete last deduplication-enabled volume copy in a Data Reduction Pool |
Workaround |
If a Data Reduction Pool contains volumes with deduplication enabled keep at least one of those volumes in the pool |
|
8.1.3.3 |
Data Reduction Pools, Deduplication |
HU01852 |
All |
High Importance
|
The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.3 |
Data Reduction Pools |
HU01858 |
All |
High Importance
|
Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.3 |
Data Reduction Pools |
HU01870 |
All |
High Importance
|
LDAP server communication fails with SSL or TLS security configured
Symptom |
Configuration |
Environment |
Systems running v8.1.2 or v8.1.3 |
Trigger |
Configuring SSL or TLS security |
Workaround |
Temporarily set remote authentication security to none |
|
8.1.3.3 |
LDAP |
HU01790 |
All |
Suggested
|
On the Create Volumes page the Accessible I/O Groups selection may not update when the Caching I/O group selection is changed
Symptom |
None |
Environment |
Systems with more than one I/O group |
Trigger |
Change "Caching I/O group" selection on "Create Volumes" |
Workaround |
Leave the "Caching I/O group" and "Accessible I/O Groups" selections as "default". Use the "Modify I/O Group" action (right-click the volume -> "Modify I/O Group") to modify the volume's iogrp. |
|
8.1.3.3 |
Graphical User Interface |
HU01815 |
All |
Suggested
|
In Data Reduction Pools, volume size is limited to 96TB
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.3 |
Data Reduction Pools |
HU01856 |
All |
Suggested
|
A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.3 |
Data Reduction Pools |
HU01851 |
All |
HIPER
|
When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using Deduplication |
Trigger |
Delete a deduplicated volume |
Workaround |
None |
|
8.1.3.2 |
Data Reduction Pools, Deduplication |
HU01837 |
All |
High Importance
|
In systems where a vVols metadata volume has been created an upgrade to v8.1.3 or later will cause a node warmstart stalling the upgrade
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.1.0, v8.1.1 or v8.1.2 that are providing vVols |
Trigger |
Upgrading to v8.1.3 or later |
Workaround |
Contact support if the system is running v8.1.2. Otherwise this workaround can be used (see the sketch after this entry): use svcinfo lsmetadatavdisk to find the volume id; create a new volume copy in the same MDisk group - svctask addvdiskcopy -mdiskgrp X -autodelete <vdisk_id>; wait until lsvdisksyncprogress no longer shows a mirror in progress; then upgrade |
|
8.1.3.2 |
System Update, vVols |
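The non-v8.1.2 workaround for HU01837, laid out as a sketch (the MDisk group X and the volume id come from the lsmetadatavdisk output; nothing here goes beyond the commands already listed above):
  # find the vVols metadata volume
  svcinfo lsmetadatavdisk
  # mirror it within the same MDisk group, auto-deleting the original copy
  svctask addvdiskcopy -mdiskgrp X -autodelete <vdisk_id>
  # wait until no mirror synchronisation is reported, then start the upgrade
  lsvdisksyncprogress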
HU01835 |
All |
HIPER
|
Multiple warmstarts may be experienced due to an issue with Data Reduction Pool garbage collection where data for a volume is detected after the volume itself has been removed
Symptom |
Offline Volumes |
Environment |
Systems running v8.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.1 |
Data Reduction Pools |
HU01840 |
All |
HIPER
|
When removing large numbers of volumes, each with multiple copies, it is possible to hit a timeout condition leading to warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later |
Trigger |
Removing large numbers of volumes with multiple copies |
Workaround |
None |
|
8.1.3.1 |
SCSI Unmap |
HU01829 |
All |
High Importance
|
An issue in statistical data collection can prevent EasyTier from working with Data Reduction Pools
Symptom |
Performance |
Environment |
Systems running v8.1 or later using EasyTier and Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.1 |
EasyTier, Data Reduction Pools |
HU01708 |
All |
HIPER
|
A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks
Symptom |
Data Integrity Loss |
Environment |
All systems |
Trigger |
Removing a node during an array rebuild |
Workaround |
Do not remove nodes during an array rebuild |
|
8.1.3.0 |
RAID |
HU01867 |
All |
HIPER
|
Expansion of a volume may fail due to an issue with accounting of physical capacity. All nodes will warmstart in order to clear the problem. The expansion may be triggered by writing data to a thin-provisioned or compressed volume.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Thin Provisioning or Compression |
Trigger |
Expansion of volume |
Workaround |
None |
|
8.1.3.0 |
Thin Provisioning, Compression |
HU01877 |
All |
HIPER
|
Where a volume is being expanded, and the additional capacity is to be formatted, the creation of a related volume copy may result in multiple warmstarts and a potential loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.2 or later where a volume is being expanded with the format option |
Trigger |
Add a volume copy to the expanding volume |
Workaround |
None |
|
8.1.3.0 |
Volume Mirroring, Cache |
HU01774 |
All |
Critical
|
After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
mkhost command fails for an iSCSI host |
Workaround |
For iSCSI hosts prevent I/O from the host if the mkhost operation fails |
|
8.1.3.0 |
iSCSI |
HU01780 |
All |
Critical
|
Migrating a volume to an image-mode volume on controllers that support SCSI unmap will trigger repeated cluster recoveries
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.1 or later with backend controllers that support SCSI unmap |
Trigger |
Migrating a volume to an image-mode volume |
Workaround |
Avoid migrating volumes to image mode on controllers that support SCSI Unmap |
|
8.1.3.0 |
SCSI Unmap |
HU01781 |
All |
Critical
|
An issue with workload balancing in the kernel scheduler can deprive some processes of the resources necessary to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
|
HU01798 |
All |
Critical
|
A manual (user-paced) upgrade to v8.1.2 may invalidate hardened data, putting all nodes in a service state if they are shut down and then restarted. Automatic upgrade is not affected by this issue. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.1 or earlier |
Trigger |
Manual upgrade to v8.1.2 or later |
Workaround |
Only use automatic upgrade procedure |
|
8.1.3.0 |
System Update |
HU01802 |
All |
Critical
|
USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable
Symptom |
Loss of Access to Data |
Environment |
Systems using encryption |
Trigger |
System Upgrade |
Workaround |
None |
|
8.1.3.0 |
Encryption |
HU01804 |
All |
Critical
|
During a system upgrade the processing required to upgrade the internal mapping between volumes and volume copies can lead to high latency impacting host I/O
Symptom |
Performance |
Environment |
Systems running v8.1.0 or earlier with large configurations |
Trigger |
Upgrading from v8.1.0 or earlier to v8.1.1 or later |
Workaround |
None |
|
8.1.3.0 |
System Update, Hosts |
HU01809 |
SVC, V7000 |
Critical
|
An issue in the handling of extent allocation in Data Reduction Pools can result in volumes being taken offline
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01853 |
All |
Critical
|
In a Data Reduction Pool, it is possible for metadata to be assigned incorrect values leading to offline managed disk groups
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01752 |
SVC, V7000 |
High Importance
|
A problem with the way IBM FlashSystem FS900 handles SCSI WRITE SAME commands (without the Unmap bit set) can lead to port exclusions
Symptom |
Performance |
Environment |
Systems running v8.1 or later in front of IBM FS900 controllers |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Backend Storage |
HU01803 |
All |
High Importance
|
The garbage collection process in a Data Reduction Pool may become stalled, resulting in no reclamation of free space from removed volumes
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01818 |
All |
High Importance
|
Excessive debug logging in the Data Reduction Pools component can adversely impact system performance
Symptom |
Performance |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01460 |
All |
Suggested
|
If, during an array rebuild, another drive fails, the high processing demand in RAID for handling many medium errors during the rebuild can lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
Member drive failure during array rebuild |
Workaround |
None |
|
8.1.3.0 |
RAID |
HU01724 |
All |
Suggested
|
An I/O lock handling issue between nodes can lead to a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
RAID |
HU01751 |
All |
Suggested
|
When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
RAID |
HU01795 |
All |
Suggested
|
A thread locking issue in the Remote Copy component may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
|
HU01800 |
All |
Suggested
|
Under some rare circumstances a node warmstart may occur whilst creating volumes in a Data Reduction Pool
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01801 |
All |
Suggested
|
An issue in the handling of unmaps for MDisks can lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1.0 or later |
Trigger |
None |
Workaround |
Disable unmap |
|
8.1.3.0 |
SCSI Unmap |
HU01820 |
All |
Suggested
|
When an unusual I/O request pattern is received it is possible for the handling of Data Reduction Pool metadata to become stuck, leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Data Reduction Pools |
HU01830 |
All |
Suggested
|
Missing security-enhancing HTTP response headers
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Security |
IT24900 |
V7000, V5000 |
Suggested
|
Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD being assigned, delaying a return to service
Symptom |
None |
Environment |
All Storwize systems |
Trigger |
None |
Workaround |
None |
|
8.1.3.0 |
Reliability Availability Serviceability |