HU01918 |
All |
HIPER
|
Where Data Reduction Pools were created on earlier code levels, upgrading the system to an affected release can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3.4, v8.2.0.3 or v8.2.1.x using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.4 |
Data Reduction Pools |
HU01920 |
All |
Critical
|
An issue in the garbage collection process can cause node warmstarts and offline pools
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.4 |
Data Reduction Pools |
HU01906 |
FS9100 |
HIPER
|
Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data
Symptom |
Multiple Node Warmstarts |
Environment |
FlashSystem 9100 family systems |
Trigger |
None |
Workaround |
None |
|
8.2.0.3 |
Reliability Availability Serviceability |
HU01862 |
All |
Critical
|
When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
Remove a Data Reduction Pool with the -force option |
Workaround |
Do not use -force option when removing a Data Reduction Pool |
|
8.2.0.3 |
Data Reduction Pools |
HU01876 |
All |
Critical
|
Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, enabling NPIV can cause node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems, with NPIV enabled, attached to host ports that can act as SCSI initiators and targets |
Trigger |
Zone host initiator and target ports in with the target port WWPN, then enable NPIV |
Workaround |
Unzone host or disable NPIV |
|
8.2.0.3 |
Backend Storage |
HU01885 |
All |
Critical
|
As writes are made to a Data Reduction Pool, new physical capacity must be allocated. Under unusual circumstances, the handling of an expansion request can stall further I/O, leading to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.3 |
Data Reduction Pools |
HU01934 |
FS9100 |
High Importance
|
An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister
Symptom |
Multiple Node Warmstarts |
Environment |
FlashSystem 9100 family systems |
Trigger |
None |
Workaround |
None |
|
8.2.0.3 |
Reliability Availability Serviceability |
HU01821 |
SVC |
Suggested
|
An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
Symptom |
None |
Environment |
Systems configured as a two-node enhanced stretched cluster that are using Data Reduction Pools |
Trigger |
Upgrade |
Workaround |
Revert cluster to standard topology and remove site settings from nodes and controllers for the duration of the upgrade |
|
8.2.0.3 |
Data Reduction Pools, System Update |
HU01849 |
All |
Suggested
|
An excessive number of SSH sessions may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
Initiate a large number of SSH sessions (e.g. one session every 5 seconds) |
Workaround |
Avoid initiating excessive numbers of SSH sessions |
|
8.2.0.3 |
System Monitoring |
IT25457 |
All |
Suggested
|
Attempting to remove a copy of a volume that has at least one image-mode copy and at least one thin/compressed copy in a Data Reduction Pool will always fail with a CMMVC8971E error
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
Try to remove a copy of a volume that has at least one image-mode copy and at least one thin/compressed copy in a Data Reduction Pool |
Workaround |
Use svctask splitvdiskcopy to create a separate volume from the copy that should be deleted. This leaves the original volume with a single copy and creates a new volume from the copy that was split off. Then remove the newly created volume. |
|
8.2.0.3 |
Data Reduction Pools |
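The IT25457 workaround above can be sketched as the following CLI sequence. This is an illustrative fragment only: the volume name, copy ID, and new volume name are placeholders, and exact parameter availability should be confirmed against the CLI reference for the installed code level.

```shell
# List the copies of the volume to identify the copy ID to be removed
# ("myvol" is a placeholder volume name).
svcinfo lsvdiskcopy myvol

# Split the unwanted copy (here copy ID 1) off into a separate volume,
# leaving the original volume with a single copy.
svctask splitvdiskcopy -copy 1 -name myvol_split myvol

# Remove the newly created volume that holds the split-off copy.
svctask rmvdisk myvol_split
```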
IT26049 |
All |
Suggested
|
An issue with CPU scheduling may cause the GUI to respond slowly
Symptom |
None |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.2.0.3 |
Graphical User Interface |
HU01828 |
All |
HIPER
|
Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
Symptom |
Loss of Access to Data |
Environment |
Systems using deduplicated volume copies |
Trigger |
Deleting a deduplication volume copy |
Workaround |
Do not delete deduplicated volume copies |
|
8.2.0.2 |
Deduplication |
HU01847 |
All |
Critical
|
FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8.1 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
FlashCopy |
HU01850 |
All |
Critical
|
When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools with deduplicated volume copies |
Trigger |
Delete last deduplication-enabled volume copy in a Data Reduction Pool |
Workaround |
If a Data Reduction Pool contains volumes with deduplication enabled keep at least one of those volumes in the pool |
|
8.2.0.2 |
Data Reduction Pools, Deduplication |
HU02042 |
All |
Critical
|
An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
T3 recovery |
Workaround |
None |
|
8.2.0.2 |
Data Reduction Pools |
HU01852 |
All |
High Importance
|
The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
Data Reduction Pools |
HU01858 |
All |
High Importance
|
Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
Data Reduction Pools |
HU01881 |
FS9100 |
High Importance
|
An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed, leading to warmstarts
Symptom |
Loss of Redundancy |
Environment |
FS9100 systems |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
Compression |
HU01564 |
All |
Suggested
|
The FlashCopy map cleaning process does not monitor grains correctly, which may prevent FlashCopy maps from stopping
Symptom |
None |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
FlashCopy |
HU01760 |
All |
Suggested
|
FlashCopy map progress appears to be stuck at zero percent
Symptom |
None |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
FlashCopy |
HU01815 |
All |
Suggested
|
In Data Reduction Pools, volume size is limited to 96TB
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.2 |
Data Reduction Pools |
HU01851 |
All |
HIPER
|
When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using Deduplication |
Trigger |
Delete a deduplicated volume |
Workaround |
None |
|
8.2.0.1 |
Data Reduction Pools, Deduplication |
HU01913 |
All |
HIPER
|
A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Distributed RAID |
HU01758 |
All |
Critical
|
After an unexpected power loss, all nodes in a cluster may warmstart repeatedly, necessitating a Tier 3 recovery
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Power outage |
Workaround |
None |
|
8.2.0.0 |
RAID |
HU01848 |
All |
Critical
|
During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems presenting storage to large IBM AIX VIOS configurations |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
System Update |
IT25850 |
All |
Critical
|
I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Distributed RAID |
HU01661 |
All |
High Importance
|
A cache-protection mechanism flag can become stuck, leading to repeated stops of consistency group synchronisation
Symptom |
Loss of Redundancy |
Environment |
Systems running v7.6 or later using remote copy |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
HyperSwap |
HU01733 |
All |
High Importance
|
Canister information for the High Density Expansion Enclosure may be incorrectly reported
Symptom |
Loss of Redundancy |
Environment |
Systems using the High Density Expansion Enclosure (92F) |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |
HU01761 |
All |
High Importance
|
Entering multiple addmdisk commands in rapid succession against more than one storage pool may cause node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.1 or later with two or more storage pools |
Trigger |
Run multiple addmdisk commands to more than one storage pool at the same time |
Workaround |
Pace addmdisk commands, adding MDisks to one storage pool at a time |
|
8.2.0.0 |
Backend Storage |
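The HU01761 workaround can be illustrated as follows; pool and MDisk names are placeholders. The point is sequencing: each addmdisk command targets a single pool and is allowed to complete before a command against a different pool is issued.

```shell
# Add MDisks to Pool0 first, and wait for the command to return.
svctask addmdisk -mdisk mdisk4:mdisk5 Pool0

# Only then add MDisks to the next pool.
svctask addmdisk -mdisk mdisk6:mdisk7 Pool1
```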
HU01797 |
All |
High Importance
|
Hitachi G1500 backend controllers may exhibit higher than expected latency
Symptom |
Performance |
Environment |
Systems with Hitachi G1500 backend controllers |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Backend Storage |
HU00921 |
All |
Suggested
|
A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
|
HU01276 |
All |
Suggested
|
An issue in the handling of debug data from the FC adapter can cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |
HU01523 |
All |
Suggested
|
An issue with FC adapter initialisation can lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |
HU01571 |
All |
Suggested
|
An upgrade can become stalled due to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems undergoing a code upgrade |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
System Update |
HU01657 |
SVC, V5000, V7000 |
Suggested
|
The 16Gb FC HBA firmware may experience an issue with the detection of unresponsive links, leading to a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |
HU01667 |
All |
Suggested
|
A timing-window issue in the remote copy component may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01719 |
All |
Suggested
|
A node warmstart may occur due to a parity error in the HBA driver firmware
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.6 and later using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |
HU01737 |
All |
Suggested
|
On the Update System screen, when using Test Only, selecting a valid code image in the Run Update Test Utility dialog and then clicking the Test button will initiate a system update
Symptom |
None |
Environment |
All systems |
Trigger |
Select a valid code image in the "Run Update Test Utility" dialog and click "Test" button |
Workaround |
Do not select a valid code image in the "Test utility" field of the "Run Update Test Utility" dialog |
|
8.2.0.0 |
System Update |
HU01765 |
All |
Suggested
|
A node warmstart may occur when there is a delay to I/O at the secondary site
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01786 |
All |
Suggested
|
An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
Symptom |
None |
Environment |
Systems running v7.7.1 or later with SSDs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Drives |
HU01791 |
All |
Suggested
|
Using the chhost command to change a host name will remove stored CHAP secrets
Symptom |
Configuration |
Environment |
Systems using iSCSI |
Trigger |
Run the "chhost -gui -name <host name> <host id>" command after configuring CHAP secret |
Workaround |
Set the CHAP secret whenever changing the host name |
|
8.2.0.0 |
iSCSI |
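The HU01791 workaround can be sketched as below. The host ID, host name, and secret are placeholders; the key step is re-applying the CHAP secret immediately after any rename, since the rename clears it.

```shell
# Rename the host (host ID 0 is a placeholder).
svctask chhost -name newhostname 0

# The rename removes the stored CHAP secret, so set it again.
svctask chhost -chapsecret examplesecret 0
```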
HU01807 |
All |
Suggested
|
The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use the local WWPN and reference the node in lsportfc to get the correct information |
|
8.2.0.0 |
Command Line Interface |
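The HU01807 workaround amounts to cross-referencing two views rather than trusting the local node fields in lsfabric. A minimal sketch, assuming the delimited output format is used for easier parsing:

```shell
# Take the local WWPN from the lsfabric output...
svcinfo lsfabric -delim :

# ...and look that WWPN up in lsportfc to identify the correct
# owning node ID and name.
svcinfo lsportfc -delim :
```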
HU01811 |
All |
Suggested
|
DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Distributed RAID |
HU01817 |
All |
Suggested
|
Volumes used for vVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings
Symptom |
Configuration |
Environment |
Systems using vVols or TCT |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
FlashCopy |
HU01856 |
All |
Suggested
|
A garbage collection process can time out waiting for an event on the partner node, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Data Reduction Pools |
HU02028 |
All |
Suggested
|
An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
IT19561 |
All |
Suggested
|
An issue with register clearance in the FC driver code may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using 16Gb HBAs |
Trigger |
None |
Workaround |
None |
|
8.2.0.0 |
Reliability Availability Serviceability |