Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.1 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.1.0.0 and 8.1.0.2. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 8.1 release:

The following new feature has been introduced in the 8.1.0.2 release:

2. Known Issues and Restrictions

Note: For clarity, the term "node" will be used to refer to a SAN Volume Controller node or Storwize system node canister.
Details Introduced
Customers using AE1 and AE2 enclosures, with FlashSystem code v1.5.x, behind SVC or V840 systems should not upgrade to v8.1.0.2.

8.1.0.2

FlashSystem 840 systems running with an array created on firmware prior to v1.2.x.x do not support SCSI UNMAP or WRITE SAME with Unmap commands. Support for these commands was recently added in v8.1.0.2; however, this PTF does not correctly identify 840 arrays created on these earlier firmware versions. Customers with FlashSystem 840 backends should not upgrade their SVC systems to v8.1.0.2 until the proper checks are complete.

The issue can be avoided by disabling unmap and asking IBM Remote Technical Support for an action plan to create new arrays that support unmap.
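
As a sketch only, and assuming the -unmap parameter of chsystem is available at this code level (confirm against the CLI reference for your release), unmap support could be disabled at the system level with:

  chsystem -unmap off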

8.1.0.2

When configuring Remote Support Assistance, the connection test will report a fault, and opening a connection will report "Connected", followed shortly by "Connection failed".

Even though it states "Connection failed", a connection may still be successfully opened.

This issue will be resolved in a future release.

8.1.0.1

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB.

Under some circumstances it may also be necessary to remove and re-add each node in turn.
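
For example (a minimal sketch; the node ID is a placeholder and the exact chnodehw syntax should be confirmed in the CLI reference for your release):

  chnodehw <node_id_or_name>

Typically this would be run against each node in turn, allowing the node to come fully online before moving to the next.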

8.1.0.0

RSA is not supported with IPv6 service IP addresses.

This is a temporary restriction that will be lifted in a future PTF.

8.1.0.0

Customers with attached hosts running zLinux should not upgrade to v8.1.

This is a temporary restriction that will be lifted in a future PTF.

8.1.0.0

AIX operating systems will not be able to get full benefit from the hot spare node feature unless they have the dynamic tracking feature enabled (dyntrk).
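
For example, on AIX, dynamic tracking can be enabled on each Fibre Channel protocol device with chdev (a sketch; fscsi0 is illustrative and each fscsiX device should be updated):

  chdev -l fscsi0 -a dyntrk=yes -P

The -P flag defers the change until the device is next reconfigured; lsattr -El fscsi0 shows the current setting.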

8.1.0.0

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server end points, on the system, to occasionally report as degraded or offline. The issue intermittently occurs when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

7.8.0.0

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

7.8.0.0

Spectrum Virtualize as Software customers should not enable the Transparent Cloud Tiering function.

This restriction will be removed under APAR HU01495.

7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

7.8.0.0

Priority Flow Control for iSCSI is only supported on Brocade VDX 10GbE switches.

7.7.0.0

It is not possible to replace the mid-plane in an SVC 12F SAS expansion enclosure.

If an SVC 12F mid-plane must be replaced, a new enclosure will be provided.

7.7.0.0

Systems, with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

n/a
Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information.

n/a
If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid but have now been lifted:

V5010 & V5020 systems with 8GB RAM will not be able to upgrade to v8.1.

This issue has been resolved in PTF v8.1.0.1.

8.1.0.0

Systems using HyperSwap should not be upgraded to v8.1.

This was a temporary restriction that has been lifted.

8.1.0.0

Systems with VMware ESXi (all versions) hosts attached using FCoE should not be upgraded to v8.1.

This was a temporary restriction that has been lifted.

8.1.0.0

Customers with attached hosts running AIX 6.x or Solaris (with DMP) should not upgrade to v8.1.

This was a temporary restriction that has been lifted.

8.1.0.0

Systems using Keyserver Encryption with IPv6 should not be upgraded to v8.1.

This was a temporary restriction that has been lifted.

8.1.0.0

3. Issues Resolved

This release contains all of the fixes included in the 7.8.1.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2017-1710 ssg1S1010788 8.1.0.1
CVE-2016-0634 ssg1S1012278 8.1.0.0

3.2 APARs Resolved

APAR Affected Products Severity Description Resolved in Feature Tags
HU01706 All HIPER Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash (show details)
Symptom Incorrect data read from volume
Environment Systems running 7.7.1.7 or 7.8.1.3
Trigger See Flash
Workaround None
8.1.0.2
HU01665 All HIPER In environments where backend controllers are busy, the creation of a new filesystem with default settings on a Linux host, under conditions of parallel workloads, can overwhelm the capabilities of the backend storage MDisk group and lead to warmstarts due to hung I/O on multiple nodes (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v8.1 or later
Trigger mkfs with default settings
Workaround Throttle unmap commands using the offloaded I/O throttle to prevent the system being overloaded before issuing mkfs with discard: mkthrottle -type offload -bandwidth xxxx (see https://www.ibm.com/support/knowledgecenter/en/STPVGU_7.8.1/com.ibm.storage.svc.console.781.doc/svc_mkthrottle.html). Alternatively, create the file system without the capability to discard (unmap) by running mkfs with the -E nodiscard flag
8.1.0.1 Hosts
HU01670 All HIPER Enabling RSA without a valid service IP address may cause multiple node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems using SRA
Trigger Enabling RSA without a valid service IP address
Workaround Configure an IPv4 service IP address for each node
8.1.0.1 RSA
IT22802 V7000, V5000 HIPER A memory management issue in cache may cause multiple node warmstarts possibly leading to a loss of access and necessitating a Tier 3 recovery (show details)
Symptom Loss of Access to Data
Environment Storwize V7000 Gen2 and V5030 systems with 32 GB of RAM
Trigger None
Workaround After upgrade to v8.1.0.0 perform node remove/add on each node in turn to prevent the problem occurring
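A minimal sketch of that remove/add sequence, assuming the Storwize rmnodecanister and addnodecanister commands (IDs and parameters are illustrative; confirm the syntax for your platform and wait for full redundancy before repeating on the next node):
  rmnodecanister <node_id>
  addnodecanister -panelname <panel_name> -iogrp <io_group>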
8.1.0.1 Cache
IT23034 All Critical With HyperSwap volumes and mirrored copies, at a single site, using rmvolumecopy to remove a copy, from an auxiliary volume, may result in a cluster-wide warmstart necessitating a Tier 2 recovery (show details)
Symptom Multiple Node Warmstarts
Environment Systems with HyperSwap volumes and mirrored copies at a single site
Trigger Using rmvolumecopy to remove a copy from a HyperSwap auxiliary volume
Workaround None
8.1.0.1 HyperSwap
HU01700 All High Importance If a thin-provisioned or compressed volume is deleted, and another volume is immediately created with the same real capacity, warmstarts may occur (show details)
Symptom Multiple Node Warmstarts
Environment Systems using space-efficient or compressed volumes
Trigger Remove a SE or compressed volume then, immediately, create a volume of the same real capacity
Workaround None
8.1.0.1 Compression, Thin Provisioning
HU01673 All Suggested GUI rejects passwords that include special characters (show details)
Symptom None
Environment Systems running v8.1.0.0
Trigger Enter password with special characters in the GUI
Workaround Use passwords without special characters
8.1.0.1 Graphical User Interface
HU01239 & HU01255 & HU01586 All HIPER The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access (show details)
Symptom Loss of Access to Data
Environment Systems with 16Gb HBAs
Trigger Faulty SAN hardware (adapter, SFP, switch)
Workaround None
8.1.0.0 Reliability Availability Serviceability
HU01646 All HIPER A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster (show details)
Symptom Loss of Access to Data
Environment Systems with 16Gb HBAs
Trigger Faulty SAN hardware (adapter; SFP; switch)
Workaround None
8.1.0.0 Reliability Availability Serviceability
HU01940 All HIPER Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This happens only if the drive change falls within a small timing window, so the probability of encountering the issue is low (show details)
Symptom Loss of Access to Data
Environment All systems
Trigger Change of drive use
Workaround None
8.1.0.0 Drives
HU01321 All Critical Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring (show details)
Symptom Multiple Node Warmstarts
Environment Systems using Remote Copy
Trigger Change the direction of a remote copy relationship
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01490 All Critical When attempting to add/remove multiple IQNs to/from a host, the tables that record host-WWPN mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups (show details)
Symptom Loss of Access to Data
Environment Systems with iSCSI connected hosts
Trigger
  • An addhostport command with iqn2 and iqn1 (where iqn1 is already recorded) is entered;
  • This command attempts to add iqn2 but determines that iqn1 is a duplicate and the CLI command fails;
  • Later whenever a login request from iqn2 is received internal checking detects an inconsistency and warmstarts the node
Workaround Do not use multiple IQNs in iSCSI add/remove commands
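A minimal sketch of the safer pattern, adding one IQN per invocation (the host name and IQN shown are illustrative):
  addhostport -iscsiname iqn.1994-05.com.example:hosta host0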
8.1.0.0 iSCSI
HU01509 All Critical Where a drive is generating medium errors, an issue in the handling of array rebuilds can result in an MDisk group being repeatedly taken offline (show details)
Symptom Loss of Access to Data
Environment Systems running v7.6 or later
Trigger Drive with medium errors
Workaround None
8.1.0.0 RAID
HU01524 All Critical When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk at the moment it shut down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up (show details)
Symptom Loss of Access to Data
Environment Systems running v7.8.1 or later
Trigger None
Workaround None
8.1.0.0 RAID
HU01549 All Critical During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes (show details)
Symptom Loss of Access to Data
Environment Systems running v7.5 or earlier with iSCSI connected HyperV clustered hosts
Trigger Upgrade to v7.6 or later
Workaround None
8.1.0.0 System Update, iSCSI
HU01572 All Critical SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access (show details)
Symptom Loss of Access to Data
Environment Systems with iSCSI connected hosts
Trigger None
Workaround None
8.1.0.0 iSCSI
HU01583 All Critical Running mkhostcluster with duplicate host names or IDs in the seedfromhost argument will cause a Tier 2 recovery (show details)
Symptom Loss of Access to Data
Environment Systems running v7.7.1 or later
Trigger Run mkhostcluster with duplicate host names or IDs in the seedfromhost argument
Workaround Do not specify duplicate host names or IDs in the seedfromhost argument
8.1.0.0 Host Cluster
IC57642 All Critical A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide (show details)
Symptom Loss of Access to Data
Environment Systems with more than 2 nodes
Trigger None
Workaround None
8.1.0.0 Reliability Availability Serviceability
IT17919 All Critical A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.2 or later using Remote Copy
Trigger None
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01476 All High Importance A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed (show details)
Symptom Loss of Redundancy
Environment Systems using Remote Copy
Trigger Rename remote copy relationship using GUI or CLI
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01481 All High Importance A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or later using HyperSwap
Trigger None
Workaround None
8.1.0.0 HyperSwap
HU01506 All High Importance Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment All systems
Trigger None
Workaround Do not use the -autodelete option when creating a volume copy
8.1.0.0 Volume Mirroring
HU01569 SVC High Importance When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes (show details)
Symptom Performance
Environment SVC systems using compression
Trigger High compression workloads
Workaround None
8.1.0.0 Compression
HU01579 All High Importance In systems where all drives are of type HUSMM80xx0ASS20, it will not be possible to assign a quorum drive (show details)
Symptom Loss of Redundancy
Environment Systems with drives of type HUSMM80xx0ASS20
Trigger Attempt to assign drive type as quorum
Workaround Manually assign a different drive type as quorum
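For example, a quorum index can be manually assigned to a specific drive with chquorum (a sketch; the drive ID and quorum index are illustrative placeholders):
  chquorum -drive <drive_id> <quorum_index>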
8.1.0.0 Drives, Quorum
HU01584 All High Importance An issue in array indexing can cause a RAID array to go offline repeatedly (show details)
Symptom Offline Volumes
Environment Systems running v7.6 or later
Trigger None
Workaround Avoid doing member exchanges
8.1.0.0 RAID
HU01610 All High Importance The handling of the background copy backlog by FlashCopy can cause latency for other unrelated FlashCopy maps (show details)
Symptom Performance
Environment Systems using FlashCopy
Trigger None
Workaround Minimise the use of inter I/O group FlashCopy source and target volumes
8.1.0.0 FlashCopy
HU01614 All High Importance After a node is upgraded, hosts defined as TPGS may have paths set to inactive (show details)
Symptom Loss of Redundancy
Environment Systems running v7.6 or earlier with host type TPGS
Trigger Upgrade to v7.7 or later
Workaround None
8.1.0.0 Hosts
HU01623 All High Importance An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships (show details)
Symptom Performance
Environment Systems running v7.1 or later using Remote Copy
Trigger None
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01626 All High Importance Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue (show details)
Symptom Loss of Redundancy
Environment Systems running v7.8 or later
Trigger Downgrade to v7.7.1 or earlier
Workaround None
8.1.0.0 System Update
HU01630 All High Importance When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or earlier using FlashCopy
Trigger Upgrade to v7.6 or later
Workaround None
8.1.0.0 FlashCopy
HU01636 V5000 High Importance A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller (show details)
Symptom Performance
Environment Systems presenting storage to hosts with N2225 adapters
Trigger Host with N2225 adapters running Windows Server 2012R2
Workaround None
8.1.0.0 Hosts
HU01638 All High Importance When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail (show details)
Symptom Multiple Node Warmstarts
Environment Systems running v7.5 or earlier
Trigger Upgrade to v7.6 or later with a cluster in the same zone running v5.1 or earlier
Workaround Unzone any cluster running v5.1 or earlier from the cluster being upgraded
8.1.0.0 System Update
HU01697 All High Importance A timeout issue in RAID member management can lead to multiple node warmstarts (show details)
Symptom Multiple Node Warmstarts
Environment All systems
Trigger None
Workaround None
8.1.0.0 RAID
HU01346 All Suggested An unexpected error 1036 may appear in the event log even though a canister was never physically removed (show details)
Symptom None
Environment All Storwize Gen 2 or later systems
Trigger None
Workaround None
8.1.0.0 System Monitoring
HU01385 All Suggested A warmstart may occur if an rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy (show details)
Symptom Single Node Warmstart
Environment Systems running v7.6 or v7.7 that are using HyperSwap
Trigger Issue an rmvolumecopy or rmrcrelationship command whilst hosts are still actively using the HyperSwap volume
Workaround Do not remove a HyperSwap volume or relationship whilst hosts are still mapped to it
8.1.0.0 HyperSwap
HU01396 All Suggested HBA firmware resources can become exhausted resulting in node warmstarts (show details)
Symptom Single Node Warmstart
Environment All systems using 16Gb HBAs
Trigger None
Workaround None
8.1.0.0 Hosts
HU01446 All Suggested Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered, leading to a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.6.1 or later with VMware hosts using VAAI CAW feature
Trigger None
Workaround Avoid overloading the back-end
8.1.0.0 Hosts
HU01454 All Suggested During an array rebuild a quiesce operation can become stalled leading to a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.7 or later
Trigger None
Workaround None
8.1.0.0 RAID, Distributed RAID
HU01457 V7000 Suggested In a hybrid V7000 cluster, where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI (show details)
Symptom None
Environment Systems running v7.7.1 or later
Trigger None
Workaround Perform the required actions in the CLI
8.1.0.0 Graphical User Interface
HU01458 All Suggested A node warmstart may occur when hosts submit writes to Remote Copy secondary volumes (which are in a read-only mode) (show details)
Symptom Single Node Warmstart
Environment Systems using Remote Copy
Trigger Host write I/O to a read-only secondary volume
Workaround Stop hosts sending write I/O to Remote Copy secondary volumes
8.1.0.0 Metro Mirror, Global Mirror, Global Mirror with Change Volumes
HU01467 All Suggested Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools (show details)
Symptom None
Environment All systems
Trigger None
Workaround Increase the sampling interval
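For example, assuming the startstats command with its -interval parameter (value in minutes; 15 is illustrative), the sampling interval can be lengthened with:
  startstats -interval 15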
8.1.0.0 System Monitoring
HU01472 All Suggested A locking issue in Global Mirror can cause a warmstart on the secondary cluster (show details)
Symptom Single Node Warmstart
Environment Systems using Global Mirror
Trigger None
Workaround None
8.1.0.0 Global Mirror
HU01521 All Suggested Remote Copy does not correctly handle STOP commands for relationships which may lead to node warmstarts (show details)
Symptom Single Node Warmstart
Environment Systems using Remote Copy
Trigger None
Workaround None
8.1.0.0 Metro Mirror, Global Mirror, Global Mirror with Change Volumes
HU01522 All Suggested A node warmstart may occur when a Fibre Channel frame is received with an unexpected value for host login type (show details)
Symptom Single Node Warmstart
Environment Systems running v7.4 or later
Trigger Unexpected value for host login type in a Fibre Channel frame
Workaround Ensure that hosts are fully supported from an interop perspective and configured correctly so that they do not send an unexpected "host login type"
8.1.0.0 Hosts
HU01545 All Suggested A locking issue in the stats collection process may result in a node warmstart (show details)
Symptom Single Node Warmstart
Environment All systems
Trigger None
Workaround None
8.1.0.0 System Monitoring
HU01550 All Suggested Removing a volume with -force while it is still receiving I/O from a host may lead to a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.6 or later
Trigger Using rmvdisk with -force
Workaround Enable volume protection
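A minimal sketch of enabling volume protection, assuming the vdiskprotectionenabled and vdiskprotectiontime parameters of chsystem (the 15-minute value is illustrative); with protection enabled, volume deletion is refused for volumes that have received I/O within the protection window:
  chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15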
8.1.0.0
HU01554 All Suggested Node warmstart may occur during a livedump collection (show details)
Symptom Single Node Warmstart
Environment Systems with 8GB RAM
Trigger None
Workaround None
8.1.0.0 Support Data Collection
HU01556 All Suggested The handling of memory pool usage by Remote Copy may lead to a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.3 or later
Trigger None
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01563 V7000 Suggested Where an IBM SONAS host ID is used, it can, under rare circumstances, cause a warmstart (show details)
Symptom Single Node Warmstart
Environment Unified configurations
Trigger None
Workaround None
8.1.0.0
HU01573 All Suggested Node warmstart due to a stats collection scheduling issue (show details)
Symptom Single Node Warmstart
Environment Systems running v7.8 or later
Trigger None
Workaround None
8.1.0.0 System Monitoring
HU01582 All Suggested A compression issue in IP replication can result in a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.7 or later using IP Replication
Trigger None
Workaround None
8.1.0.0 IP Replication
HU01615 All Suggested A timing issue relating to process communication can result in a node warmstart (show details)
Symptom Single Node Warmstart
Environment All systems
Trigger None
Workaround None
8.1.0.0
HU01622 All Suggested If a Dense Drawer enclosure is put into maintenance mode during an upgrade of the enclosure management firmware, further upgrades to adjacent enclosures will be prevented (show details)
Symptom Configuration
Environment Systems running v7.8 or later with Dense Drawer enclosures
Trigger Putting a Dense Drawer enclosure in maintenance mode during an upgrade
Workaround None
8.1.0.0 System Update
HU01631 All Suggested A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts (show details)
Symptom Single Node Warmstart
Environment Systems running v7.8 or later using EasyTier
Trigger Pools in Balanced mode
Workaround Create hybrid pools with multiple tiers of storage and set Easy tier mode to Active
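For example, Easy Tier can be explicitly set to active on a pool with chmdiskgrp (a sketch; the pool name is an illustrative placeholder):
  chmdiskgrp -easytier on <pool_name>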
8.1.0.0 EasyTier
HU01653 All Suggested An automatic Tier 3 recovery process may fail due to a RAID indexing issue (show details)
Symptom Single Node Warmstart
Environment All systems
Trigger None
Workaround None
8.1.0.0 Reliability Availability Serviceability, RAID
HU01679 All Suggested An issue in the RAID component can very occasionally cause a single node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems running v7.7 or later
Trigger None
Workaround None
8.1.0.0 RAID
HU01704 All Suggested In systems using HyperSwap a rare timing window issue can result in a node warmstart (show details)
Symptom Single Node Warmstart
Environment Systems using HyperSwap
Trigger None
Workaround None
8.1.0.0 HyperSwap
HU01729 All Suggested Remote Copy uses multiple streams to send data between clusters. During a stream disconnect, a node that is unable to progress may warmstart (show details)
Symptom Single Node Warmstart
Environment Systems using Remote Copy
Trigger None
Workaround None
8.1.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01736 SVC Suggested A single node warmstart may occur when the topology setting of the cluster is changed (show details)
Symptom Single Node Warmstart
Environment SVC systems using enhanced stretched cluster or HyperSwap
Trigger Timing window between I/O requests and CLI command processing
Workaround None
8.1.0.0 HyperSwap
IT19387 V7000, V5000 Suggested When two Storwize I/O groups are connected to each other (via direct connect) 1550 errors will be logged and reappear when marked as fixed (show details)
Symptom None
Environment Storwize systems with direct connected I/O groups
Trigger Direct connected I/O groups
Workaround None
8.1.0.0 System Monitoring

4. Useful Links

Description Link
Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Supported Drive Types and Firmware Levels
SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning