Release Note for systems built with IBM Storage Virtualize


This is the release note for the 9.1.0 release and details the issues resolved between 9.1.0.0 and 9.1.0.5. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 9 April 2026.

  1. What's new in 9.1.0
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Concurrent Compatibility and Code Cross Reference in the Useful Links section.


1. What's new in 9.1.0

The following new features and enhancements have been introduced in the 9.1.0.0 release:

The following features were first introduced in Non-LTS release 8.7.3:

The following features were first introduced in Non-LTS release 8.7.2:

The following features were first introduced in Non-LTS release 8.7.1:

The following features are no longer supported on 8.7.1.0 and later:


2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Details Introduced

When creating a pool in the GUI, the option for a Data Reduction Pool is no longer shown by default.

If required, this can be enabled by selecting Settings → GUI Preferences → Advanced Pool Settings.

8.7.3.0

The following restrictions were previously valid, but have now been lifted:

Systems with FCM firmware 4_3_3 or later could not upgrade to levels 8.7.3.0 through 9.1.0.2 inclusive.

These systems should upgrade to 9.1.0.3 or later, which contains the fix for this issue.

This restriction has been lifted by SVAPAR-190431.

8.7.3.0

Systems running 9.1.0.x and 9.1.1.x required port 8443 to be opened between partnered systems when setting up partnerships for policy-based replication (asynchronous or high-availability) or partition migration.

This port needed to be opened in addition to the ports described in the product documentation.

This restriction has been lifted by SVAPAR-184208.

9.1.0.0
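For systems still running an affected level, reachability of port 8443 between the partnered systems can be confirmed from a management host before attempting partnership setup. The following is a minimal, generic Python sketch (the partner address is a placeholder, not part of the product):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

# Example (placeholder address of the partnered system):
# port_reachable("partner-system.example.com", 8443)
```

Any TCP port checker (for example nc or telnet) serves the same purpose; the point is to verify the firewall path before configuring the partnership.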

A node could go offline with node error 566, due to excessive logging related to DIMM errors.

This restriction has been lifted by SVAPAR-176238.

9.1.0.0

Multiple node warmstarts could cause loss of access to data after upgrade to 8.7.2 or later, on a system that was once an AuxFar site in a 3-site replication configuration. This was due to invalid FlashCopy configuration state after removal of 3-site replication with HyperSwap or Metro Mirror, and did not apply to 3-site policy-based replication.

This restriction has been lifted by SVAPAR-175807.

9.1.0.0

Upgrade from 9.1.0.0 or 9.1.0.1, to 9.1.1.0 or later, was not supported if host clusters existed, or a partition was associated with a management portset.

This restriction has been lifted by SVAPAR-183577.

9.1.0.0

On FS5015 and FS5035 running 9.1.0.0 and 9.1.0.1, volumes could not be expanded using the GUI or the 'chvolume -size' CLI command.

This restriction has been lifted by SVAPAR-179296.

9.1.0.0

IBM SAN Volume Controller Systems with multiple I/O groups were not supported.

This restriction has been lifted for long-term support release 9.1.0.

8.7.1.0


3. Issues Resolved

This release contains all of the fixes included in the 8.7.0.6 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2025-38550 7256751 9.1.0.3
CVE-2025-38471 7256751 9.1.0.3
CVE-2025-38718 7256751 9.1.0.3
CVE-2025-39682 7256751 9.1.0.3
CVE-2025-47268 7250955 9.1.0.2
CVE-2025-4373 7250955 9.1.0.2
CVE-2024-12133 7250955 9.1.0.2
CVE-2025-48964 7250955 9.1.0.2
CVE-2024-12243 7250955 9.1.0.2
CVE-2025-32988 7250963 9.1.0.2
CVE-2025-32989 7250963 9.1.0.2
CVE-2025-36118 7250954 9.1.0.2

3.2 APARs Resolved

APAR Affected Products Severity Description Resolved in Feature Tags
SVAPAR-198196 All HIPER Cisco Duo multi-factor authentication may fail, due to a change in which certificates are accepted by the server.
Symptom Configuration
Environment Systems using Cisco Duo multi-factor authentication
Trigger None
Workaround Install a patch, available from IBM support.
9.1.0.5 Command Line Interface, Graphical User Interface
SVAPAR-189679 FS7200, FS7300 Critical On FS7200 and FS7300, a PCIe timeout may cause node warmstarts or fibre channel port re-initialization, potentially leading to loss of access to data.
Symptom Loss of Access to Data
Environment FlashSystem 7200 and 7300 with fibre channel adapters
Trigger None
Workaround None
9.1.0.5 Fibre Channel
SVAPAR-191534 All Critical The RAID background parity scrub process might stall or run very slowly on arrays using NVMe drives.
Symptom None
Environment Systems with NVMe drives, running software levels 8.6.2-9.1.1.
Trigger Scrub process encounters a certain data pattern on unallocated extents on the array
Workaround None
9.1.0.5 RAID
SVAPAR-192675 All Critical FS5300 node canisters upgrading to 8.7.0.10, 9.1.0.4 or 9.1.1.2 may fail to boot, causing a 1073 error to appear in the event log.
Symptom Loss of Redundancy
Environment FS5300
Trigger Upgrading to 8.7.0.10, 9.1.0.4 or 9.1.1.2
Workaround None
9.1.0.5 System Update
SVAPAR-197291 FS5000 Critical Adding a disaster recovery link to a partition on a system with insufficient memory may trigger repeating node warmstarts.
Symptom Multiple Node Warmstarts
Environment Low memory systems
Trigger Adding a disaster recovery link to a partition on a system with insufficient memory.
Workaround None
9.1.0.5 Storage Partitions
SVAPAR-186752 All High Importance RAID array expansion may take much longer than expected, due to an internal data migration bottleneck. This is more likely to happen on systems with large numbers of thin volumes.
Symptom Performance
Environment Systems with large numbers of thin volumes
Trigger RAID array expansion
Workaround None
9.1.0.5 RAID
SVAPAR-191281 FS5300 HIPER Loss of access to data for FlashSystem 5300 systems running 8.7.0.9, 9.1.0.3 or 9.1.1.1.
Symptom Loss of Access to Data
Environment FlashSystem 5300s upgrading to or running 8.7.0.9, 9.1.0.3 or 9.1.1.1
Trigger None
Workaround None
9.1.0.4 System Update
SVAPAR-180211 All Critical On systems with High Speed Ethernet partnerships, network connectivity issues may result in multiple node reboots and loss of access to data.
Symptom Loss of Access to Data
Environment Systems using High Speed Ethernet partnerships.
Trigger Network issues
Workaround None
9.1.0.3 IP Replication
SVAPAR-181568 All Critical I/O timeouts may lead to node warmstarts, if a volume is added to a volume group which is using policy-based replication, and has a DR test active.
Symptom Multiple Node Warmstarts
Environment Systems using policy-based replication
Trigger Starting a DR test, then adding a volume to the volume group before the DR test ends.
Workaround None
9.1.0.3 Policy-based Replication
SVAPAR-184550 All Critical In systems using 64Gb fibre-channel adapters, a node warmstart or port resets may occur during SAN fabric disturbance or maintenance. If multiple nodes are affected by the SAN disturbance, this may cause a loss of access.
Symptom Loss of Access to Data
Environment Systems using 64Gb fibre-channel adapters.
Trigger SAN fabric disturbance or maintenance.
Workaround None.
9.1.0.3 Fibre Channel
SVAPAR-184890 All Critical Loss of access to data if the user specifies an invalid location when migrating a partition to a new storage system.
Symptom Loss of Access to Data
Environment Systems using partition migration
Trigger Running the chpartition command with the -location flag
Workaround None
9.1.0.3 Policy-based High availability, Storage Partitions
SVAPAR-187529 All Critical Node asserts caused by an unsupported number of NVMe over FC hosts logging in to the node canister.
Symptom Multiple Node Warmstarts
Environment Systems using NVMe over FC with 8.7.3 or later
Trigger A new host logging in to the system, pushing the system above the supported limit
Workaround None
9.1.0.3 NVMe Hosts
SVAPAR-190010 All Critical Recovery actions for SAS drive issues may cause multiple node warmstarts and loss of access to data, on 9.1.0 or 9.1.1.
Symptom Loss of Access to Data
Environment Systems running 9.1.0 or 9.1.1 software with SAS drives
Trigger Drive actions such as formatting, or marking drive-related errors as fixed
Workaround None
9.1.0.3 Drives
SVAPAR-190371 All Critical Node warmstarts may occur during upgrade to 8.7.0.8, 9.1.0.2 or 9.1.1.0 on systems using Ethernet VLANs, causing loss of access to data.
Symptom Loss of Access to Data
Environment Systems using VLANs
Trigger Upgrade to 8.7.0.8, 9.1.0.2 or 9.1.1.0.
Workaround None
9.1.0.3 iSCSI
HU02477 FS5000, FS5200, FS7200, FS7300, FS9200, FS9500, SVC High Importance A node warmstart may occur when a system using IP replication experiences network instability.
Symptom Single Node Warmstart
Environment Systems configured with IP partnership, IP Replication failover
Trigger Unstable network between partnered nodes
Workaround None
9.1.0.3 IP Replication
SVAPAR-172540 All High Importance Single node warmstart after loss of connection to a remote cluster when using secured IP partnerships.
Symptom Single Node Warmstart
Environment Systems using Secured IP partnerships for replication between systems
Trigger Link connection failure
Workaround None
9.1.0.3 IP Replication
SVAPAR-172706 All High Importance After upgrading to 9.1.0.0, systems using self-signed default certificates may experience authentication issues with peer clusters.
Symptom Error in Error Log
Environment Clusters configured with PBR
Trigger Upgrade to 9.1.0.0 or a later code level
Workaround Manually exchange root certificates, or remove the new type of certificate on systems upgraded to 9.1.0.0
9.1.0.3 FlashSystem Grid, Policy-based High availability, Policy-based Replication, Storage Partitions
SVAPAR-178667 All High Importance Node warmstarts caused by hung NVMe Compare and Write commands.
Symptom Multiple Node Warmstarts
Environment Systems using NVMe hosts, especially VMware hosts.
Trigger Failed transfer during Compare phase.
Workaround None.
9.1.0.3 NVMe Hosts
SVAPAR-180424 All High Importance The PBHA IP quorum application does not work on 9.1 when using an externally-signed or self-signed system certificate and an internal_communication certificate.
Symptom Error in Error Log
Environment Systems using Policy-based High availability
Trigger Deploying an IP quorum application on systems with an externally-signed or self-signed certificate
Workaround Install the full certificate chain in the system certificate of each system, or add the full certificate chain of each system to the truststore of the respective partner system. Afterwards, create a new IP quorum application.
9.1.0.3 Policy-based High availability
SVAPAR-183978 All High Importance Policy-based HA or replication partnerships between systems running 9.1 and 8.7.0 may not work correctly due to an internal firewall issue.
Symptom Configuration
Environment Policy-based HA or replication partnerships between systems running 9.1 and above and 8.7.0.7 and above.
Trigger None.
Workaround None.
9.1.0.3 Policy-based High availability, Policy-based Replication
SVAPAR-186006 All High Importance Some 64Gb Fibre Channel SFPs with FRU PN 78P7709 are not recognised by the adapter, preventing the port from being used.
Symptom Loss of Redundancy
Environment Systems with 64Gb FC adapters
Trigger None
Workaround None
9.1.0.3 Fibre Channel
SVAPAR-186062 All High Importance Snapshots are not populated in the 'refresh from snapshot' grid when the volume group ID is higher than 127.
Symptom Configuration
Environment Volume groups with an ID higher than 127
Trigger Refreshing the volume group from a snapshot
Workaround None
9.1.0.3 Graphical User Interface
SVAPAR-187071 All High Importance If a snapshot is taken while the production volume is offline, I/O requests may not be processed correctly, leading to I/O timeouts and multiple node warmstarts.
Symptom Multiple Node Warmstarts
Environment Systems using volume group snapshots
Trigger Snapshot taken while the production volume is offline.
Workaround None
9.1.0.3 Snapshots
SVAPAR-187237 All High Importance A failed attempt to create a system certificate can trigger a node warmstart, due to incomplete error handling.
Symptom Single Node Warmstart
Environment Systems running 9.1.0 or 9.1.1
Trigger System certificate creation
Workaround None
9.1.0.3 Reliability Availability Serviceability
SVAPAR-187600 All High Importance A 2100 error may occur on 9.1.0 or later systems when a partnered system upgrades from below 9.1.0 to 9.1.0 or higher, temporarily blocking replication configuration during the upgrade.
Symptom Error in Error Log
Environment Systems running 9.1.0 with policy-based replication or HA
Trigger Upgrade
Workaround Stop the partnership for any system that is upgrading from pre-9.1.0 to 9.1.0 or above.
9.1.0.3 Policy-based Replication
SVAPAR-187637 All High Importance In rare situations, the system may temporarily misinterpret the state of a remote host cluster object. This may result in an internal recovery attempt and the reporting of a 4100 error, without any customer-initiated action.
Symptom Error in Error Log
Environment System configured with policy-based high availability and host clusters
Trigger None
Workaround None
9.1.0.3 Policy-based High availability
SVAPAR-190431 All High Importance Offline nodes or error code 2100, caused by Ransomware Threat Detection automatic data collection.
Symptom Loss of Redundancy
Environment Systems with Ransomware Threat Detection enabled
Trigger None
Workaround None
9.1.0.3 Drives
SVAPAR-179505 All Suggested If the GUI is used to add a copy to a volume, and the pool has a compressed + deduplicated provisioning policy, the operation fails with a CMMVC5707E error.
Symptom None
Environment Any 8.7 or 9.1 release, if a compressed + deduplicated provisioning policy is set
Trigger Using the GUI to add a volume copy to a volume, when the target pool has a compressed + deduplicated provisioning policy
Workaround Use the "addvolumecopy" CLI command
9.1.0.3 Graphical User Interface
SVAPAR-184208 All Suggested Partnership setup using the management GUI fails, if the partnered systems cannot communicate via port 8443.
Symptom Configuration
Environment Systems that need to be in a partnership
Trigger Setting up a partnership via the management GUI
Workaround Open port 8443 between the two systems in the firewall.
9.1.0.3 Policy-based High availability, Policy-based Replication
SVAPAR-185861 All Suggested Systems using 3-site replication (HA+DR) can experience a warmstart when the site link configuration changes, while some I/O is outstanding.
Symptom Single Node Warmstart
Environment Systems using 3-site HA+DR
Trigger A slow link to the DR site
Workaround None
9.1.0.3 Policy-based High availability
SVAPAR-186575 All Suggested A 4110 error may be raised if the REST API authentication token expires while the REST API is receiving a large number of requests.
Symptom Error in Error Log
Environment Systems using Policy-based replication, Policy-based High Availability or FlashSystem grid.
Trigger REST API token expires
Workaround None
9.1.0.3 FlashSystem Grid, Policy-based High availability, Policy-based Replication
SVAPAR-186698 All Suggested Snaps collected from Storage Insights do not contain enclosure dumps and battery snaps.
Symptom None
Environment Systems with cloud callhome enabled
Trigger Collecting a snap remotely
Workaround Collect a snap using the GUI or CLI
9.1.0.3 Support Data Collection
SVAPAR-187532 All Suggested The GUI does not offer the capacity-optimized option when linking pools for PBHA, if child pools are in use.
Symptom Configuration
Environment Systems running 9.1.0 with PBHA
Trigger Configuring PBHA with child pools
Workaround Use CLI to select capacity_optimized
9.1.0.3 Policy-based High availability
SVAPAR-188555 All Suggested The system has an invalid internal communication certificate, or an invalid text representation of the system certificate.
Symptom Configuration
Environment Systems with internal communication certificates or externally signed system certificates
Trigger Automatic creation of internal communication certificate on 9.1 or installation of externally signed system certificate
Workaround Re-install the certificates
9.1.0.3 No Specific Feature
SVAPAR-181640 All HIPER After expanding a volume that is being asynchronously replicated, data written to the recently expanded region of the disk may not get replicated to the remote site if the replication is running in low bandwidth mode. This can lead to an undetected data loss at the DR site.
Symptom Data Integrity Loss
Environment Systems using asynchronous PBR
Trigger Expansion of a volume that is performing asynchronous replication
Workaround None
9.1.0.2 Policy-based Replication
SVAPAR-173858 All Critical Expanding a production volume that is using asynchronous replication may trigger multiple node warmstarts and an outage on the recovery system.
Symptom Loss of Access to Data
Environment System running software version 9.1.0 and using an asynchronous disaster recovery replication policy.
Trigger Expanding a volume in a volume group with an asynchronous disaster recovery replication policy.
Workaround None
9.1.0.2 Policy-based Replication
SVAPAR-173936 All Critical A timing window can lead to a resource leak in the thin-provisioning component. This can lead to higher volume response times, and eventually a node warmstart caused by an I/O timeout.
Symptom Performance
Environment Systems using thin-provisioned volumes.
Trigger None
Workaround Warmstarting the affected node will clear the issue.
9.1.0.2 Thin Provisioning
SVAPAR-175807 All Critical Multiple node warmstarts may cause loss of access to data after upgrade to 8.7.2 or later, on a system that was once an AuxFar site in a 3-site replication configuration. This is due to invalid FlashCopy configuration state after removal of 3-site replication with HyperSwap or Metro Mirror, and does not apply to 3-site policy-based replication.
Symptom Loss of Access to Data
Environment Any system that was previously an AuxFar site in a 3-site replication configuration.
Trigger None
Workaround None
9.1.0.2 3-Site using HyperSwap or Metro Mirror
SVAPAR-176238 All Critical A node may go offline with node error 566, due to excessive logging related to DIMM errors.
Symptom Loss of Redundancy
Environment Systems running 9.1.0 software
Trigger Logging of correctable DIMM errors
Workaround None
9.1.0.2 Reliability Availability Serviceability
SVAPAR-177639 All Critical Deletion of volumes in 3-site (HA+DR) replication may cause multiple node warmstarts. This can only occur if the volume previously used 2-site asynchronous replication, and was then converted to 3-site (HA+DR).
Symptom Loss of Access to Data
Environment FlashSystem 7300 and C200 systems using 3-site replication
Trigger Deletion of a volume that previously used 2-site asynchronous replication
Workaround None
9.1.0.2 Policy-based High availability, Policy-based Replication
SVAPAR-178250 All Critical Node warmstarts may be triggered by a race condition during NVMe host reset, if the host is using Compare and Write commands. This can cause a loss of access to data.
Symptom Loss of Access to Data
Environment Systems using NVMe hosts
Trigger None
Workaround None
9.1.0.2 NVMe Hosts
SVAPAR-178402 All Critical Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
Symptom Multiple Node Warmstarts
Environment All systems using fibre channel connectivity.
Trigger A high number of errors on the fibre channel network.
Workaround None
9.1.0.2 Fibre Channel
SVAPAR-179030 All Critical The CIMOM configuration interface is no longer supported in 9.1.0. Attempting to manually restart the cimserver service may cause a node warmstart, and loss of configuration access.
Symptom Configuration
Environment Systems running 9.1.0
Trigger Attempting to restart the cimserver service
Workaround None
9.1.0.2 Command Line Interface
SVAPAR-179930 All Critical Node warmstarts may occur when backend I/O is active on a fibre channel login that experiences a logout, on code level 9.1.0.0 or 9.1.0.1.
Symptom Multiple Node Warmstarts
Environment Fibre channel logins between nodes and backend storage or fibre channel logins between nodes. Code level 9.1.0.0 or 9.1.0.1.
Trigger None
Workaround None
9.1.0.2 Backend Storage, Fibre Channel
SVAPAR-180838 All Critical Multiple node warmstarts on systems running 9.1 if High Availability (PBHA) is added to an existing asynchronous replication (PBR) setup and the volumes are using persistent reserves.
Symptom Multiple Node Warmstarts
Environment 3-Site Replication (PBHA + PBR DR) on Systems running 9.1
Trigger Adding Policy based High Availability to an existing asynchronous replication configuration. This includes using HA to migrate a volume to another FlashSystem
Workaround None
9.1.0.2 Policy-based High availability, Policy-based Replication
SVAPAR-169255 All High Importance High peak response times when adding snapshots for a volume group containing mirrored vdisks.
Symptom Performance
Environment Stretched cluster/mirrored vdisks and the volume group snapshot feature.
Trigger Add a snapshot for a volume group containing mirrored vdisks.
Workaround None
9.1.0.2 Performance, Snapshots, Volume Mirroring
SVAPAR-177127 All High Importance The system may fail to install an externally signed system certificate via the GUI.
Symptom Configuration
Environment All systems using system software version 9.1.0.0 or later
Trigger Installing an externally signed system certificate.
Workaround Use the CLI to install the certificate.
9.1.0.2 Encryption, Graphical User Interface
SVAPAR-178258 & SVAPAR-178262 All High Importance Systems may experience performance issues when configured with iSCSI hosts.
Symptom Performance
Environment Systems using iSCSI hosts.
Trigger None
Workaround None
9.1.0.2 Performance, iSCSI
SVAPAR-179128 All High Importance A single-node warmstart may occur on a system using policy-based replication or HA, due to a timing window triggered by disconnection of a partnership.
Symptom Single Node Warmstart
Environment Systems using policy-based replication or policy-based High Availability
Trigger Disconnection of a partnership while messages are in flight between the two systems
Workaround None
9.1.0.2 Policy-based Replication
SVAPAR-179296 All High Importance In 9.1.0.0 and 9.1.0.1, the "chvolume -size" command has no effect on FS5015 and FS5035. This prevents GUI volume resizing from working correctly.
Symptom Configuration
Environment IBM Storage FlashSystem 5015 and 5035.
Trigger Resizing a volume using the "chvolume -size" CLI command or the GUI
Workaround Use the expandvdisksize command to expand the provisioned capacity of a volume by a specified amount.
9.1.0.2
SVAPAR-181556 All High Importance Configuration backups and support data collection (snap) can fail on systems running 9.1 if there are any invalid UTF-8 characters in the login banner.
Symptom Configuration
Environment Systems with a login banner configured
Trigger None
Workaround Re-configure the banner to use valid UTF-8 characters
9.1.0.2 Reliability Availability Serviceability
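When re-configuring the banner for the issue above, the replacement text can first be checked locally for valid UTF-8. This is a generic sketch, not a product tool:

```python
def is_valid_utf8(data: bytes) -> bool:
    """Return True if the byte string decodes cleanly as UTF-8."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Example: validate a banner file before applying it to the system.
# with open("banner.txt", "rb") as f:
#     print(is_valid_utf8(f.read()))
```

The same check can be done with any tool that performs strict UTF-8 decoding (for example iconv -f UTF-8).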
SVAPAR-182188 All High Importance A single node warmstart may occur when the Ransomware Threat Detection process stops functioning.
Symptom Single Node Warmstart
Environment Systems using Ransomware Threat Detection
Trigger None
Workaround None
9.1.0.2 Reliability Availability Serviceability
SVAPAR-182909 All High Importance FlashSystem CLI, GUI and REST API are inaccessible on releases 9.1.0.0 and 9.1.0.1.
Symptom Configuration
Environment Code levels 9.1.0.0 and 9.1.0.1, with Policy-based High Availability and highly available snapshots configured.
Trigger The creation of a highly available snapshot
Workaround Warmstart the config node.
9.1.0.2 Command Line Interface, Graphical User Interface, Policy-based High availability, REST API
SVAPAR-139491 All Suggested VMware hosts attached via NVMe may log errors related to opcode 0x5.
Symptom Configuration
Environment Systems with NVMe hosts.
Trigger None.
Workaround None.
9.1.0.2 NVMe
SVAPAR-170958 All Suggested Policy-based asynchronous replication may not correctly balance the available bandwidth between nodes after a node goes offline, potentially causing a degradation of the recovery point.
Symptom Performance
Environment Systems using asynchronous policy-based replication
Trigger Node goes offline
Workaround Warmstart affected nodes
9.1.0.2 Policy-based Replication
SVAPAR-173310 All Suggested If both nodes in an I/O group go down unexpectedly, invalid snapshots may remain in the system which cannot be removed.
Symptom Configuration
Environment Systems using volume group snapshots.
Trigger Both nodes in an I/O group go down unexpectedly.
Workaround None.
9.1.0.2 Snapshots
SVAPAR-175855 All Suggested A new volume may incorrectly show a source_volume_name and source_volume_id, when it inherits the vdisk ID of a deleted clone volume.
Symptom Configuration
Environment Systems using volume group snapshots and clones
Trigger Create a snapshot of a VG.
Create a clone from the snapshot.
Delete the clone.
Create a new volume.
Workaround None.
9.1.0.2 Snapshots
SVAPAR-177359 All Suggested Users may need to log out of iSCSI sessions individually, as simultaneous logout is not supported.
Symptom None
Environment iSCSI hosts
Trigger iSCSI logout processing
Workaround None
9.1.0.2 iSCSI
SVAPAR-177771 All Suggested Encryption with internal key management may be unable to perform the scheduled daily re-key of the internal key. The event log will show a daily repeating information event when this occurs. The current internal recovery key will continue to function.
Symptom Configuration
Environment Systems using software version 9.1.0.x and encryption with internal key management.
Trigger A node reboot or warmstart with the affected configuration.
Workaround None
9.1.0.2 Encryption
SVAPAR-178208 All Suggested A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
Symptom Single Node Warmstart
Environment Systems with NVMe over RDMA or TCP hosts attached.
Trigger None
Workaround None
9.1.0.2 NVMe Hosts
SVAPAR-178320 All Suggested When an invalid subject alternative name is entered for a mksystemcertstore command, the system returns "CMMVC5786E The action failed because the cluster is not in a stable state".
Symptom Configuration
Environment All systems at 9.1.0 or higher.
Trigger Running the mksystemcertstore command with an invalid subject alternative name value.
Workaround None
9.1.0.2 Encryption
SVAPAR-178323 All Suggested The system may attempt to authenticate an LDAP user who is not in any remote user group with a null password.
Symptom Configuration
Environment Systems using LDAP authentication.
Trigger Attempting to sign into the system with a user who is not in any remote user group.
Workaround None
9.1.0.2 LDAP
SVAPAR-179874 All Suggested The GUI displays the old partition name after the partition is renamed.
Symptom Configuration
Environment The GUI's volume groups view continues to display the previous partition name, for volume groups associated with the renamed partition.
Trigger Renaming partition
Workaround None
9.1.0.2 Graphical User Interface
SVAPAR-183577 All Suggested Upgrade from 9.1.0.0 or 9.1.0.1, to 9.1.1.0 or later, is not supported if host clusters exist, or a partition is associated with a management portset.
Symptom Configuration
Environment Systems with host clusters, or partitions with a management portset.
Trigger Upgrade from 9.1.0.0 or 9.1.0.1, to 9.1.1.0 or later.
Workaround Upgrade via 9.1.0.2 or a later 9.1.0 PTF.
9.1.0.2 Policy-based High availability
SVAPAR-172745 All HIPER Systems using policy-based high availability (PBHA) on 9.1.0.0 may experience a detected data loss after performing configuration changes and multiple failovers of the active management system.
Symptom Data Integrity Loss
Environment Systems running 9.1.0.0 with PBHA configured
Trigger Failing over the AMS to the other system
Workaround None
9.1.0.1 Policy-based High availability
SVAPAR-172478 All Suggested Systems running 9.1.0.0 may incorrectly report 1585 "Could not connect to DNS server" errors.
Symptom Error in Error Log
Environment System with DNS configured
Trigger None
Workaround Register the cluster IP address in all configured DNS servers
9.1.0.1 No Specific Feature

4. Useful Links

Description Link
Product Documentation
Concurrent Compatibility and Code Cross Reference
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Configuration Limits
IBM Storage Virtualize Policy-based replication and High Availability Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning
IBM Storage Virtualize IP quorum application requirements