IBM System Storage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++++

-------------------------------------------------------------------------------

CONTENTS

1. Introduction
2. Available Education
3. Pre-Requisites
4. Code Levels
5. Problems Resolved and New Features in this Build
6. Installation Instructions for New Clusters
7. Further Documentation
8. Known Issues and Restrictions in this Level
9. Maximum Configurations
10. Licensing Information

-------------------------------------------------------------------------------

1. Introduction

This document describes how to install version 4.2.1.8 of the IBM System
Storage SAN Volume Controller (2145) software. This release is a service
release; it addresses the APARs detailed in Section 5.

Please refer to the Recommended Software List and Supported Hardware List on
the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------

2. Available Education

A course on SAN Volume Controller Planning and Implementation is available.
For further information or enrolment, contact your local IBM representative.

-------------------------------------------------------------------------------

3. Pre-Requisites

Before installing this code level, please check that the following
pre-requisites are met. Also note the concurrent upgrade (upgrade with I/O
running) restrictions.

Check that there are no unfixed errors in the error log or on the front
panel. Use normal service procedures to resolve these errors before
proceeding.

Before upgrading the SVC cluster, we recommend using the SVC Software Upgrade
test utility. See this page for more details:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

If you are upgrading from a code level below 4.2.1.0, it is important that
you upgrade the SVC Console code (GUI) before you install the SVC code. The
latest level of the GUI is available from the same website as the SVC
software.
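Part of these pre-upgrade checks is confirming the cluster's current code
level. The sketch below shows one way this check could be automated; it is
illustrative only, and assumes CLI access to the cluster (for example via ssh
as the admin user). The `svcinfo lscluster` output embedded in the script is
an invented sample, not output from a real cluster.

```shell
# Hedged sketch: decide whether an upgrade is needed by reading the
# "code_version" field from detailed "svcinfo lscluster" output.
# On a real cluster this output would come from something like:
#   ssh admin@<cluster IP> svcinfo lscluster <cluster name>
# The sample below is invented for illustration only.
sample_output='id 000002006AC03FCA
name mycluster
code_version 4.2.0.3 (build 7.5.0704190000)'

current=$(printf '%s\n' "$sample_output" | awk '/^code_version/ {print $2}')
target="4.2.1.8"

if [ "$current" = "$target" ]; then
    echo "Cluster is already at $target; no upgrade needed"
else
    echo "Cluster is at $current; upgrade to $target required"
fi
```

Parsing the detailed listing rather than eyeballing the GUI makes the check
repeatable across a fleet of clusters.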
SVC Console (GUI) 4.2.1.xxx is not fully compatible with SVC versions earlier
than 4.2.1.0. Therefore the new SVC Console should only be used with SVC
clusters running previous versions of SVC code in order to upgrade them to
SVC V4.2.1.x. Please see the known issues in section 8, as well as the
details in the download document for the SVC Console, for more details.

For existing clusters, please check which level your cluster is currently
running before proceeding. Refer to the Configuration Guide (v4.2.1),
Chapter 7 ("Using the SAN Volume Controller Console"), section "Viewing
Cluster Properties", to determine the code level currently installed on your
cluster (look for "Code Version").

If your cluster is at 3.1.0.5 or higher, follow the upgrade instructions
given in the Configuration Guide (v4.2.1), Chapter 10 ("Upgrading the SAN
Volume Controller software"), section "Upgrading the SAN Volume Controller
software using the SAN Volume Controller Console".

If you are running a version of SVC software older than 3.1.0.5, you will
have to perform multiple upgrades to install SVC 4.2.1.8 concurrently on your
cluster. Please see the following web page for the full upgrade compatibility
matrix:

http://www.ibm.com/storage/support/software/sanvc/code

If you are installing a new SVC cluster, follow the procedure in section 6.
This will upgrade the SVC cluster to version 4.2.1.8 before the cluster is
configured for use. Please note the warning at the beginning of that
procedure and only proceed if you are sure there is no data on the cluster.

-------------------------------------------------------------------------------

4. Code Levels

2145 Software: 2145 Release Level 4.2.1.8 (7.8.0807180000)

-------------------------------------------------------------------------------

5.
Problems Resolved and New Features in this Build

New features in previous 4.2.x releases:

New features in SVC 4.2.1.7:
  Support for Microsoft Windows Server 2008

New service features in SVC 4.2.1.6:
  Boot error code 250 introduced to identify Ethernet fault during node
  start up sequence

New service features in SVC 4.2.1.4:
  Node error code 556 introduced to identify duplicate node WWNN condition

New features in SVC 4.2.1:
  Incremental FlashCopy
  Cascaded FlashCopy
  Dynamically adjustable FlashCopy & Remote Copy limits
  Automatic cache partitioning
  Error code 1630 downgraded to a warning (no call home), and new error code
  1627 introduced to call home if 1630 remains unresolved for too long

New features in SVC 4.2.0.3:
  Email inventory and call home directly from the SVC cluster

New features in SVC 4.2.0:
  Multiple Target FlashCopy
  2145-8G4 Hardware Platform
  Fast Node Reset
  Increased FlashCopy & Remote Copy limits
  Role-based authentication

APARs resolved in this release (4.2.1.8):

Critical Fixes
  IC56226  Node asserts when upgrading with Metro Mirror relationships
           running
  IC56963  Cluster Error 1001 when creating incremental FlashCopy mappings
           after cluster upgrade

High Importance Fixes
  IC55558  Node assert when using svctask addhostiogrp or rmhostiogrp
           commands
  IC56517  SVC write cache performance issues
  IC56559  Global TPRLO FC frame causes SVC warmstarts

Suggested Fixes
  IC56811  Cluster Error 1627 logged in error when using storage controllers
           with multiple WWNNs

APARs resolved in previous 4.2.x releases:

4.2.1.7
  NONE

4.2.1.6
High Importance Fixes
  IC55060  Boot error code 250 for detection of Ethernet port initialization
           fault during node start up
  IC55494  Node assert when preparing node for livedump
  IC55742  Fix for SAN transport errors in 4Gbs Fibre Channel SAN
           environments
  IC55826  Additional SCSI qualifier (LONG BUSY) for host failover
           improvements with SDDPCM
Suggested Fixes
  IC55531  Unable to start intra-cluster Remote Copy relationships when
           partnered with cluster running
V4.2.0 or earlier
  IC55535  Performance improvements when using EMC Symmetrix storage
           subsystems

4.2.1.5
High Importance Fixes
  IC55683  Unable to create/delete vdisk-to-host mappings with SVC Console
           (GUI) due to incorrect output from
           svcinfo lsvdisk -filtervalue vdisk_UID=
Suggested Fixes
  IC55315  Incorrect vdisk count reported in svcinfo lsmdiskgrp

4.2.1.4
Critical Fixes
  IC55200  Node asserts on primary cluster caused by degraded performance on
           secondary cluster during Remote Copy
High Importance Fixes
  IC55005  Support for 2TB managed disk on Clariion storage
  IC55049  Node assert when filtervalue string is >15 characters
Suggested Fixes
  IC54854  Remove underscore from email domain for callhome records
  IC55306  Fix to allow image mode migrations for certain target managed
           disk sizes

4.2.1.3
High Importance Fixes
  IC54979  Node assert caused by SVC statistics collection
  IC55162  Performance issue with SVC V4.2.1.0-4.2.1.2
  IC55297  Node assert during software upgrade

4.2.1.2
Critical Fixes
  IC54753  Node assert during backout of failed software upgrade
High Importance Fixes
  IC54716  Upgrade failure from V3.1.0.5 to V4.2 when running svcinfo lsvdisk
  IC54237  Node offline caused by a small timing window when restarting a
           node with FlashCopy enabled
  IC54161  Node assert caused by a small timing window in FlashCopy
  IC54537  Node Error 578 caused by intermittent inter-cluster link
Suggested Fixes
  IC54611  svcinfo lsiostatsdumps may not display all files
  IC54342  Deleting a file using the SVC Console (GUI) returns a false error

4.2.1.1
Critical Fixes
  IC52530  Cluster Error 1001 caused by image mode vdisk going offline
           during migration to another mdisk group
  IC52574  Cluster Error 1001 or Node Error 90x caused by specifying a
           managed disk twice during vdisk creation
  IC53251  Lease Expiry caused by SAN fabric disruption
  IC54214  Node asserts caused by an invalid audit log entry
  IC54232  Node asserts during SVC software upgrades when using Metro Mirror
High Importance Fixes
  IC49579  Node assert caused
by fabric disruption
  IC53040  Node assert caused by removing vdiskhostmap from AIX host if ACA
           is set
  IC53323  Correctly handle 02,0b,4b,00 check conditions from Hitachi
           storage controllers
  IC53436  Node assert during internal Metro Mirror state change
  IC53485  Allow replacement node to use different switch ports
  IC53843  Node assert caused by command line timeout
  IC53877  Node Error 578 after power fluctuations
  IC53970  Node assert caused by duplicate Fibre Channel frames
  IC54094  Node assert during statistics collection
Suggested Fixes
  IC46526  SVC will no longer allow an mdisk which is read only or reserved
           to be added to a managed disk group
  IC51755  Update lscontroller VPD when controller inquiry data is updated
  IC53109  Service Processor (BMC) firmware upgrade to resolve false voltage
           errors
  IC53274  SVC will no longer log 1370 'Managed Disk ERP' messages if a
           controller does not have a LUN 0
  IC53303  SVC does not correctly report frame size in FLOGI
  IC53546  svcinfo lsvdisk does not update the name of an fcmap if it is
           changed
  IC53583  Cluster Error 2100 caused by full web server logs
  IC53881  Correctly handle vdisk UDID for OpenVMS SAN boot
  IC54035  Non-default host types not set correctly when adding a host to
           additional I/O groups

4.2.0.5
Critical Fixes
  IC54214  Node asserts caused by an invalid audit log entry
  IC54232  Node asserts during SVC software upgrades when using Metro Mirror

4.2.0.4
Critical Fixes
  IC53521  Performing a detailed host listing during a small timing window
           when a node is offline causes Cluster Error 1001 or Node Error 90x
  IC53655  Cluster Error 1001 caused by connectivity changes to storage
           controllers with multiple WWNNs
Suggested Fixes
  IC53515  SVC calls home erroneously with transient errors

4.2.0.3
Critical Fixes
  IC53282  State change of a storage controller during image mode migration
           causes Node Error 900
High Importance Fixes
  IC51110  SVC sends large numbers of LOGO frames to storage controller
           after error inject
  IC52753  Improvements handling medium
errors from storage controllers
  IC52869  Node assert using a two node and an eight node cluster in a Metro
           Mirror partnership
  IC53067  Low probability node assert caused by race condition
Suggested Fixes
  IC52735  Timestamps in error log skewed during node failover
  IC53137  SVC V4.2 does not correctly handle timezones

4.2.0.2
Critical Fixes
  IC52988  Node Error 578 after site power failure

4.2.0.1
Critical Fixes
  IC49968  Deleting a vdisk whilst it is being formatted causes Node Error
           900
  IC51374  Lease Expiry during cache flush
  IC52232  Increase priority of lease renewal processing to avoid cluster
           lease expiry
  IC52295  Node assert caused by overlapping reads & writes to a vdisk
           during a node failover
High Importance Fixes
  IC49691  Node assert when Kasha logs into SVC
  IC49950  Node assert caused by SAN or controller instability
  IC50330  Node assert during timing window when issuing SVC commands
  IC50646  Improve handling of Asymmetric LUN Access
  IC50713  Node assert due to inter-cluster link instability
  IC50768  Node assert caused by incorrectly formatted stats filenames
  IC51129  Node assert due to timing window for outstanding I/Os to offline
           vdisks
  IC51176  Improvements handling medium errors from storage controllers
  IC51394  Node assert caused by SAN or controller instability
  IC51637  Node assert caused by ethernet link negotiation on non
           configuration nodes
  IC51694  Node assert caused by lock contention
  IC51797  Node assert caused by illegally addressed SCSI Test Unit Ready
           from host
  IC52116  Node assert when deleting stats files
  IC52182  Repeated node asserts caused by failing 4 port FC Card
  IC52335  Node assert caused by lock contention
  IC52401  Node Error 900 caused by ethernet status change during software
           upgrade
  IC52414  Node assert during timing window when issuing SVC commands
  IC52469  Node assert caused by KCQ 02,04,03 from EMC controllers
  IC52534  Repeated node assert caused by Remote Copy recovery
  IC52559  Node assert caused by SAN or controller instability
  IC52597  Node assert
handling aborted IO during RemoteCopy synchronization
Suggested Fixes
  IC49298  Add mdisk ID field to Nm_stats file
  IC49526  Setting an SVC IP address to a value with leading 0s prevents
           ethernet access to the cluster
  IC50515  FlashCopy will now discard write cache data for the target vdisk
           when a FC map is stopped
  IC50751  Internal hard disk failure now reported as Cluster Error 1030
  IC50805  SVC will no longer log 'LUN Discovery Failed' messages if a
           controller does not have a LUN 0
  IC51605  SVC unable to detect DS8K controller after host ports changed
           from FICON to FCP
  IC51753  Firewall rule revised to allow 70 ICMP pings per minute
  IC51763  SVC may not correctly load balance mdisk access in large
           configurations
  IC51901  SVC may not slander ports during certain types of fabric
           disruption
  IC52292  Unprintable characters in storage controller serial number
           prevents successful configuration backups
  IC52507  Reduce the number of duplicate errors logged when communicating
           with storage controllers
  IC52508  SVC backend write performance degraded when under very high read
           workload

-------------------------------------------------------------------------------

6. Installation Instructions For New Clusters

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
NOTE: This procedure will destroy any existing data or configuration. If you
wish to preserve the current SVC configuration and all data virtualized by
the cluster, then please follow the upgrade instructions given in Chapter 10
of the Configuration Guide.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

* This procedure is for new installations only. Please read the introduction
  again to make sure you understand which procedure you wish to use.
* Follow the instructions in Chapter 6 ("Creating a SAN Volume Controller
  cluster") of the Configuration Guide, sections "Creating a cluster from
  the front panel" and "Creating a cluster using the SAN Volume Controller
  Console", to create a cluster on ONLY the first node. Do not add the other
  nodes at this point.

* Follow the instructions in the section entitled "Viewing Cluster
  Properties" in Chapter 7 ("Using the SAN Volume Controller Console") of
  the Configuration Guide to display the "Code Version". If the code version
  of the cluster is 4.2.1.8 (7.8.0807180000) then you do not need to perform
  any further actions; continue to add nodes and configure your cluster as
  described in the Configuration Guide. Otherwise, continue to follow this
  procedure.

* Ensure that you have set a unique IP address for service mode on the
  existing single node cluster.

* Put the node in the existing single node cluster into Service Mode (see
  "Setting Service Mode" under "Recover cluster navigation" in Chapter 6
  ("Using the front panel of the SAN Volume Controller") of the Service
  Guide):
  1. On the front panel, navigate to the 'Cluster' main field and then left
     to the 'Recover Cluster?' secondary field.
  2. Press 'Select'. The screen should now say 'Service Access?'.
  3. Press and hold the 'Down' button.
  4. Press and release the 'Select' button.
  5. Release the 'Down' button.
  The node will restart and display the Service IP address. The front panel
  buttons are disabled whilst in this state.

* Apply the upgrade package.
  1. Open a web browser and point it at https://<service IP address>
  2. Enter the admin or service user and password that was configured when
     you set up the one node cluster.
  3. Click "Upgrade Software" on the left side of the web page.
  4. Click the "Upload" button and upload the IBM2145_INSTALL_4.2.1.8 file.
  5. Once the upload completes, press the "Continue" button. This will take
     you to a page with a list of available upgrade packages.
  6.
Select the file you just uploaded from the list of available software
     upgrade files and check the "Skip prerequisite checking" box. Click the
     "Apply" button.
  7. Click the "Confirm" button.
  8. The node will now reboot and apply the new software.

  Note: An SVC dump file may be generated during this upgrade. This is
  expected and can be ignored.

* Once upgraded, create a new cluster on the recently upgraded node.
  Instructions for this are in Chapter 6 ("Creating a SAN Volume Controller
  cluster") of the Configuration Guide. At this point you will have a new
  one node cluster running 4.2.1.8 code.

* After this process is complete, check that the software version number is
  4.2.1.8 (7.8.0807180000).

* You can now add the other nodes to the cluster and they will automatically
  be upgraded.

-------------------------------------------------------------------------------

7. Further Documentation

All publications can be downloaded from the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------

8. Known Issues and Restrictions in this Level

The current product limitations are detailed in the product restrictions
document available from:

http://www.ibm.com/storage/support/2145

Please read all the warnings in section 3 (Pre-Requisites).

SVC V4.2.0 introduced new FlashCopy (FC) behaviour. If a 'svctask stopfcmap'
command is issued to a FC mapping which is in the idle_or_copied state, then
the mapping will stay in the idle_or_copied state, rather than changing into
the stopped state.
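The state rule above can be illustrated with a small sketch. This is not SVC
code, only a model of the documented behaviour: stopping a mapping that is in
the idle_or_copied state leaves it in idle_or_copied, while stopping a
mapping in an active state such as copying moves it to stopped.

```shell
# Illustrative model (not SVC code) of the V4.2.0 'svctask stopfcmap'
# behaviour described above.  Prints the state a FlashCopy mapping ends
# up in after a stop is issued against it.
state_after_stop() {
    case "$1" in
        idle_or_copied) echo "idle_or_copied" ;;  # stays put, per V4.2.0
        *)              echo "stopped"        ;;  # e.g. a copying mapping
    esac
}

state_after_stop idle_or_copied   # prints: idle_or_copied
state_after_stop copying          # prints: stopped
```

On a real cluster you would confirm a mapping's state with
'svcinfo lsfcmap' after issuing the stop.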
Please read the following flashes before upgrading to SVC Console (GUI)
V4.2.1:

Known Issues with SVC Console (GUI) V4.2.1.xxx
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003188

Please read the following flashes before upgrading to SVC V4.2.1.8:

Potential Issue When Upgrading From SVC V4.1.1.0/1/2
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003069

Offline or Degraded Disks May Result in Loss of I/O Access During Code
Upgrade
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002971

If an SVC Code Upgrade Stalls or Fails then Contact IBM Support for Further
Assistance
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002894

-------------------------------------------------------------------------------

9. Maximum Configurations

The maximum configurations for SVC V4.2.1.8 are documented in the SVC V4.2.x
Restrictions document, which is available from:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003096

-------------------------------------------------------------------------------

10. Licensing Information

Licenses relevant to the SAN Volume Controller can be viewed using a web
browser with access to the SVC cluster (such as the Internet Explorer
browser on the SVC Master Console) via the following URL:

http://<cluster name>/notices.html

where <cluster name> is the network name or the IP address of the cluster.

-------------------------------------------------------------------------------