IBM TotalStorage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++

----------------------------------------------------------------------------

CONTENTS

1. Introduction
2. Available education
3. Pre-requisites
4. Code levels
5. Installation instructions
6. Further documentation
7. Known issues and restrictions in this level
8. Maximum configurations
9. New features in this build

----------------------------------------------------------------------------

1. INTRODUCTION

This document describes how to install version 3.1.0.5 of the IBM
TotalStorage SAN Volume Controller (2145) software.

This release is a service release: it addresses a number of problems and
adds service features, as detailed in section 9.

Please refer to the Recommended Software List and Supported Hardware List
on the support website:

   http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

2. AVAILABLE EDUCATION

A course on SAN Volume Controller Planning and Implementation is
available. For further information or enrollment, contact your local IBM
representative.

----------------------------------------------------------------------------

3. PRE-REQUISITES

Before installing this code level, check that the following pre-requisites
are met. Also note the restrictions on concurrent upgrade (upgrade with
I/O running); these may apply if you have not upgraded the base code since
adding new SVC nodes.

Check that there are no errors in the error log or on the front panel.
Use normal service procedures to resolve any such errors before
proceeding.

SVC V2.1.0.0 and above contain a self-checking mechanism to ensure that
each UPS is correctly cabled, i.e. that each node's signal and power
cables are connected to the same UPS. Before starting this code upgrade,
check that the cables are correctly connected.
If they are incorrectly cabled, shut down the cluster as detailed in the
Configuration Guide, correct the cabling, restart the cluster, and then
proceed with the upgrade.

If you are upgrading from a code level below 3.1.0.0, it is important
that you upgrade the SVC Console code (GUI) before you install the SVC
code. The latest level of the GUI (3.1.0.548 or later) is available from
the same website as this code. This latest level of GUI works with older
SVC levels as well as 3.1.x.x; older levels of GUI code will not work
with SVC 3.1.x.x.

The preferred method of installing this release is a clean install, as
detailed in section 5. If a clean install is not possible, the SVC
cluster must be at a release level of at least 1.2.0.0 to be upgraded
with no loss of data or configuration. If you are upgrading from a
1.1.x.x level of SVC, you must first upgrade to at least a 1.2 level of
SVC code before upgrading to a 3.1.x.x level.

For existing clusters, check which level your cluster is currently
running before proceeding. Refer to the Configuration Guide (v3.1.0),
Chapter 3 ("Using the SAN Volume Controller Console"), section "Viewing
cluster properties", and look for "Code Version".

If your cluster is at 1.2.0.0 or higher, follow the upgrade instructions
in the Configuration Guide (v3.1.0), Chapter 6 ("Upgrading the SAN
Volume Controller software"), section "Upgrading the SAN Volume
Controller software using the SAN Volume Controller Console".

Please refer to:

   http://www.ibm.com/storage/support/software/sanvc/code

to see all release levels of SVC. If your cluster is not at one of these
levels, contact IBM Support before attempting to upgrade the software.

If you are installing a new 2145, follow the procedure in section 5
below. This upgrades the SVC nodes to the 3.1.0.5 level before the
cluster is configured for use.
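The pre-upgrade checks described above (confirming the current code level,
and staying under the 256-host-object limit noted below) can be scripted.
The sketch below is illustrative only: the "code_level" field name, the
"-nohdr" flag, and the canned command outputs are assumptions standing in
for output you would capture from a live cluster, e.g. via
"ssh admin@<cluster-ip> svcinfo lscluster <clustername>".

```shell
#!/bin/sh
# Canned "svcinfo lscluster" output, standing in for a live query.
# Field names and values here are illustrative assumptions.
cluster_info='id 0000020060C06FCA
name mycluster
code_level 2.1.0.5'

# Pull out the reported code level.
code_level=$(printf '%s\n' "$cluster_info" | awk '/^code_level/ {print $2}')
echo "cluster code level: $code_level"

# Canned "svcinfo lshost -nohdr" output: one line per host object.
host_list='0 host0 2 online
1 host1 2 online
2 host2 2 online'

# Count non-empty lines, i.e. host objects defined on the cluster.
host_count=$(printf '%s\n' "$host_list" | grep -c .)
echo "host objects defined: $host_count"

# The 3.1 upgrade fails and backs out if more than 256 host objects exist.
if [ "$host_count" -gt 256 ]; then
    echo "WARNING: more than 256 host objects; resolve before upgrading"
fi
```

On a real cluster the two here-strings would be replaced by the output of
the corresponding ssh commands; the parsing steps are unchanged.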
Please note the warning at the beginning of that procedure, and proceed
only if you are sure there is no data on the machine.

With the extended host support in this build, the limit of 256 hosts per
I/O group is policed. 2.1.0.5 introduced a check on the creation of more
than 256 host objects per cluster, which issues the following error
message should the user attempt to exceed this limit:

   CMMVC6034E The action failed as the maximum number of objects has
   been reached.

If you have more than 256 host IDs, the code upgrade to 3.1 will fail
and back out gracefully.

Please check the website for supported fabrics and restrictions.

----------------------------------------------------------------------------

4. CODE LEVELS

2145 Software:   2145 Release Level 3.1.0.5 (3.24.0605220000)

----------------------------------------------------------------------------

5. INSTALLATION INSTRUCTIONS

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
NOTE: This procedure will destroy any existing data or configuration. If
you wish to preserve the SVC configuration and all data virtualized by
the cluster, follow the upgrade instructions given in Chapter 6 of the
Configuration Guide.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

* This procedure is for new installations only. Please read the
  introduction again to make sure you understand which procedure you
  wish to use.

* Follow the instructions in Chapter 2 ("Creating a SAN Volume
  Controller cluster") of the Configuration Guide, section "Creating a
  cluster using the SAN Volume Controller Console", to create a cluster
  on ONLY the first node. Do not add the other nodes at this point.

* Use "Viewing cluster properties" in Chapter 3 ("Using the SAN Volume
  Controller Console") of the Configuration Guide to display the "Code
  Version". If it shows 3.1.0.5 (3.24.0605220000), you do not need to
  perform any further actions.
  Continue to add nodes and configure your cluster as described in the
  Configuration Guide.

  If the reported version is 1.2.0.0, 1.2.0.1, 1.2.0.2, 1.2.1.0,
  1.2.1.1, 2.1.0.0, 2.1.0.1, 2.1.0.2, 2.1.0.3, 2.1.0.4, 2.1.0.5,
  3.1.0.0, 3.1.0.1, 3.1.0.2, 3.1.0.3 or 3.1.0.4, continue to follow
  this procedure.

  If the code level is 1.1.0.x, 1.1.1.0, 1.1.1.1, 1.1.1.2 or 1.1.1.3,
  you can continue to use this procedure only if you do not have hosts
  attached. Alternatively, upgrade to a 1.2.x.x level first.

* Ensure that you have set a unique IP address for service mode.

* Put the node into service mode (see "Setting service mode" under
  "Recover cluster navigation" in Chapter 5 ("Using the front panel of
  the SAN Volume Controller") of the Service Guide):

  1. On the front panel, navigate to the 'Cluster' main field and then
     left to the 'Recover Cluster?' secondary field.
  2. Press 'Select'. The screen should now say 'Service Access?'.
  3. Press and hold the 'Down' button.
  4. Press and release the 'Select' button.
  5. Release the 'Down' button.

  The node will restart and display the service IP address. The front
  panel buttons are disabled whilst in this state.

* Apply the upgrade package:

  1. Follow the instructions in Chapter 6 of the Configuration Guide to
     install the IBM2145_INSTALL_3.1.0.5 package.

     Note: At step 4 of the instructions, ensure you select the option
     to 'Skip prerequisite checking'.

* Once upgraded, recreate the cluster on the node; instructions are in
  Chapter 2 ("Creating a SAN Volume Controller cluster") of the
  Configuration Guide. You can now add the other nodes to the cluster
  and they will automatically be upgraded to 3.1.0.5 (3.24.0605220000).

* After this process is complete, check that the software version number
  is 3.1.0.5 (3.24.0605220000) on the Cluster Properties panel, as
  described in Chapter 3 of the Configuration Guide.

----------------------------------------------------------------------------

6.
FURTHER DOCUMENTATION

All publications are available on the master console, or they can be
downloaded from the support website:

   http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

7. KNOWN ISSUES WITH THIS BUILD

The current product limitations are detailed in the product restrictions
document available here:

   http://www.ibm.com/storage/support/2145

This build contains a self-checking mechanism to ensure that each UPS is
correctly cabled, i.e. that each node's signal and power cables are
connected to the same UPS. The serial cable must always be connected to
the same node as the power connection. Please read all the warnings in
section 3 (pre-requisites). If you start the code upgrade with faulty
cabling, the upgrade will fail with node error 230 and you will need to
follow a service procedure to resolve the fault. A Hint & Tip is
available for this problem; please visit
http://www.ibm.com/storage/support/2145 for more details.

Please note: all versions of code since 2.1.0.0 have the following
command line syntax change. In earlier versions, the output of the
command "svcinfo lsrcconsistgrp" includes the current state of the
relationship, e.g. "consistent_synchronised". In all versions since
2.1.0.0, the same command reports the same state as
"consistent_synchronized". This change was made to be consistent with
the American English used for all other commands.

In release 3.1.0.0, a change was introduced to include the cluster name
in the command line prompt. This is the reason the SAN Volume Controller
Console code (GUI) must be upgraded prior to upgrading the SVC code:
older versions of the GUI will not work with this change. Please be
advised that there is a problem managing clusters running SVC code
< 3.1.x.x with the new level of GUI, as detailed in the following Hint
and Tip:

   http://www-1.ibm.com/support/docview.wss?
   rs=591&context=STCELXD&context=STCELX4&dc=DB500&uid=ssg1S1002778
   &loc=en_US&cs=utf-8&lang=en

The revised command prompt may affect any scripts you have written.
Please amend any scripts you use to reflect these changes.

----------------------------------------------------------------------------

8. MAXIMUM CONFIGURATIONS

The following lists show the maximum resources configurable in this
release (maximum cluster size). To avoid potential problems, any
exceptions to supported interoperability MUST be requested via a one-off
request.

VDisks
   Maximum number/cluster           4096
   Maximum number per I/O group     1024
   Maximum size                     2 TB (Linux & AIX 5.1: 1 TB)
   Maximum SCSI mappings/cluster    20 K
   Maximum SCSI mappings/host       512
   Maximum VDisk paths/host         8 (separately zoned host port to
                                       node port)

MDisks
   Maximum number/cluster           4096
   Maximum extents/cluster          4 M (2^22 = 4194304)
   Maximum MDisk groups/cluster     128

The host maxima are affected by the type of fabric being used. The
maxima listed here are the largest supported values. Please see the
V3.1.x Configuration Requirements and Guidelines document for detailed
host maxima.

Hosts
   Maximum number/iogrp             256
   Maximum number/cluster           1024
   Maximum host WWPNs/cluster       2048
   Maximum HBA ports/host           512
   Maximum HBA ports/iogrp          512

Fabric
   Maximum SAN ports/fabric         512
   Maximum fabrics/cluster          4
   Maximum clusters/fabric          2
   Maximum nodes/cluster            8

Storage
   Maximum controllers/cluster      64
   Maximum controller ports/cluster 128

Flash Copy
   Maximum number of consistency groups   255
   Maximum concurrent Flash Copies        50% VDisks to 50% VDisks

Remote Copy
   Maximum number of consistency groups   255
   Maximum concurrent Remote Copies       1024

Data Migration
   Maximum concurrent data migrations     32

Persistent Reservations
   Maximum registrations/cluster          128 K

----------------------------------------------------------------------------

9.
PROBLEMS RESOLVED AND NEW FEATURES IN THIS BUILD

Since 3.1.0.0, support for 1024 host objects is available with certain
fabrics; please visit http://www.ibm.com/storage/support/2145 to see how
to configure your SAN for this increased number of hosts.

APARs resolved in this release:

   IC49259   Fix to prevent slow I/O response times from the internal
             node HDD causing SVC node asserts
   IC48586   Fix to prevent duplicate WWPNs causing Cluster 900

APARs fixed in previous releases:

3.1.0.4
   APAR      DEFECT
   IC48376   Fix for node error 578 on configuration node
   IC48740   Fix for assert when using non-standard FC frame size
   IC48333   Fix for SCSI reserves on MDisks resulting in excluded
             controller ports

3.1.0.3
   APAR      DEFECT
   IC47941   Code upgrade fix when node is deleted during upgrade
   IC48157   Support for 2 TB LUN on AIX
   IC48158   Additional fix for deleting VDisk during migrate
   IC48177   Fix for hosts trying to access the same VDisk at the same
             time
   IC48184   Fix for node reboot when collecting stats
   IC48204   Fix for stuck migrations
   IC48488   svc_snap fix for incorrect file permissions
   IC48543   Fix to Flash Copy VDisk state
   IC48745   5115 UPS temperature threshold change
   IC48870   Fix for front panel display updates

3.1.0.2
   APAR      DEFECT
   IC47905   Closing host login when I/O is "stuck" caused node reboot.
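Section 7 notes that, since 2.1.0.0, "svcinfo lsrcconsistgrp" reports
"consistent_synchronized" rather than "consistent_synchronised", and that
scripts must be amended accordingly. The sketch below shows a minimal
one-time edit of such a script; the script name and its contents are
hypothetical, and the sed rewrite is the only part specific to this
change.

```shell
#!/bin/sh
# Hypothetical existing script that matches the pre-2.1.0.0 spelling.
cat > check_rc_state.sh <<'EOF'
state=$(svcinfo lsrcconsistgrp grp0 | grep -c consistent_synchronised)
EOF

# Rewrite the old British spelling to the American spelling used by all
# code levels since 2.1.0.0, keeping a .bak copy of the original script.
sed -i.bak 's/consistent_synchronised/consistent_synchronized/g' check_rc_state.sh

# Confirm the script now matches the current output spelling.
matches=$(grep -c consistent_synchronized check_rc_state.sh)
echo "updated occurrences: $matches"
```

The same review should cover any scripts that match the command line
prompt, since the prompt now includes the cluster name as of 3.1.0.0.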