IBM TotalStorage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++

----------------------------------------------------------------------------

CONTENTS

1. Introduction
2. Available education
3. Pre-requisites
4. Code levels
5. Installation instructions
6. Further documentation
7. Known issues and restrictions in this level
8. Maximum configurations
9. New features in this build

----------------------------------------------------------------------------

1. INTRODUCTION

This document describes how to install version 3.1.0.0 of the IBM
TotalStorage SAN Volume Controller (2145) software. This is a new release
and adds the features detailed in section 9.

Please refer to the Recommended Software List and Supported Hardware List
on the support website:

http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

2. AVAILABLE EDUCATION

A course on SAN Volume Controller Planning and Implementation is
available. For further information or enrolment, contact your local IBM
representative.

----------------------------------------------------------------------------

3. PRE-REQUISITES

Before installing this code level, please check that the following
pre-requisites are met. Also note the concurrent upgrade (upgrade with
I/O running) restrictions, which may apply if you have not upgraded the
base code since adding new SVC nodes.

Check that there are no errors in the error log or on the front panel.
Use normal service procedures to resolve any errors before proceeding.

Code levels 2.1.0.0 and above contain a self-checking mechanism to ensure
that each UPS is correctly cabled, i.e. that each node's signal and power
cables are connected to the same UPS. Before starting this code upgrade,
please check that the cables are correctly connected.
If they are incorrectly cabled, please shut down the cluster as detailed
in the Configuration Guide, correct the cabling, restart the cluster, and
then proceed with the upgrade.

If you are upgrading from a code level below 3.1.0.0, it is important
that you upgrade the SVC Console code (GUI) before you install the SVC
code. The latest level of the GUI (3.1.0.532) is available from the same
website as this code. This latest level of GUI will work with older SVC
levels as well as with 3.1.0.0; older levels of GUI code will not work
with SVC 3.1.0.0.

The preferred method of installing this release is a clean install, as
detailed in section 5. If a clean install is not possible, then the SVC
cluster must be at a release level of at least 1.2.0.0 (0.53.0404190000)
to be upgraded with no loss of data or configuration. If you are
upgrading from a 1.1.x.x level of SVC, you must upgrade to a 1.2.x.x
level of SVC code before upgrading to a 2.1.x.x or 3.1.x.x level.

For existing clusters, please check which level your cluster is currently
running before proceeding. Refer to the Configuration Guide (v3.1.0) -
Chapter 3 ("Using SAN Volume Controller Console"), section "Viewing
cluster properties", and look for "Code Version".

If your cluster is at 1.2.0.0 or higher, then please follow the upgrade
instructions given in the Configuration Guide (v3.1.0) - Chapter 6
("Upgrading the SAN Volume Controller software"), section "Upgrading the
SAN Volume Controller software using the SAN Volume Controller Console".

Please refer to:

http://www-1.ibm.com/support/docview.wss?rs=591&context=STPVGU&context=STPVFV&q1=ssg1*&uid=ssg1S1001707&loc=en_US&cs=utf-8&lang=en

to see all release levels of SVC. If your cluster is not at one of these
levels, please contact IBM Support before attempting to upgrade the
software.

If you are installing a new 2145, you will need to follow the procedure
shown below in section 5. This will upgrade the SVCs to the 3.1.0.0 level
before the cluster is configured for use.
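The upgrade-path rules above (a cluster at 1.2.0.0 or later can be
upgraded directly; a 1.1.x.x cluster must first step through a 1.2.x.x
level) can be sketched as follows. This is an illustration only; the
function name and return strings are hypothetical and not part of any
SVC tool.

```python
def next_upgrade_step(code_version: str) -> str:
    """Given an SVC cluster code version such as '1.1.1.2', return the
    step that should be performed next, per the rules above.
    Hypothetical helper for illustration only."""
    major, minor = (int(part) for part in code_version.split(".")[:2])
    if (major, minor) >= (3, 1):
        return "already at 3.1.x.x"
    if (major, minor) == (1, 1):
        # 1.1.x.x cannot go straight to 2.1.x.x or 3.1.x.x
        return "upgrade to a 1.2.x.x level first"
    # 1.2.x.x and 2.1.x.x clusters can be upgraded directly
    return "upgrade to 3.1.0.0"
```

For example, a cluster reporting 1.1.1.2 would be directed to a 1.2.x.x
level first, while one at 2.1.0.5 could be taken straight to 3.1.0.0.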
Please note the warning at the beginning of the process and only proceed
if you are sure there is no data on this machine.

With the extended host support in this build, the limit of 256 hosts per
I/O group is enforced. 2.1.0.5 introduced a change that checks for the
creation of more than 256 host objects per cluster and issues the
following error message should the user attempt to exceed this limit:

CMMVC6034E The action failed as the maximum number of objects has been
reached.

If you have more than 256 host IDs, the code upgrade to 3.1 will fail and
back out gracefully.

Please check the website for supported fabrics and restrictions.

---------------------------------------------------------------------------

4. CODE LEVELS

2145 Software:    2145 Release Level 3.1.0.0 (3.17.0511040000)

----------------------------------------------------------------------------

5. INSTALLATION INSTRUCTIONS

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
NOTE: This procedure will destroy any existing data or configuration. If
you wish to preserve the SVC configuration and all data virtualized by
the cluster, then please follow the upgrade instructions given in
Chapter 6 of the Configuration Guide.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

* This procedure is for new installations only. Please read the
  introduction again to make sure you understand which procedure you
  wish to use.

* Follow the instructions in Chapter 2 ("Creating a SAN Volume Controller
  cluster") of the Configuration Guide, referring to the section
  "Creating a cluster using the SAN Volume Controller Console", in order
  to create a cluster on ONLY the first node. Do not add the other nodes
  at this point.

* Use "Viewing cluster properties" in Chapter 3 ("Using the SAN Volume
  Controller Console") of the Configuration Guide to display the "code
  version". If it shows 3.1.0.0 (3.17.0511040000), then you do not need
  to perform any further actions.
  Continue to add nodes and configure your cluster as described in the
  Configuration Guide.

  If the reported version is 1.2.0.0, 1.2.0.1, 1.2.0.2, 1.2.1.0, 1.2.1.1,
  2.1.0.0, 2.1.0.1, 2.1.0.2, 2.1.0.3, 2.1.0.4 or 2.1.0.5, then continue
  to follow this procedure.

  If the code level is 1.1.0.x, 1.1.1.0, 1.1.1.1, 1.1.1.2 or 1.1.1.3, you
  can continue to use this procedure only if you do not have hosts
  attached. Alternatively, upgrade to a 1.2.x.x level first.

* Ensure that you have set a unique IP address for service mode.

* Put the node into service mode (see "Setting service mode" under
  "Recover cluster navigation", described in Chapter 5 ("Using the front
  panel of the SAN Volume Controller") of the Service Guide):

  1. On the front panel, navigate to the 'Cluster' main field and then
     left to the 'Recover Cluster?' secondary field.
  2. Press 'Select'. The screen should now say 'Service Access?'
  3. Press and hold the 'Down' button.
  4. Press and release the 'Select' button.
  5. Release the 'Down' button.

  The node will restart and display the service IP address. The front
  panel buttons are disabled whilst in this state.

* Apply the upgrade package:

  1. Follow the instructions in Chapter 6 of the Configuration Guide to
     install the IBM2145_INSTALL_3.1.0.0 package.

     Note: At step 4 of the instructions, ensure you select the option
     to 'Skip prerequisite checking'.

* Once upgraded, recreate the cluster on the node; instructions are in
  Chapter 2 ("Creating a SAN Volume Controller cluster") of the
  Configuration Guide. You can now add the other nodes to the cluster
  and they will automatically be upgraded to 3.1.0.0 (3.17.0511040000).

* After this process is complete, check that the software version number
  is 3.1.0.0 (3.17.0511040000) on the Cluster Properties panel, as
  described in Chapter 3 of the Configuration Guide.

----------------------------------------------------------------------------

6.
FURTHER DOCUMENTATION

All publications are available on the master console, or they can be
downloaded from the support website:

http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

7. KNOWN ISSUES WITH THIS BUILD

The current product limitations are detailed in the product restrictions
document available here:

http://www.ibm.com/storage/support/2145

This build contains a self-checking mechanism to ensure that each UPS is
correctly cabled, i.e. that each node's signal and power cables are
connected to the same UPS. The serial cable must always be connected to
the same node as the power connection. Please read all the warnings in
section 3 (pre-requisites). If you start the code upgrade with faulty
cabling, the upgrade will fail with node error 230 and you will need to
follow a service procedure to resolve the fault. A Hint & Tip is
available for this problem; please visit
http://www.ibm.com/storage/support/2145 for more details.

Please note: all versions of code since 2.1.0.0 have the following
command line syntax change. In earlier versions, the output of the
command svcinfo lsrcconsistgrp reports the current state of the
relationship as, for example, consistent_synchronised. In all versions
since 2.1.0.0, the same command reports the same state as
consistent_synchronized. The change was made to be consistent with the
American English used for all other commands.

In release 3.1.0.0, a change was introduced to include the cluster name
in the command line prompt. This is the reason the SAN Volume Controller
Console code (GUI) must be upgraded prior to upgrading the SVC code:
older versions of the GUI will not work with this change. The revised
command prompt may also affect any scripts you have written. Please
amend any scripts you use to reflect these changes.

----------------------------------------------------------------------------

8. MAXIMUM CONFIGURATIONS
The following lists show the maximum resources configurable in this
release (maximum cluster size).

To avoid potential problems, any exceptions to supported
interoperability MUST be requested via a one-off request. For example,
in large fabric configurations, exceeding the maximum number of 64 hosts
supported by SVC release 2.1 must be approved via a one-off request.

VDisks
  Maximum number/cluster              4096
  Maximum number per I/O group        1024
  Maximum size                        2 TB (Linux & AIX 5.1: 1 TB)
  Maximum SCSI mappings/cluster       20 K
  Maximum SCSI mappings/host          512
  Maximum VDisk paths/host            8 (separately zoned host port to
                                        node port)

MDisks
  Maximum number/cluster              4096
  Maximum extents/cluster             4 M (2^22 = 4194304)
  Maximum MDisk groups/cluster        128

Hosts
  Maximum number/cluster              1024
  Maximum number/I/O group            256
  Maximum host WWPNs/cluster          2048
  Maximum HBA ports/host              4

Fabric
  Maximum SAN ports/fabric            512
  Maximum fabrics/cluster             4
  Maximum nodes/cluster               8

Storage
  Maximum controllers/cluster         64
  Maximum controller ports/cluster    128

FlashCopy
  Maximum number of consistency groups  255
  Maximum concurrent FlashCopies        50% VDisks to 50% VDisks

Remote Copy
  Maximum number of consistency groups  255
  Maximum concurrent Remote Copies      1024

Data Migration
  Maximum concurrent data migrations    32

Persistent Reservations
  Maximum registrations/cluster         128 K

----------------------------------------------------------------------------

9. NEW FEATURES IN THIS BUILD

Support for 1024 host objects is now available with certain fabrics.
Please visit http://www.ibm.com/storage/support/2145 to see how to
configure your SAN for this increased number of hosts.

Increased interoperability with new HBAs, switches, storage, operating
systems and clustering support. Please visit the website to see the
latest additions.

VDisks may now be created with the cache disabled. This enables the
back-end storage controller copy services to be used if desired. Please
visit the website for more details on implementation and restrictions.
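Some of the per-cluster maximums in section 8 lend themselves to a quick
scripted sanity check when planning a configuration. The sketch below is
illustrative only: the limit values are transcribed from the table
above, but the function and its inputs are hypothetical and not part of
any SVC tool.

```python
# Selected configuration maximums for this release,
# transcribed from the section 8 table above.
LIMITS = {
    "vdisks_per_cluster": 4096,
    "vdisks_per_iogrp": 1024,
    "hosts_per_cluster": 1024,
    "hosts_per_iogrp": 256,
    "mdisk_grps_per_cluster": 128,
    "nodes_per_cluster": 8,
}

def check_limits(planned: dict) -> list:
    """Return the names of any limits that the planned configuration
    exceeds. 'planned' maps the keys above to proposed counts."""
    return [name for name, count in planned.items()
            if count > LIMITS.get(name, float("inf"))]
```

For example, a plan with 300 hosts in one I/O group would be flagged,
since it exceeds the 256 hosts per I/O group maximum.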
Introduces new storage engines with more powerful processors and double
the cache size (to 8 GB) to improve performance.

Concurrent update of storage controller firmware: restrictions lifted;
please check the website for more details.

APARs resolved in this release:

APAR      DEFECT
IC46593   Bad login during LUN discovery causes assert.
IC46819   Node failure can cause outstanding FC commands to impact
          other nodes.
IC47533   Errors generated for certain migrations that have stopped can
          cause asserts when the error is cleared.

One-offs requiring this build: None
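Finally, a note for script maintainers: the spelling change described in
section 7 (consistent_synchronised became consistent_synchronized from
2.1.0.0 onwards) means scripts that parse svcinfo lsrcconsistgrp output
may need to accept both forms. A minimal sketch of such a normalization;
the helper name and alias table are hypothetical, and only the state
documented in section 7 is listed:

```python
# The pre-2.1.0.0 spelling and its post-2.1.0.0 American-English
# equivalent, per section 7. Extend with any other states your
# scripts match on.
STATE_ALIASES = {
    "consistent_synchronised": "consistent_synchronized",
}

def normalize_state(state: str) -> str:
    """Map an lsrcconsistgrp state string from an older code level to
    the spelling used by 2.1.0.0 and later; pass others through."""
    return STATE_ALIASES.get(state, state)
```

A script can call this on each parsed state field so that the same
comparison works against clusters at any code level.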