IBM TotalStorage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++

----------------------------------------------------------------------------

CONTENTS

1. Introduction
2. Available education
3. Pre-requisites
4. Code levels
5. Installation instructions
6. Further documentation
7. Known issues and restrictions in this level
8. Maximum configurations
9. New features / problems resolved in this build

----------------------------------------------------------------------------

1. INTRODUCTION

This document describes how to install version 2.1.0.6 of the IBM
TotalStorage SAN Volume Controller (2145) software.

This release is a service release: it resolves the APARs detailed in
section 9 and includes other changes to improve the reliability and
serviceability of the code.

Please refer to the Recommended Software List and Supported Hardware List
on the support website:

http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

2. AVAILABLE EDUCATION

A course on SAN Volume Controller Planning and Implementation is
available. For further information or enrolment, contact your local IBM
representative.

----------------------------------------------------------------------------

3. PRE-REQUISITES

Before installing this code level, please check that the following
pre-requisites are met. Also note the concurrent upgrade (upgrade with
I/O running) restrictions; these may apply if you have not upgraded the
base code since adding new SVC nodes.

Check that there are no errors in the error log or on the front panel.
Use normal service procedures to resolve any such errors before
proceeding.

Code levels 2.1.0.0 and above contain a self-checking mechanism to ensure
that each UPS is correctly cabled, i.e. that each node's signal and power
cables are connected to the same UPS. Before starting this code upgrade,
check that the cables are correctly connected.
If they are incorrectly cabled, please shut down the cluster as detailed
in the Configuration Guide, correct the cabling, restart the cluster, and
then proceed with the upgrade.

If you are upgrading from a code level below 2.1.0.0, it is important
that you upgrade the SVC Console code (GUI) after you install the SVC
code. The latest level of the GUI is available from the same website as
this code.

The preferred method of installing this code is a clean install, as
detailed in section 5. If a clean install is not possible, then the SVC
cluster must be at a release level of at least 1.1.1.2 (0.33.0401010000)
to be upgraded with no loss of data or configuration.

For existing clusters, please check which level your cluster is currently
running before proceeding. Refer to the Configuration Guide (v2.1.0) -
Chapter 3 ("SAN Volume Controller Console"), section "Overview of
creating a cluster using the SAN Volume Controller Console", subsection
"Displaying cluster properties using the SAN Volume Controller Console",
and look for "Code Version".

If your cluster is at 1.1.1.2 or higher, then please follow the upgrade
instructions given in the Configuration Guide (v2.1.0) - Chapter 6
("Software upgrade strategy"), section "Upgrading the SAN Volume
Controller firmware using the SAN Volume Controller Console".

Please refer to:

http://www-1.ibm.com/support/docview.wss?rs=591&context=STPVGU&
context=STPVFV&q1=ssg1*&uid=ssg1S1001707&loc=en_US&cs=utf-8&lang=en

to see all release levels of SVC. If your cluster is not at one of these
levels, please contact IBM Support before attempting to upgrade the
software.

If you are installing a new 2145, then you will need to follow the
procedure in section 5 below. This will upgrade the SVCs to the 2.1.0.6
level before the cluster is configured for use. Please note the warning
at the beginning of that procedure and only proceed if you are sure there
is no data on this machine.

----------------------------------------------------------------------------

4. CODE LEVELS

2145 Software:

   2145 Release Level 2.1.0.6 (2.21.0602100000)

----------------------------------------------------------------------------

5. INSTALLATION INSTRUCTIONS

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
NOTE: This procedure will destroy any existing data or configuration. If
you wish to preserve the SVC configuration and all data virtualized by
the cluster, then please follow the upgrade instructions given in
Chapter 6 of the Configuration Guide.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

* This procedure is for new installations only. Please read the
  introduction again to make sure you understand which procedure you wish
  to use. This procedure is not subject to the same CCU matrix
  restrictions as a concurrent upgrade.

* Follow the instructions in Chapter 3 ("Overview of creating a cluster
  using the SAN Volume Controller Console") of the Configuration Guide to
  create a cluster on ONLY the first node. Do not add the other nodes at
  this point.

* Use "Displaying cluster properties using the SAN Volume Controller
  Console" in Chapter 3 of the Configuration Guide to display the "Code
  Version". If it shows 2.1.0.6 (2.21.0602100000), then you do not need
  to perform any further actions; continue to add nodes and configure
  your cluster as described in the Configuration Guide. If the reported
  version is 1.1.0.x, 1.1.1.0, 1.1.1.1, 1.1.1.2, 1.1.1.3, 1.2.0.0,
  1.2.0.1, 1.2.0.2, 1.2.1.0, 1.2.1.1, 2.1.0.0, 2.1.0.1, 2.1.0.2, 2.1.0.3,
  2.1.0.4 or 2.1.0.5, then continue to follow this procedure.

* Ensure that you have set a unique IP address for service mode.

* Put the node into Service Mode (see "Service access navigation" in
  Chapter 5 ("Using the front panel of the SAN Volume Controller") of the
  Service Guide):

  1. On the front panel, navigate to the 'Cluster' main field and then
     left to the 'Recover Cluster?' secondary field.
  2. Press 'Select'.
     The screen should now say 'Service Access?'
  3. Press and hold the 'Down' button.
  4. Press and release the 'Select' button.
  5. Release the 'Down' button.

  The node will restart and display the service IP address. The front
  panel buttons are disabled whilst in this state.

* Apply the upgrade package.

  1. Follow the instructions in Chapter 6 of the Configuration Guide to
     install the IBM2145_INSTALL_2.1.0.6 package.

     Note: At step 4 of the instructions, ensure you select the option to
     'Skip prerequisite checking'.

* Once upgraded, recreate the cluster on the node; instructions are in
  Chapter 3 ("Overview of creating a cluster using the SAN Volume
  Controller Console") of the Configuration Guide. You can now add the
  other nodes to the cluster and they will automatically be upgraded to
  2.1.0.6 (2.21.0602100000).

* After this process is complete, check that the software version number
  is 2.1.0.6 (2.21.0602100000) on the Cluster Properties panel as
  described in Chapter 3 of the Configuration Guide.

----------------------------------------------------------------------------

6. FURTHER DOCUMENTATION

All publications are available on the master console, or they can be
downloaded from the support website:

http://www.ibm.com/storage/support/2145

----------------------------------------------------------------------------

7. KNOWN ISSUES AND RESTRICTIONS IN THIS LEVEL

The current product limitations are detailed in the product restrictions
document available here:

http://www.ibm.com/storage/support/2145

This build contains a self-checking mechanism to ensure that each UPS is
correctly cabled, i.e. that each node's signal and power cables are
connected to the same UPS. The serial cable must always be connected to
the same node as the power connection. Please read the warning in
section 3 (Pre-requisites). If you start the code upgrade with faulty
cabling, the upgrade will fail with node error 230 and you will need to
follow a service procedure to resolve the fault. A Hint & Tip is
available for this problem.
Please visit http://www.ibm.com/storage/support/2145 for more details.

Please note: all versions of code since 2.1.0.0 include the following
command-line syntax change. In earlier versions, the output of the
command svcinfo lsrcconsistgrp includes the current state of the
relationship, e.g. consistent_synchronised. In all versions since
2.1.0.0, the same command reports the same state as
consistent_synchronized. The change was made to be consistent with the
American English used for all other commands. Please amend any scripts
you use to reflect this change.

----------------------------------------------------------------------------

8. MAXIMUM CONFIGURATIONS

The following lists show the maximum resources configurable in this
release (maximum cluster size). To avoid potential problems, any
exceptions to supported interoperability MUST be requested via a one-off
request. For example, in large fabric configurations, exceeding the
maximum of 64 hosts supported by SVC release 2.1 must be approved via a
one-off request.

VDisks
   Maximum number/cluster               4096
   Maximum number per I/O group         1024
   Maximum size                         2 TB (Linux & AIX 5.1: 1 TB)
   Maximum SCSI mappings/cluster        20 K
   Maximum SCSI mappings/host           512
   Maximum VDisk paths/host             8 (separately zoned Host Port
                                           to Node Port)

MDisks
   Maximum number/cluster               4096
   Maximum extents/cluster              4 M (2^22 = 4194304)
   Maximum MDisk groups/cluster         128

Hosts
   Maximum number/cluster               64 (256 on Cisco fabrics)
   Maximum host WWPNs/cluster           128 (512 on Cisco fabrics)
   Maximum HBA ports/host               4

Fabric
   Maximum SAN ports/fabric             256 (512 on Cisco fabrics)
   Maximum fabrics/cluster              2
   Maximum nodes/cluster                8

Storage
   Maximum controllers/cluster          64
   Maximum controller ports/cluster     128

FlashCopy
   Maximum number of consistency groups 255
   Maximum concurrent FlashCopies       50% VDisks to 50% VDisks

Remote Copy
   Maximum number of consistency groups 255
   Maximum concurrent Remote Copies     1024

Data Migration
   Maximum concurrent data migrations   32

Persistent Reservations
   Maximum registrations/cluster        132 K

----------------------------------------------------------------------------

9. NEW FEATURES / PROBLEMS RESOLVED IN THIS BUILD

New features:

Support for 256 host objects is now available with Cisco fabrics. Please
visit http://www.ibm.com/storage/support/2145 to see how to configure
your SAN for this increased number of hosts. This required a change to
SVC, made in SVC PTF 2.1.0.4, which checks for the creation of more than
256 host objects per cluster and issues the following error message
should the user attempt to exceed this limit:

   CMMVC6034E The action failed as the maximum number of objects has
   been reached.

Users are currently limited to a maximum of 64 host objects unless they
are using Cisco fabrics or have an approved Open Software one-off request
which allows them to attach a greater number of hosts in their particular
environment. In the remote chance that a user has attached more than 256
hosts without an approved one-off request, or has created more than 256
host objects, upgrading the cluster to SVC 2.1.0.6 will not fail, but the
user will be prevented by the code from adding any additional host
objects in the future. In this case, the user should contact their local
IBM Support office for advice.

APARs resolved in this release:

   APAR     DEFECT
   IC48717  Problem if 64 controllers then 1 or more removed and then
            config node failover

APARs fixed in previous 2.1.x.x releases:

v2.1.0.5
   IC46820  Prevent maximum extents limit being exceeded.

v2.1.0.4
   IC46371  HL assert for reserve release problem.
   IC46453  HL assert if multiple 'ordered' cmds sent to vdisk.
   IC46545  CO FlashCopy assert.
   IC46629  HL assert fix.
   IC46654  VG assert fix.
   IC46741  HL assert for insufficient base event resources.
   IC46799  Backend misconfiguration assert fix.

v2.1.0.3
   IC45638  Null UID causing migrate to hang, cluster 900 fix.
   IC45786  Service controller fix.
   IC46120  UPS fix for long deglitch on battery discharge.
   IC43785  Shutdown fix to stop 1620 errors on restart.

v2.1.0.2
   IC46276  PLTM 497 day timer wrap problem.

v2.1.0.1
   IC44662  Support more ISLs between nodes.
   IC44659  v_stats files not being produced.
   IC45395  Remove checking of vendor unique mode pages for HDS Thunder.

One-offs requiring this build: None
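
Scripts affected by the spelling change described in section 7 (svcinfo
lsrcconsistgrp reporting consistent_synchronized rather than
consistent_synchronised from 2.1.0.0 onwards) can normalize the state
string instead of hard-coding either spelling, so the same script works
against clusters before and after the upgrade. The following is a minimal
illustrative sketch only; the function and mapping names are examples and
are not part of the SVC product:

```python
# Illustrative helper for scripts that parse "svcinfo lsrcconsistgrp"
# output. From SVC 2.1.0.0 the relationship state is reported with
# American spelling (consistent_synchronized); earlier levels used
# consistent_synchronised. Normalizing first lets one script handle both.

SPELLING_FIXES = {
    "consistent_synchronised": "consistent_synchronized",
    # Add further state names here if your scripts match on them.
}

def normalize_state(state: str) -> str:
    """Return the 2.1.0.0-and-later spelling of a relationship state."""
    return SPELLING_FIXES.get(state, state)

# A script can then compare against the new spelling only:
if normalize_state("consistent_synchronised") == "consistent_synchronized":
    print("relationship is synchronized")
```

States that were already spelled identically on both code levels pass
through the helper unchanged.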