The easiest way to upgrade is through the GUI.
Precheck:
Download the upgrade test utility (required) and the upgrade package.
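If you prefer the CLI for the precheck, the test utility can be installed with applysoftware and run with svcupgradetest. A minimal sketch, assuming the files are uploaded to /home/admin/upgrade, a cluster alias of v7k01, and placeholder file names (use the exact names of the packages you downloaded):

# From a workstation: upload the utility and the upgrade package
scp IBM2145_INSTALL_upgradetest_xx.xx admin@v7k01:/home/admin/upgrade/
scp IBM2145_INSTALL_8.3.1.3 admin@v7k01:/home/admin/upgrade/
# On the cluster: install, then run the test utility against the target level
IBM_Storwize:V7K01:admin>applysoftware -file IBM2145_INSTALL_upgradetest_xx.xx
IBM_Storwize:V7K01:admin>svcupgradetest -v 8.3.1.3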
Start the upgrade in fully automatic mode (best practice).
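For reference, the CLI equivalent of the GUI's automatic update is a single applysoftware call on the uploaded package (file name assumed, as above):

IBM_Storwize:V7K01:admin>applysoftware -file IBM2145_INSTALL_8.3.1.3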
During the installation, monitor the progress:
IBM_Storwize:V7K01:admin>lsupdate
status system_updating
event_sequence_number
progress 74
estimated_completion_time 210206122942
suggested_action wait
system_new_code_level 8.3.1.3 (build 150.25.2012041757000)
system_forced no
system_next_node_status updating
system_next_node_time
system_next_node_id 3
system_next_node_name node1
system_next_pause_time
On the storage arrays, check the node canisters: the node currently being updated (node1 here, matching system_next_node_name above) shows as offline and should come back online before the next node starts:
IBM_Storwize:V7K01:admin>lsnodecanister
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number site_id site_name
1 node2dn 500507681000xxxx online 0 v7k01_5 no 700 iqn.1986-03.com.ibm:2145.v7k05.node2dn 01-2 1 2 78E01ZP 1 site1
2 node1up 500507681000xxxx online 0 v7k01_5 yes 700 iqn.1986-03.com.ibm:2145.v7k05.node1up 01-1 1 1 78E01ZP 1 site1
3 node1 500507681000xxxx offline 1 v7k02_5 no 700 iqn.1986-03.com.ibm:2145.v7k05.node1 07-1 7 1 78E01T7 2 site2
4 node2 500507681000xxxx online 1 v7k02_5 no 700 iqn.1986-03.com.ibm:2145.v7k05.node2 07-2 7 2 78E01T7 2 site2
On the SVC cluster:
IBM_2145:SVC01:admin>lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number site_id site_name
21 SVC02_1B 500507680C00xxx online 0 io_grp0 yes SV1 iqn.1986-03.com.ibm:2145.svc02_1b 78GLxxx 1 1 site1
19 SVC01_1B 500507680C00xxx online 0 io_grp0 no SV1 iqn.1986-03.com.ibm:2145.svc01_1b 78GLxxx 1 2 site2
14 SVC02_2B 500507680C00xxx online 1 io_grp1 no SV1 iqn.1986-03.com.ibm:2145.svc02_2b 78GLxxx 1 1 site1
20 SVC01_2B 500507680C00xxx online 1 io_grp1 no SV1 iqn.1986-03.com.ibm:2145.svc01_2b 78GLxxx 1 2 site2
16 SVC01_3B 500507680C00xxx online 2 io_grp2 no SV1 iqn.1986-03.com.ibm:2145.svc01_3b 78GLxxx 1 1 site1
17 SVC02_3B 500507680C00xxx online 2 io_grp2 no SV1 iqn.1986-03.com.ibm:2145.svc02_b 78GLxxx 1 2 site2
12 SVC01_4B 500507680C00xxx online 3 io_grp3 no SV1 iqn.1986-03.com.ibm:2145.svc01_4b 78GLxxx 1 1 site1
13 SVC02_4B 500507680C00xxx online 3 io_grp3 no SV1 iqn.1986-03.com.ibm:2145.svc02_4b 78GLxxx 1 2 site2
On the HyperSwap cluster (v7k…) and on the SVC, check the quorums, especially the IP quorum (an update may be needed).
All quorums must be **online** and one IP quorum must be **active**:
IBM_Storwize:V7K01:admin>lsquorum
quorum_index status id name controller_id controller_name active object_type override site_id site_name
0 online 3 no drive no 1 site1
1 degraded 141 no drive no 2 site2
3 online yes device no quorum01.intra/10.10.10.10
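If the IP quorum application needs to be regenerated for the new code level, mkquorumapp creates a fresh ip_quorum.jar, which then has to be redeployed on the quorum server. A sketch, assuming quorum01.intra from the output above and an arbitrary /opt/ipquorum directory on that server:

IBM_Storwize:V7K01:admin>mkquorumapp
# The new application is written to /dumps/ip_quorum.jar on the config node.
# On the quorum server (paths are assumptions), fetch and restart it:
scp admin@v7k01:/dumps/ip_quorum.jar /opt/ipquorum/
java -jar /opt/ipquorum/ip_quorum.jar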
When all nodes have been updated and the system-level update completes, the enclosures are updated:
IBM_Storwize:V7K01:admin>lsupdate
status enclosures
event_sequence_number
progress 0
estimated_completion_time 210206130607
suggested_action wait
system_new_code_level
system_forced no
system_next_node_status none
system_next_node_time
system_next_node_id
system_next_node_name
system_next_pause_time
The new code is then active on the system:
IBM_Storwize:V7K01:admin>lssystem | grep code
code_level 8.3.1.3 (build 150.25.2012041757000)
Finally, check and, if needed, update the drive firmware; see the sketch below.
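A sketch of the drive firmware check and update from the CLI, assuming the drive package (placeholder name below) has been uploaded like the system package:

# List drives; the detailed view shows the current firmware level
IBM_Storwize:V7K01:admin>lsdrive
IBM_Storwize:V7K01:admin>lsdrive 0 | grep firmware_level
# Apply the firmware package to every drive that needs it
IBM_Storwize:V7K01:admin>applydrivesoftware -file IBM2145_DRIVE_xxxxxxxx -type firmware -all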