==== Recover an SVC / v3k / v5k / v7k ====
Start node 1; it shows node error 578.
On the active node, connect to the service IP: https://service_IP/service
--> select the first node
--> Manage System
--> remove the system data on both nodes, and reboot any node that is not in candidate state
Reboot node 1.
Recover System
Prepare for Recovery (takes 5 to 15 min)
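The same preparation can also be started from the service CLI instead of the Service Assistant GUI; a minimal sketch, assuming a superuser login on the service IP and a code level that provides the satask t3recovery command:

satask t3recovery -prepare
# poll until T3_status reports "Prepare complete"
sainfo lscmdstatus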
IBM_2076:v7000-c1:superuser>sainfo lscmdstatus
last_command /compass/bin/satask metadata -scan -file t3auto1.xml -disk 5000c50059d77c3f000000000000000000000000000000000000000000000000 -start 584908800
last_command_status CMMVC8044I Command completed successfully.
T3_status Preparing
T3_status_data Scan qdisk 1
cpfiles_status
cpfiles_status_data
snap_status
snap_filename
IBM_2076:v7000-c1:superuser>sainfo lscmdstatus
last_command /compass/bin/satask metadata -scan -file t3auto2.xml -disk 5000c50059dd95d3000000000000000000000000000000000000000000000000 -start 584908800
last_command_status CMMVC8044I Command completed successfully.
T3_status Prepare complete
T3_status_data Backup date 20130128 01:00 : quorum time 20130128 10:18
--> Recover (allow the pop-up in the web browser)
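The Recover button corresponds to the execute phase of the same service command; a minimal sketch, to be run only once the prepare phase has reported complete:

satask t3recovery -execute
# follow the progress as before
sainfo lscmdstatus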
IBM_2076:v7000-c1:superuser>sainfo lscmdstatus
last_command /compass/bin/satask metadata -dump -disk 5000c50059df01a3000000000000000000000000000000000000000000000000 -start 584908800
last_command_status CMMVC8044I Command completed successfully.
T3_status Executing
T3_status_data Dump metadata
IBM_2076:Cluster_172.31.12.157:superuser>sainfo lscmdstatus
last_command /compass/bin/satask restartservice -service tomcat
last_command_status CMMVC8044I Command completed successfully.
T3_status Executing
T3_status_data recover -prepare
Wait: the configuration objects (arrays, pools, volumes, hosts) are recovered step by step, so re-run the listing commands until everything has reappeared.
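One illustrative way to follow the restore from a workstation is to poll the status over SSH (the cluster IP is the one used below; the interval is arbitrary):

while true; do ssh superuser@172.31.12.157 sainfo lscmdstatus | grep T3_status; sleep 60; done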
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity raid_status raid_level redundancy strip_size tier
0 online 0 mdiskgrp0 1.9TB syncing raid5 0 256 generic_hdd
1 online 1 mdiskgrp1 1.9TB syncing raid5 0 256 generic_hdd
2 online 2 mdiskgrp2 1.9TB syncing raid5 0 256 generic_hdd
3 online 3 mdiskgrp3 1.9TB syncing raid5 0 256 generic_hdd
4 online 4 mdiskgrp4 1.9TB syncing raid5 0 256 generic_hdd
5 online 5 mdiskgrp5 1.9TB syncing raid5 0 256 generic_hdd
6 online 6 mdiskgrp6 1.9TB syncing raid5 0 256 generic_hdd
7 online 7 mdiskgrp7 1.9TB syncing raid5 0 256 generic_hdd
8 online 8 mdiskgrp8 4.9TB online raid5 1 256 generic_hdd
9 online 9 mdiskgrp9 4.9TB online raid5 1 256 generic_hdd
10 online 10 mdiskgrp10 4.9TB online raid5 1 256 generic_hdd
11 online 11 mdiskgrp11 4.9TB online raid5 1 256 generic_hdd
12 online 12 mdiskgrp12 4.9TB online raid5 1 256 generic_hdd
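The arrays that report raid_status syncing are still being rebuilt in the background while the recovery continues; their progress can be checked separately, for example:

svcinfo lsarraysyncprogress
# lists each syncing array with its progress percentage and estimated completion time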
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.00
license_flash 0
license_remote 0
license_virtualization 0
license_physical_disks 0
license_physical_flash off
license_physical_remote off
used_compression_capacity 0.00
license_compression_capacity 0
license_compression_enclosures 0
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lshost
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity raid_status raid_level redundancy strip_size tier
0 m_v7000_c1_300_01 online 0 p_v7000_c1_300_01 1.9TB syncing raid5 0 256 generic_hdd
1 m_v7000_c1_300_02 online 1 p_v7000_c1_300_02 1.9TB syncing raid5 0 256 generic_hdd
2 m_v7000_c1_300_03 online 2 p_v7000_c1_300_03 1.9TB syncing raid5 0 256 generic_hdd
3 m_v7000_c1_300_04 online 3 p_v7000_c1_300_04 1.9TB syncing raid5 0 256 generic_hdd
4 m_v7000_c1_300_05 online 4 p_v7000_c1_300_05 1.9TB syncing raid5 0 256 generic_hdd
5 m_v7000_c1_300_06 online 5 p_v7000_c1_300_06 1.9TB syncing raid5 0 256 generic_hdd
6 m_v7000_c1_300_07 online 6 p_v7000_c1_300_07 1.9TB syncing raid5 0 256 generic_hdd
7 m_v7000_c1_300_08 online 7 p_v7000_c1_300_08 1.9TB syncing raid5 0 256 generic_hdd
8 m_v7000_c1_900_01 online 8 p_v7000_c1_900_01 4.9TB online raid5 1 256 generic_hdd
9 m_v7000_c1_900_02 online 9 p_v7000_c1_900_02 4.9TB online raid5 1 256 generic_hdd
10 m_v7000_c1_900_03 online 10 p_v7000_c1_900_03 4.9TB online raid5 1 256 generic_hdd
11 m_v7000_c1_900_04 online 11 p_v7000_c1_900_04 4.9TB online raid5 1 256 generic_hdd
12 m_v7000_c1_900_05 online 12 p_v7000_c1_900_05 4.9TB online raid5 1 256 generic_hdd
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lshost
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lshost
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier
0 m_v7000_c1_300_01 online array 0 p_v7000_c1_300_01 1.9TB generic_hdd
1 m_v7000_c1_300_02 online array 1 p_v7000_c1_300_02 1.9TB generic_hdd
2 m_v7000_c1_300_03 online array 2 p_v7000_c1_300_03 1.9TB generic_hdd
3 m_v7000_c1_300_04 online array 3 p_v7000_c1_300_04 1.9TB generic_hdd
4 m_v7000_c1_300_05 online array 4 p_v7000_c1_300_05 1.9TB generic_hdd
5 m_v7000_c1_300_06 online array 5 p_v7000_c1_300_06 1.9TB generic_hdd
6 m_v7000_c1_300_07 online array 6 p_v7000_c1_300_07 1.9TB generic_hdd
7 m_v7000_c1_300_08 online array 7 p_v7000_c1_300_08 1.9TB generic_hdd
8 m_v7000_c1_900_01 online array 8 p_v7000_c1_900_01 4.9TB generic_hdd
9 m_v7000_c1_900_02 online array 9 p_v7000_c1_900_02 4.9TB generic_hdd
10 m_v7000_c1_900_03 online array 10 p_v7000_c1_900_03 4.9TB generic_hdd
11 m_v7000_c1_900_04 online array 11 p_v7000_c1_900_04 4.9TB generic_hdd
12 m_v7000_c1_900_05 online array 12 p_v7000_c1_900_05 4.9TB generic_hdd
IBM_2076:Cluster_172.31.12.157:superuser>svcinfo lshost
id name port_count iogrp_count status
0 svc-c1-ctrl1 2 4 online
1 svc-c1-ctrl2 2 4 online
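Once the hosts are back online, the host-to-volume mappings restored by the recovery can be listed as a quick sanity check, for example:

svcinfo lshostvdiskmap
# one line per mapping: host, SCSI ID, volume name and UID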
You must wait for the recovery to complete before the SVC is fully operational.
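Before handing the system back, check whether any volumes remained offline after the recovery; a minimal sketch using the standard filtering syntax:

svcinfo lsvdisk -filtervalue status=offline
# any volume listed here needs the post procedure below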
==== Post procedure ====
=== Recovering from offline VDisks using the CLI ===
Any volume that is offline and is not a thin-provisioned (or compressed) volume is offline because of the loss of write-cache data during the event that caused all node canisters to lose their cluster state. **Any data lost from the write cache cannot be recovered.** These volumes might need additional recovery steps after they are brought back online.
* Delete all IBM FlashCopy function mappings and Metro Mirror or Global Mirror relationships that use the offline volumes.
* Run the **recovervdisk** or **recovervdiskbysystem** command, as sketched below. (This only brings the volume back online so that you can attempt to deal with the data loss.)
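A minimal sketch of that recovery for a single offline volume; the mapping, relationship and volume names are placeholders, and the first two commands are only needed where such objects actually use the offline volume:

svctask rmfcmap -force fcmap_on_offline_vdisk
svctask rmrcrelationship rcrel_on_offline_vdisk
svctask recovervdisk offline_vdisk_name
# or recover every offline volume in the system at once:
svctask recovervdiskbysystem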