====== Using Shared Storage Pools on VIOS ======
| + | |||
| + | https://github.com/nigelargriffiths/Shared-Storage-Pool-Tools | ||
| + | |||
| + | https://www.ibm.com/support/pages/shared-storage-pool-ssp-best-practice | ||
| + | |||
| + | https://www.ibm.com/support/pages/shared-storage-pools-cheat-sheet | ||
| + | |||
| + | ===== What is a SSP ? ===== | ||
| + | |||
It can be compared to a datastore in the VMware world, with some differences.

{{:aix:vios_ssp01.png?800|}}

===== Stop/start a cluster SSP =====

To take a node offline for maintenance, type the command as follows:
<cli prompt='$'>
$ clstartstop -stop -n clustername -m nodeA
</cli>
To bring the node back online after maintenance is completed, type the command as follows:
<cli prompt='$'>
$ clstartstop -start -n clustername -m nodeA
</cli>
To take all the nodes offline for maintenance, type the command as follows:
<cli prompt='$'>
$ clstartstop -stop -n clustername -a
</cli>
To bring all the nodes back online after maintenance is completed, type the command as follows:
<cli prompt='$'>
$ clstartstop -start -n clustername -a
</cli>
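After stopping or starting nodes you can check that they really changed state (assuming your cluster is called clustername as above):
<cli prompt='$'>
$ cluster -status -clustername clustername
</cli>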

===== Custom script =====
So here is a quick one line script "ncluster"
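The idea is to condense cluster -status -verbose into a short per-node summary; as a hedged illustration only (this is not the actual ncluster script, and the exact field labels may differ between VIOS levels):
<cli prompt='$'>
$ cluster -status -verbose | grep -E "Node Name|Node State|Node Repos State|Pool State|Node Roles|Node Upgrade Status"
</cli>
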
Here is another example after the SSP5 has noticed all the VIOS are at the higher level
<cli prompt='$'>
No State Repos Pool Role ---Upgrade-Status--- Node-Name
 1 OK    OK    OK        2.2.4.10 ON_LEVEL     bronzevios1.aixncc.uk.ibm.com
 4 OK    OK    OK        2.2.4.10 ON_LEVEL     orangevios1.aixncc.uk.ibm.com
 5 OK    OK    OK   DBN  2.2.4.10 ON_LEVEL     redvios1.aixncc.uk.ibm.com
</cli>

This works on BOTH VIOS 2.2.3 (random order) and the future VIOS 2.2.4 (Tier then LU name order) and you get:
<code>
SizeMB UsedMB Used% Type Tier Name
32768 0 0% THIN test testa
40960 2562 6% THIN prod vm96boot
8256 26 0% THIN prod vm96data
</code>
Note: I have two Tiers here and they are ordered first.

<cli prompt='$'>
$ nlu -?
/home/padmin/nlu Nigel's lu command with improved layout and column ordering
/home/padmin/nlu [-sizemb | -usedmb | -used | -type | -tier | -name (default)]
</cli>
Example default output by LU Name - my favourite default
<cli prompt='$'>
$ nlu
SizeMB UsedMB Used% Type Tier Name
38912 2562 6% THIN test vm97boot
8256 23 0% THIN test vm97data
</cli>
Example output reordered by column
<cli prompt='$'>
$ nlu -sizemb
SizeMB UsedMB Used% Type Tier Name
38912 2562 6% THIN test vm97boot
8256 23 0% THIN test vm97data
</cli>
Here is the actual ksh script for nlu, Nigel's new lu command

===== Problems with SSP =====

http://gibsonnet.net/blog/cgaix/html/vios_ssp_wont_start.html

Remove cluster:
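As a hedged reminder of the standard padmin commands for pulling a node out and deleting the cluster definition (the names are placeholders; see the link above for the cases where the cluster is too broken for these to work):
<cli prompt='$'>
$ cluster -rmnode -clustername myssp -hostname vios2
$ cluster -delete -clustername myssp
</cli>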
Then reboot

===== Where the SSP config is stored =====
In the folder:
/var/vio/SSP/
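For a quick look from the root shell (the [CLUSTER_NAME] directory carries your cluster name, and the pool itself is mounted underneath it):
<cli prompt='#'>
$ oem_setup_env
# ls -l /var/vio/SSP/
# ls -l /var/vio/SSP/[CLUSTER_NAME]/D_E_F_A_U_L_T_061310/
</cli>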
| + | |||
| + | ===== Convert a LU thick to thin in a SSP ===== | ||
| + | |||
| + | Use nslim ... | ||
| + | |||
| + | https://www.ibm.com/support/pages/shared-storage-pools-hands-fun-virtual-disks-lu-example | ||
| + | |||
<code>
$ ./nslim -?
Usage: ./nslim (v4) is a filter style program using stdin & stdout
  It will thinly write a file (only copy non-zero blocks)
  It uses 1MB blocks
  If a block is zero-filled then it is skipped using lseek()
  If a block has data then it will write() the block unchanged
Example:
  ./nslim <AIX.lu >SSP-LU-name
Flags:
  -v for verbose output for every block you get a W=write or .=lseek on stderr
  -V for verbose output on each GB you get count of written or skipped blocks
     ./nslim -v <AIX.lu >SSP-LU-name
     this gives you visual feedback on progress
  -t like verbose but does NOT actually write anything to stdout
     this lets you passively see the mix of used and unused blocks
     ./nslim -t <AIX.lu
  -h or -? outputs this helpful message!
Warning:
  Get the redirection wrong and you will destroy your LU data
</code>
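
As a hedged sketch of how nslim can be used for a thick-to-thin conversion - the LU names, the size and the .UDID suffix on the backing files are placeholders, the client LPAR must be stopped first, and the IBM page linked above is the reference for the real procedure:
<code>
$ mkbdsp -clustername myssp -sp mypool 64G -bd vm10_boot_thin
$ oem_setup_env
# cd /var/vio/SSP/myssp/D_E_F_A_U_L_T_061310/VOL1
# ls -l | grep vm10_boot
# ./nslim -V <vm10_boot.UDID >vm10_boot_thin.UDID
</code>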
| + | |||
| + | ===== Removing Shared Storage Pool (SSP) Node From SSP Cluster With Mappings ===== | ||
| + | |||
| + | https://www.ibm.com/support/pages/removing-shared-storage-pool-ssp-node-ssp-cluster-mappings | ||
| + | |||
| + | ===== Node Types (roles) In Shared Storage Pool Cluster ===== | ||
| + | |||
| + | In a SSP cluster, some nodes perform key roles handled in certain layers. These layers are classified as: | ||
| + | * Cluster Aware AIX (CAA) | ||
| + | * Database (DBN) | ||
| + | * Message Format Service (MFS) | ||
| + | |||
=== How to determine current CAA LEADER node ===

<cli prompt='#'>
$ oem_setup_env
# pooladm dump node | grep -i leader
</cli>
Whichever one returns "amILeader=1" is the leader node.
| + | |||
| + | === How to determine current MFS node === | ||
| + | |||
| + | <cli prompt='#'> | ||
| + | $ oem_setup_env | ||
| + | # pooladm pool lsmfs /var/vio/SSP/[CLUSTER_NAME]/D_E_F_A_U_L_T_061310 | ||
| + | </cli> | ||
| + | |||
| + | === How to determine DBN node === | ||
| + | |||
| + | <cli prompt='$'> | ||
| + | $ cluster -status -verbose | grep -p DBN | ||
| + | </cli> | ||