https://www.ibm.com/support/pages/shared-storage-pool-ssp-best-practice

https://www.ibm.com/support/pages/shared-storage-pools-cheat-sheet

===== What is an SSP ? =====

It can be compared to a datastore in the VMware world, with some differences.

{{:aix:vios_ssp01.png?800|}}
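
To get a first look at an existing pool, a few list commands can be run from the padmin shell (a minimal sketch; "clusterA" and "poolA" are placeholder names for your own cluster and pool):
<cli prompt='$'>
$ cluster -list
$ cluster -status -clustername clusterA
$ lssp -clustername clusterA -sp poolA -bd
</cli>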
| + | |||
| + | ===== Stop/start a cluster SSP ===== | ||
| + | |||
| + | To take a node offline for maintenance, type the command as follows: | ||
| + | <cli prompt='$'> | ||
| + | $ clstartstop -stop -n clustername -m nodeA | ||
| + | </cli> | ||
| + | To bring the node back online after maintenance is completed, type the command as follows: | ||
| + | <cli prompt='$'> | ||
| + | $ clstartstop -start -n clustername -m nodeA | ||
| + | </cli> | ||
| + | To take all the nodes offline for maintenance, type the command as follows: | ||
| + | <cli prompt='$'> | ||
| + | $ clstartstop -stop -n clustername -a | ||
| + | </cli> | ||
| + | To bring all the nodes back online after maintenance is completed, type the command as follows: | ||
| + | <cli prompt='$'> | ||
| + | $ clstartstop -start -n clustername -a | ||
| + | </cli> | ||
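After a stop or start, the node state can be checked (a hedged example; "clustername" is the same placeholder as above, and a node taken offline with clstartstop is normally reported as DOWN while an active one shows OK):
<cli prompt='$'>
$ cluster -status -clustername clustername
</cli>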

===== Custom script =====
So here is a quick one-line script "ncluster"
===== Problems with SSP =====
| + | |||
| + | http://gibsonnet.net/blog/cgaix/html/vios_ssp_wont_start.html | ||
| + | |||
Remove cluster:
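A minimal sketch of the removal step, assuming the cluster is named clusterA (cluster -delete destroys the whole cluster and its pool definition, so double-check the name first):
<cli prompt='$'>
$ cluster -delete -clustername clusterA
</cli>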
Then reboot
| + | |||
| + | | ||
| ===== Where is store the config of SSP ===== | ===== Where is store the config of SSP ===== | ||
https://www.ibm.com/support/pages/shared-storage-pools-hands-fun-virtual-disks-lu-example
<code>
$ ./nslim -?
Usage: ./nslim (v4) is a filter style program using stdin & stdout
It will thinly write a file (only copy non-zero blocks)
It uses 1MB blocks
If a block is zero-filled then it is skipped using lseek()
If a block has data then it will write() the block unchanged
Example:
  ./nslim <AIX.lu >SSP-LU-name
Flags:
  -v for verbose output for every block you get a W=write or .=lseek on stderr
  -V for verbose output on each GB you get count of written or skipped blocks
     ./nslim -v <AIX.lu >SSP-LU-name
     this gives you visual feedback on progress
  -t like verbose but does NOT actually write anything to stdout
     this lets you passively see the mix of used and unused blocks
     ./nslim -t <AIX.lu
  -h or -? outputs this helpful message!
Warning:
  Get the redirection wrong and you will destroy your LU data
</code>
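A usage sketch of the thin copy itself, with -V for per-GB progress as described above (the paths here are illustrative only: the source is a file-backed AIX image and the target must be an existing LU file under the pool directory, whose real name will differ, so check the redirection direction before running):
<cli prompt='#'>
$ oem_setup_env
# ./nslim -V </home/padmin/AIX.lu >/var/vio/SSP/clusterA/D_E_F_A_U_L_T_061310/VOL1/vm01.lu
</cli>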

===== Removing Shared Storage Pool (SSP) Node From SSP Cluster With Mappings =====

https://www.ibm.com/support/pages/removing-shared-storage-pool-ssp-node-ssp-cluster-mappings

===== Node Types (roles) In Shared Storage Pool Cluster =====

In an SSP cluster, some nodes perform key roles, each handled in a specific layer. These layers are classified as:
  * Cluster Aware AIX (CAA)
  * Database (DBN)
  * Message Format Service (MFS)

=== How to determine current CAA LEADER node ===

<cli prompt='#'>
$ oem_setup_env
# pooladm dump node | grep -i leader
</cli>
Whichever one returns "amILeader=1" is the leader node.
| + | |||
| + | === How to determine current MFS node === | ||
| + | |||
| + | <cli prompt='#'> | ||
| + | $ oem_setup_env | ||
| + | # pooladm pool lsmfs /var/vio/SSP/[CLUSTER_NAME]/D_E_F_A_U_L_T_061310 | ||
| + | </cli> | ||
| + | |||
| + | === How to determine DBN node === | ||
| + | |||
| + | <cli prompt='$'> | ||
| + | $ cluster -status -verbose | grep -p DBN | ||
| + | </cli> | ||
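In the verbose output, the node whose "Node Roles" field shows DBN is the database node (field name as seen on recent VIOS levels; grep -p prints the whole stanza for that node).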