gpfs:gpfs_cmd [2023/03/14 22:10] (current) manu
====== GPFS commands ======
  
=== mmuserauth ===

  mmuserauth service check [--data-access-method {file|object|all}] [-r|--rectify]
            [-N|--nodes {node-list|cesNodes}] [--server-reachability]

  --server-reachability: also verify that the configured authentication servers are reachable
  -r|--rectify: correct configuration errors found by the check
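As a sketch (the wrapper function and its stub-friendly first argument are ours, not part of GPFS), the two flags combine naturally: check reachability first, and rerun with --rectify only when the check reports a problem:

```shell
# Hypothetical wrapper around mmuserauth (assumed to be in PATH on a CES node).
# The optional first argument substitutes another command, e.g. a stub or echo,
# so the control flow can be exercised without a live cluster.
check_file_auth() {
    local cmd="${1:-mmuserauth}"
    if ! "$cmd" service check --data-access-method file --server-reachability; then
        # The check failed: rerun it with --rectify to repair the configuration.
        "$cmd" service check --data-access-method file --rectify
    fi
}
```

`check_file_auth echo` prints the command lines instead of executing them, which is a convenient dry run.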
=== mmchnode ===
  
    Removes one or more groups from the specified nodes.
</code>

<cli prompt='#'>
[root@prscale-a-02 ~]# mmcloudgateway node list
 Node  Cloud node name                  Cloud Node Class
--------------------------------------------------------
    1  prscale-a-01                     tct_group1
</cli>
<cli prompt='#'>
[root@prscale-a-02 ~]# mmchnode --cloud-gateway-disable -N prscale-a-01 --cloud-gateway-nodeclass tct_group1
</cli>
 +
<cli prompt='#'>
[root@prscale-a-01 ~]# mmcloudgateway service status

=============================
Cloud Node Class:  tct_group1
=============================

 Cloud Service                 Status      Reason
------------------------------------------------
 tct_tiering1-vault_backup_01  ENABLED

                                           Server     Account /   Container /
 Node  Daemon node name                    Status     CSAP        File System/Set               Status    Reasons
-----------------------------------------------------------------------------------------------------------------
    1  prscale-a-01                        STARTED    vault_backup_01
                                                      vault_backup_01                                ONLINE
</cli>
<cli prompt='#'>
[root@prscale-a-01 ~]# mmcloudgateway cloudStorageAccessPoint list

Configured cloudStorageAccessPoint options from node class tct_group1:
----------------------------------------------------------------------
  csapName                        :  vault_backup_01
  accountName                     :  vault_backup_01
  url                             :  http://10.0.0.1
  mpuPartsSize                    :  134217728
  sliceSize                       :  524288
  proxyIp                         :
  proxyPort                       :
  region                          :
</cli>
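The size options in the listing are byte counts (our reading; the field names come straight from the output): mpuPartsSize 134217728 is the multipart-upload part size and works out to 128 MiB, and sliceSize 524288 to 512 KiB:

```shell
# Convert the byte values from the CSAP listing into binary units.
echo "$((134217728 / 1024 / 1024)) MiB"   # mpuPartsSize -> 128 MiB
echo "$((524288 / 1024)) KiB"             # sliceSize    -> 512 KiB
```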
<cli prompt='#'>
[root@prscale-a-01 ~]# mmcloudgateway account list

Configured account options from node class tct_group1:
------------------------------------------------------
  accountName                     :  vault_backup_01
  cloudType                       :  s3
  userName                        :  s4KhHAJtjQ2xxxxxxxx
  tenantId                        :
</cli>
<cli prompt='#'>
[root@prscale-a-01 ~]# mmcloudgateway service status

=============================
Cloud Node Class:  tct_group1
=============================

 Cloud Service                 Status      Reason
------------------------------------------------
 tct_tiering1-vault_backup_01  ENABLED

                                           Server     Account /   Container /
 Node  Daemon node name                    Status     CSAP        File System/Set               Status    Reasons
-----------------------------------------------------------------------------------------------------------------
    1  prscale-a-01                        STARTED    vault_backup_01
                                                      vault_backup_01                                ONLINE
[root@prscale-a-01 ~]# mmcloudgateway cloudService delete --cloud-nodeclass tct_group1 --cloud-service-name tct_tiering1-vault_backup_01
mmcloudgateway: Sending the command to the first successful node starting with prscale-a-01
mmcloudgateway: This may take a while...
mmcloudgateway: Command completed successfully on prscale-a-01.
mmcloudgateway: Command completed.
</cli>

Deleting the node fails while the cloud gateway callback still references it:
<cli prompt='#'>
[root@prscale-a-02 ~]# mmdelnode -N prscale-a-01
mmdelnode: Node prscale-a-01 still appears in the following callbacks:
        CLOUDGWnotify_1
mmdelnode: Command failed. Examine previous error messages to determine cause.
</cli>
<cli prompt='#'>
[root@prscale-a-02 ~]# mmlscallback -Y
mmlscallback::HEADER:version:reserved:reserved:identifier:command:priority:sync:timeout:event:node:parms:onError:object:
mmlscallback::0:1:::GUI_CCR_CHANGE:/usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh::false::ccrFileChange:GUI_MGMT_SERVERS:root %eventName %ccrObjectName %ccrObjectVersion::gui_master_node,_gui.user.repo,_gui.snapshots,_gui.notification,_gui.settings,_gui.dashboards,_gui.keystore,_gui.keystore_settings,_gui.ldap_settings,_gui.policysettings,gui_jobs,mmsdrfs,spectrum-scale-object-policies.conf:
mmlscallback::0:1:::GUI_CM_TAKEOVER:/usr/lpp/mmfs/gui/callbacks/global/cmTakeOverCallback_421.sh::false::clusterManagerTakeover:GUI_MGMT_SERVERS:root %eventName %eventNode %clusterManager:::
mmlscallback::0:1:::GPFS_STARTUP_SHUTDOWN:/usr/lpp/mmfs/gui/callbacks/global/startupShutdownCallback_421.sh::false::shutdown,startup:GUI_MGMT_SERVERS:root %eventName %eventNode:::
mmlscallback::0:1:::GUI_NODES:/usr/lpp/mmfs/gui/callbacks/global/NodeQuorumCallback_421.sh::false::nodeJoin,nodeLeave,quorumLoss,quorumNodeJoin,quorumNodeLeave,quorumReached:GUI_MGMT_SERVERS:root %eventName %eventNode %quorumNodes %downNodes %upNodes %clusterName:::
mmlscallback::0:1:::GUI_DISK_SPACE:/usr/lpp/mmfs/lib/gui_enablement/diskSpaceCallback::false::lowDiskSpace,noDiskSpace:GUI_SERVERS:root %eventName %eventNode %reason %fsName %filesetName %storagePool:::
mmlscallback::0:1:::GUI_MOUNT_ACTION:/usr/lpp/mmfs/lib/gui_enablement/mountActionCallback::false::mount,unmount:GUI_SERVERS:root %eventName %eventNode %fsName:::
mmlscallback::0:1:::GUI_THRESHOLD_MIGRATION:/usr/lpp/mmfs/lib/gui_enablement/thresholdMigration::false::lowDiskSpace,noDiskSpace:GUI_SERVERS:root %fsName %storagePool %reason %eventNode:::
mmlscallback::0:1:::CLOUDGWnotify_1:/opt/ibm/MCStore/scripts/mcstorenotify::true::startup,preShutdown,quorumLoss:10.0.0.1:%eventName -N prscale-a-01 tct_group1:::
</cli>
<cli prompt='#'>
[root@prscale-a-02 ~]# mmdelcallback CLOUDGWnotify_1
mmdelcallback: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
[root@prscale-a-02 ~]#
</cli>
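When mmdelnode is blocked like this, the machine-readable -Y listing can be filtered to find every callback that mentions the node. A minimal sketch, with the CLOUDGWnotify_1 line copied from the output above into a heredoc (on a live cluster, pipe `mmlscallback -Y` into awk instead):

```shell
# In the colon-delimited -Y format, field 7 is the callback identifier.
# Print it for every non-header row that mentions the node anywhere
# (command path, node list, or parameters).
node=prscale-a-01
found=$(awk -F: -v n="$node" '$3 != "HEADER" && $0 ~ n { print $7 }' <<'EOF'
mmlscallback::HEADER:version:reserved:reserved:identifier:command:priority:sync:timeout:event:node:parms:onError:object:
mmlscallback::0:1:::CLOUDGWnotify_1:/opt/ibm/MCStore/scripts/mcstorenotify::true::startup,preShutdown,quorumLoss:10.0.0.1:%eventName -N prscale-a-01 tct_group1:::
EOF
)
echo "$found"   # -> CLOUDGWnotify_1
```

Each identifier found this way can then be removed with mmdelcallback before retrying mmdelnode.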
  
=== mmperfmon ===
  mmperfmon config show
  
Do not edit the collector config file; it is generated automatically:
  /opt/IBM/zimon/ZIMonCollector.cfg
  
<cli prompt='#'>
[root@prscale-a-01 ~]# systemctl restart pmcollector.service
</cli>
  
<cli prompt='#'>
[root@prscale-a-01 ~]# mmperfmon config show --config-file OutputFile.txt
</cli>
  
Edit the output file to add or remove parameters or nodes, then apply it:
<cli prompt='#'>
[root@prscale-a-01 ~]# mmperfmon config update --config-file OutputFile.txt
</cli>
<cli prompt='#'>
[root@prscale-a-01 ~]# mmdelnode -N prscale-a-02
Verifying GPFS is stopped on all affected nodes ...
mmdelnode: Removing GPFS system files on all deleted nodes ...
mmdelnode: Command successfully completed
mmdelnode: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
</cli>
  
  
gpfs/gpfs_cmd.1637746186.txt.gz · Last modified: 2021/11/24 10:29 by manu