Spectrum Scale installation

If a cluster is already running, you can populate the configuration file used by the spectrumscale command:

https://www.ibm.com/docs/en/spectrum-scale/5.1.0?topic=reference-spectrumscale-command

Cluster definition file location (depends on the version):

/usr/lpp/mmfs/5.1.3.1/ansible-toolkit/ansible/ibm-spectrum-scale-install-infra/vars/scale_clusterdefinition.json
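
To review what config populate wrote, the JSON can be pretty-printed (a minimal sketch, assuming python3 is available on the installer node; adjust the path to your installed release):

# cd /usr/lpp/mmfs/5.1.3.1/ansible-toolkit/ansible/ibm-spectrum-scale-install-infra/vars
# python3 -m json.tool scale_clusterdefinition.json | less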

install

  Installs, creates a GPFS cluster, creates NSDs and adds nodes to an existing GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-install and post-install checks will be performed automatically). 

If you only want to create the file systems, or want to deploy protocols, issue these commands:

deploy

  Creates file systems, deploys protocols, and configures protocol authentication on an existing GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-deploy and post-deploy checks will be performed automatically). 

upgrade

  Performs upgrade procedure, upgrade precheck, upgrade postcheck, and upgrade related configuration to add nodes as offline, or exclude nodes from the upgrade run.
[root@prscale-a-01 ansible-toolkit]# pwd
/usr/lpp/mmfs/5.1.1.0/ansible-toolkit
[root@prscale-a-01 ansible-toolkit]#  ./spectrumscale config populate -N prscale-a-01
[ INFO  ] Logging to file: /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/logs/config-populate-02-06-2021_13:17:24.log
[ INFO  ] Detected scale_clusterdefinition.json file present in directory /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/ansible/ibm-spectrum-scale-install-infra/vars.
Installer will keep backup of existing scale_clusterdefinition.json file in /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/ansible/ibm-spectrum-scale-install-infra/vars path and populate a new one. Do you want to continue  [Y/n]: Y
[ INFO  ] Updating existing configuration. It may take few moments ....
[ INFO  ] Getting installer path
[ INFO  ] Populating protocols detail
[ INFO  ] Populating enabled protocols detail
[ INFO  ] Populating export ip of ces nodes
[ INFO  ] Populating interface details
[ INFO  ] Adding admin node into configuration
...

Read the config:

[root@prscale-a-01 ansible-toolkit]# ./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 10.0.0.10
[ INFO  ]
[ INFO  ] [Cluster Details]
[ INFO  ] Name: scale_01.cluster
[ INFO  ] Setup Type: Spectrum Scale
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Disabled
[ INFO  ] SMB    : Enabled
[ INFO  ] NFS    : Enabled
[ INFO  ] HDFS   : Disabled
[ INFO  ]
[ INFO  ] [Extended Features]
[ INFO  ] File Audit logging     : Disabled
[ INFO  ] Watch folder           : Disabled
[ INFO  ] Management GUI         : Enabled
[ INFO  ] Performance Monitoring : Enabled
[ INFO  ] Callhome               : Disabled
[ INFO  ]
[ INFO  ] GPFS         Admin  Quorum  Manager   NSD   Protocol   GUI    Perf Mon    OS   Arch
[ INFO  ] Node          Node   Node     Node   Server   Node    Server Collector
[ INFO  ] prscale-a-01   X       X       X       X        X       X        X      rhel8  x86_64
[ INFO  ] prscale-a-02           X               X                                rhel8  x86_64
[ INFO  ] prscale-b-01   X       X       X       X        X       X        X      rhel8  x86_64
[ INFO  ]
[ INFO  ] [Export IP address]
[ INFO  ] 10.0.0.14 (pool)
[ INFO  ] 10.0.0.15 (pool)
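The NSD and file system details picked up by config populate can be listed the same way (a sketch; these are standard spectrumscale subcommands, see the reference linked above):

[root@prscale-a-01 ansible-toolkit]# ./spectrumscale nsd list
[root@prscale-a-01 ansible-toolkit]# ./spectrumscale filesystem list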
Example of readying Red Hat Linux nodes for Spectrum Scale installation and deployment of protocols
Configure passwordless (promptless) SSH between all nodes (required)
# ssh-keygen (if using RHEL 8.x, make sure to run ssh-keygen -m PEM or else the install toolkit will have issues with node logins)
# ssh-copy-id <FQDN of node>
# ssh-copy-id <IP of node>
# ssh-copy-id <non-FQDN hostname of node>
repeat on every node, copying keys to all nodes (including the current node)
Turn off firewalls (the alternative is to open the ports each Spectrum Scale function uses; see the sketch after this step)
# systemctl stop firewalld
# systemctl disable firewalld
repeat on all nodes
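If you would rather keep firewalld running, open the required ports instead (a minimal sketch; 1191/tcp is the GPFS daemon port, and the remaining ports depend on which protocols, GUI, and monitoring features you deploy, so check the Knowledge Center port list first):
# firewall-cmd --permanent --add-port=1191/tcp    # GPFS daemon communication
# firewall-cmd --permanent --add-service=ssh      # admin and installer access
# firewall-cmd --reload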
How to check if a yum repository is configured correctly
# yum repolist -> should return no errors. It must also show an RHEL7.x base repository. Other repository possibilities include a satellite site, a custom yum repository, an RHELx.x DVD iso, an RHELx.x physical DVD.
Use the included local-repo tool to spin up a repository for a base OS DVD (this tool works on RHEL, Ubuntu, SLES)
# cd /usr/lpp/mmfs/5.1.0.x/tools/repo
# cat readme_local-repo | more
# ./local-repo --mount default --iso /root/RHEL7.9.iso
What if I don't want to use the Install Toolkit - how do I get a repository for all the Spectrum Scale rpms?
# cd /usr/lpp/mmfs/5.1.0.x/tools/repo
# ./local-repo --repo
# yum repolist
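If you prefer to write the repo file by hand, point yum at one of the rpm directories created when the self-extracting Scale package was run (a sketch; the gpfs_rpms path is an assumption based on a 5.1.0.x extraction, and the directory may need repodata generated first):
# yum install createrepo                          # createrepo_c on RHEL 8
# createrepo /usr/lpp/mmfs/5.1.0.x/gpfs_rpms
# cat > /etc/yum.repos.d/scale-gpfs.repo <<'EOF'
[scale-gpfs]
name=Spectrum Scale GPFS rpms (local)
baseurl=file:///usr/lpp/mmfs/5.1.0.x/gpfs_rpms
enabled=1
gpgcheck=0
EOF
# yum repolist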
Pre-install prerequisite rpms to make installation and deployment easier
# yum install kernel-devel cpp gcc gcc-c++ glibc sssd ypbind openldap-clients krb5-workstation
Turn off SELinux (or set it to permissive mode); a scripted version is sketched after this step
# sestatus
# vi /etc/selinux/config
change SELINUX=xxxxxx to SELINUX=disabled
save and reboot
repeat on all nodes
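The same change can be scripted on each node (a minimal sketch; setenforce 0 makes the running system permissive until the reboot picks up the disabled setting):
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
# sestatus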
Set up a default path to the Spectrum Scale commands (optional)
# vi /root/.bash_profile
——add this line——
export PATH=$PATH:/usr/lpp/mmfs/bin
——save/exit——
logout and back in for changes to take effect
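Or append the line without an editor (sketch):
# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /root/.bash_profile
# source /root/.bash_profile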

Spectrum Scale installation

Example of a new Spectrum Scale cluster installation followed by a protocol deployment
Install Toolkit commands for Installation:
- Toolkit is running from cluster-node1 with an internal cluster network IP of 10.11.10.11, which all nodes can reach
cd /usr/lpp/mmfs/5.1.0.x/installer/
./spectrumscale setup -s 10.11.10.11
./spectrumscale node add cluster-node1 -a -g
./spectrumscale node add cluster-node2 -a -g
./spectrumscale node add cluster-node3
./spectrumscale node add cluster-node4
./spectrumscale node add cluster-node5 -n
./spectrumscale node add cluster-node6 -n
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 1 "/dev/sdb"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 2 "/dev/sdc"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 1 "/dev/sdd"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 1 "/dev/sde"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 2 "/dev/sdf"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 2 "/dev/sdg"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdh"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdi"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdj"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdk"
./spectrumscale config perfmon -r on
./spectrumscale config ntp -e on -s ntp_server1,ntp_server2,ntp_server3
./spectrumscale callhome enable <- If you prefer not to enable callhome, change the enable to a disable
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale config gpfs -c mycluster
./spectrumscale node list
./spectrumscale install --precheck
./spectrumscale install
Install Outcome: a 6-node Spectrum Scale cluster with active NSDs
2 GUI nodes
2 NSD nodes
2 client nodes
10 NSDs
configured performance monitoring
callhome configured
**3 file systems defined, each with 2 failure groups. File systems will not be created until a deployment**
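
A few native commands confirm the install outcome once the toolkit finishes (a sketch; run from any cluster node):
# /usr/lpp/mmfs/bin/mmlscluster          # cluster name and node roles
# /usr/lpp/mmfs/bin/mmgetstate -a        # GPFS daemon state on every node
# /usr/lpp/mmfs/bin/mmlsnsd              # NSDs and the file systems they belong to
# /usr/lpp/mmfs/bin/mmhealth cluster show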

Spectrum Scale installation

Install Toolkit commands for Protocol Deployment (assumes the cluster was created from the configuration above)
- Toolkit is running from the same node that performed the install above, cluster-node1
./spectrumscale node add cluster-node3 -p
./spectrumscale node add cluster-node4 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale enable object
./spectrumscale config object -e mycluster-ces
./spectrumscale config object -o Object_Fileset
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS
./spectrumscale config object -au admin -ap -dp
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy
Deploy Outcome:
2 Protocol nodes
Active SMB and NFS file protocols
Active Object protocol
cesSharedRoot file system created and used for protocol configuration and state data
ObjectFS file system created with an Object_Fileset created within
fs1 file system created and ready
Next Steps:
- Configure Authentication with mmuserauth or by configuring authentication with the Install Toolkit and re-running the deployment
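As an illustration only, file authentication against Active Directory with mmuserauth could look roughly like the following (the server, bind user, and netbios values are placeholders; check the mmuserauth documentation for the exact options your release expects):
# mmuserauth service create --data-access-method file --type ad \
      --servers ad1.example.com --user-name administrator \
      --netbios-name mycluster --idmap-role master
# mmuserauth service list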

Spectrum Scale installation

Example of adding protocols to an existing cluster
Pre-req Configuration
Decide on a file system to use for cesSharedRoot (>=4GB). Preferably, a standalone file system solely for this purpose.
Take note of the file system name and mount point. Verify the file system is mounted on all protocol nodes.
Decide which nodes will be the Protocol nodes
Set aside CES-IPs that are unused in the current cluster and network. Do not attempt to assign the CES-IPs to any adapters.
Verify each Protocol node has a pre-established network route and IP not only on the GPFS cluster network, but also on the same network the CES-IPs will belong to. When protocols are deployed, the CES-IPs will be aliased to the active network device matching their subnet, so the CES-IPs must be free to move among nodes during failover.
Decide which protocols to enable. The protocol deployment will install all protocols but will enable only the ones you choose.
Add the new to-be protocol nodes to the existing cluster using mmaddnode (see the sketch after this list) or let the Install Toolkit add them.
In this example, we will add the protocol functionality to nodes already within the cluster.
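A minimal sketch of adding future protocol nodes with the native commands (the node names here are placeholders):
# mmaddnode -N cluster-node7,cluster-node8
# mmchlicense server --accept -N cluster-node7,cluster-node8
# mmstartup -N cluster-node7,cluster-node8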
Install Toolkit commands (Toolkit is running on a node that will become a protocol node)
./spectrumscale setup -s 10.11.10.15 <- internal gpfs network IP on the current Installer node that can see all protocol nodes
./spectrumscale config populate -N cluster-node5 <- pick a node in the cluster for the toolkit to use for automatic configuration
./spectrumscale node add cluster-node5 -a -p
./spectrumscale node add cluster-node6 -p
./spectrumscale node add cluster-node7 -p
./spectrumscale node add cluster-node8 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale enable object
./spectrumscale config object -e mycluster-ces
./spectrumscale config object -o Object_Fileset
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS
./spectrumscale config object -au admin -ap -dp
./spectrumscale callhome enable <- If you prefer not to enable callhome, change the enable to a disable
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy
Deploy Outcome:
CES Protocol stack added to 4 nodes, now designated as Protocol nodes with server licenses
4 CES-IPs distributed among the protocol nodes
Protocol configuration and state data will use the cesSharedRoot file system
Object protocol will use the ObjectFS filesystem
Callhome will be configured
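When the deployment completes, the CES layout can be verified with (sketch):
# mmces node list        # protocol nodes and their CES state
# mmces address list     # CES-IP to node assignment
# mmces service list -a  # enabled protocol services on each node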

Spectrum Scale installation

Example of Upgrading protocol nodes / other nodes (not in an ESS)
Pre-Upgrade planning:
  * Refer to the Knowledge Center for supported upgrade paths of Spectrum Scale nodes
  * Consider whether OS, FW, or drivers on the protocol node(s) should be upgraded and plan this either before or after the install toolkit upgrade
  * SMB: requires quiescing all I/O for the duration of the upgrade. Because of the SMB clustering functionality, differing SMB levels cannot co-exist within a cluster at the same time, so a full SMB outage is required during the upgrade.
  * NFS: it is recommended to quiesce all I/O for the duration of the upgrade. NFS experiences I/O pauses and, depending upon the client, mounts may disconnect during the upgrade.
  * Object: it is recommended to quiesce all I/O for the duration of the upgrade. The Object service will be down or interrupted multiple times during the upgrade process; clients may see errors or be unable to connect during this time and should retry as appropriate.
  * Performance Monitoring: collector(s) may experience brief periods in which no performance data is logged while the nodes upgrade.

Install Toolkit commands:
  ./spectrumscale setup -s 10.11.10.11 -st ss <- internal gpfs network IP on the current Installer node that can see all protocol nodes
  ./spectrumscale config populate -N <hostname_of_any_node_in_cluster>

** If config populate is incompatible with your configuration, add the nodes and CES configuration to the install toolkit manually **
  ./spectrumscale node list <- This is the list of nodes the Install Toolkit will upgrade. Remove any non-CES nodes you would rather do manually
  ./spectrumscale upgrade precheck
  ./spectrumscale upgrade run
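
Once every node runs the new level, the upgrade is typically finished by raising the cluster release level and the file system versions (a sketch; these steps are not reversible, so run them only after all nodes are upgraded):
  # mmlsconfig minReleaseLevel
  # mmchconfig release=LATEST
  # mmchfs <filesystem> -V full    # repeat for each file system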