Creation of a file system, fileset, or path for the CES shared root, and creation of an object fileset

The installation toolkit uses a shared root storage area to install the protocols on each node. This storage is also used by the NFS and object protocols to maintain system data associated with the cluster integration. The storage can be a subdirectory in an existing GPFS file system or a file system of its own. Once this option is set, changing it requires a restart of GPFS.
1. Create a file system or fileset for the shared root. **Size must be at least 4 GB.**
2. Point the cluster at it with the following command. Note that CES requires the CCR repository, enabled with mmchcluster --ccr-enable as shown in the transcript below:
mmchconfig cesSharedRoot=path_to_the_filesystem/fileset_created_in_step_1
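If a dedicated fileset is preferred over a plain subdirectory, a minimal sketch could look like this (the fileset name cesroot and the junction path are assumptions; gpfs01lv is the file system device used in the transcript below):

mmcrfileset gpfs01lv cesroot --inode-space new
mmlinkfileset gpfs01lv cesroot -J /gpfs01/cesroot
mmchconfig cesSharedRoot=/gpfs01/cesroot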
For Object, the installation toolkit creates an independent fileset in the GPFS file system that you name.
[root@gpfs01 ~]# mkdir /gpfs01/.cesSharedRoot
[root@gpfs01 ~]# ls -lsa /gpfs01
4 drwxr-xr-x 2 root root 4096 12 Jun 14:56 .cesSharedRoot
1 dr-xr-xr-x 2 root root 8192  1 Jan  1970 .snapshots
[root@gpfs01 ~]# mmchconfig cesSharedRoot=/gpfs01/.cesSharedRoot
[root@gpfs01 ~]# mmlsconfig
Configuration data for cluster gpfs01.cluster:
----------------------------------------------
clusterName gpfs01.cluster
clusterId 17066707964194168573
autoload no
uidDomain GPFS
dmapiFileHandleSize 32
minReleaseLevel 5.0.0.0
tiebreakerDisks GPFS_NSD_DATA01
cesSharedRoot /gpfs01/.cesSharedRoot
adminMode central

File systems in cluster gpfs01.cluster:
---------------------------------------
/dev/gpfs01lv

[root@gpfs01 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfs01.cluster
  GPFS cluster id:           17066707964194168573
  GPFS UID domain:           GPFS
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           server-based

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    gpfs01
  Secondary server:  (none)

 Node  Daemon node name  IP address    Admin node name  Designation
-------------------------------------------------------------------
   1   gpfs01            10.10.105.10  gpfs01           quorum-manager

[root@gpfs01 ~]# mmchcluster --ccr-enable
[root@gpfs01 ~]# mmlscluster | grep Repo
  Repository type:           CCR
[root@gpfs01 ~]# yum -y install gpfs.smb nfs-utils nfs-ganesha-gpfs nfs-ganesha
[root@gpfs01 ~]# systemctl mask nfs-server.service
Created symlink from /etc/systemd/system/nfs-server.service to /dev/null.
[root@gpfs01 ~]# systemctl stop nfs
Enable CES for nodes
[root@gpfs01 ~]# mmchnode --ces-enable -N gpfs01,gpfs02
Fri Sep 30 17:12:30 CEST 2016: mmchnode: Processing node gpfs01
Fri Sep 30 17:12:50 CEST 2016: mmchnode: Processing node gpfs02
mmchnode: Propagating the cluster configuration data to all affected nodes.
  This is an asynchronous process.
[root@gpfs01 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfs_test.rhlabh1
  GPFS cluster id:           9668046452208786064
  GPFS UID domain:           gpfs_test.rhlabh1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
---------------------------------------------------------------------
   1   gpfs01            10.10.10.103  gpfs01           quorum-manager-perfmon
   2   gpfs02            10.10.10.104  gpfs02           quorum-manager-perfmon
[root@gpfs01 ~]# mmces service enable NFS
[root@gpfs01 ~]# mmces service enable SMB
[root@gpfs01 ~]# mmlscluster --ces

GPFS cluster information
========================
  GPFS cluster name:         gpfs_test.rhlabh1
  GPFS cluster id:           9668046452208786064

Cluster Export Services global parameters
-----------------------------------------
  Shared root directory:          /gpfs1
  Enabled Services:               NFS SMB
  Log level:                      0
  Address distribution policy:    even-coverage

 Node  Daemon node name  IP address    CES IP address list
-----------------------------------------------------------------------
   1   gpfs01            10.10.10.103  None
   2   gpfs02            10.10.10.104  None

[root@gpfs01 ~]# mmces service list --all
Enabled services: NFS SMB
gpfs01: NFS is running, SMB is running
gpfs02: NFS is running, SMB is running
mmces service start SMB -a
mmces service start NFS -a
After you start the protocol services, verify that they are running by issuing the mmces state show -a command:
[root@gpfs01 ~]# mmces state show -a
NODE    AUTH      BLOCK     NETWORK  AUTH_OBJ  NFS      OBJ       SMB      CES
gpfs01  DISABLED  DISABLED  HEALTHY  DISABLED  HEALTHY  DISABLED  HEALTHY  HEALTHY
gpfs02  DISABLED  DISABLED  HEALTHY  DISABLED  HEALTHY  DISABLED  HEALTHY  HEALTHY
Add IP addresses for cluster NFS and CIFS access
[root@gpfs01 ~]# mmces address add --ces-ip gpfs01-nfs
[root@gpfs01 ~]# mmces address add --ces-ip gpfs02-cifs
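To check which CES addresses are defined and how they are distributed across the nodes, the address list can be displayed with:

[root@gpfs01 ~]# mmces address list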
[root@gpfs01 ~]# mmuserauth service list
FILE access not configured
PARAMETERS                    VALUES
-------------------------------------------------

OBJECT access not configured
PARAMETERS                    VALUES
-------------------------------------------------

[root@gpfs01 ~]# mmuserauth service create --data-access-method file --type userdefined
File authentication configuration completed successfully.

[root@gpfs01 ~]# mmuserauth service list
FILE access configuration : USERDEFINED
PARAMETERS                    VALUES
-------------------------------------------------

OBJECT access not configured
PARAMETERS                    VALUES
-------------------------------------------------
Create an NFS export

[root@gpfs01 ~]# mmnfs export add '/gpfs01/backupdb' -c '10.1.0.0/16(Access_Type=RW,squash=root_squash,protocols=3:4)'
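As a quick sanity check, the export can then be mounted from a client in the allowed subnet (the client host and mount point below are assumptions; gpfs01-nfs is the CES address added earlier):

[root@client ~]# mkdir -p /mnt/backupdb
[root@client ~]# mount -t nfs -o vers=4 gpfs01-nfs:/gpfs01/backupdb /mnt/backupdb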
[root@gpfs01 ~]# yum -y install gpfs.gss.pmcollector gpfs.gss.pmsensors gpfs.pm-ganesha
[root@gpfs01 ~]# systemctl enable pmsensors.service
[root@gpfs01 ~]# systemctl start pmsensors.service
[root@gpfs01 ~]# systemctl enable pmcollector.service
[root@gpfs01 ~]# systemctl start pmcollector.service
Now configure the performance monitoring sensors (PM_SENSORS)
[root@gpfs01 ~]# mmperfmon config generate --collectors gpfs01-hb,gpfs02-hb
mmperfmon: Node gpfs01-hb is not a perfmon node.
mmperfmon: Node gpfs02-hb is not a perfmon node.
mmperfmon: Propagating the cluster configuration data to all affected nodes.
  This is an asynchronous process.
Test it
[root@gpfs01 ~]# /usr/lpp/mmfs/gui/cli/runtask PM_SENSORS --debug
debug: locale=en_US
debug: Running 'mmperfmon config show ' on node localhost
debug: Reading output of 'mmperfmon config show'
debug: Parsed data for 48 sensors
debug: syncDb(): new/changed/unchanged/deleted 0/48/0/0
debug: Running 'mmsysmonc event 'gui' 'gui_refresh_task_successful' ' on node localhost
EFSSG1000I The command completed successfully.
Show the config
[root@gpfs01 ~]# mmperfmon config show
# This file has been generated automatically and SHOULD NOT
# be edited manually. It may be overwritten at any point
# in time.
cephMon = "/opt/IBM/zimon/CephMonProxy"
cephRados = "/opt/IBM/zimon/CephRadosProxy"
colCandidates = "gpfs01-hb", "gpfs02-hb"
colRedundancy = 1
collectors = {
    host = ""
    port = "4739"
}
config = "/opt/IBM/zimon/ZIMonSensors.cfg"
ctdbstat = ""
daemonize = T
hostname = ""
ipfixinterface = "0.0.0.0"
logfile = "/var/log/zimon/ZIMonSensors.log"
loglevel = "info"
mmcmd = "/opt/IBM/zimon/MMCmdProxy"
mmdfcmd = "/opt/IBM/zimon/MMDFProxy"
mmpmon = "/opt/IBM/zimon/MmpmonSockProxy"
piddir = "/var/run"
release = "5.0.1-1"
sensors = {
    name = "CPU"
    period = 1
},
{
    name = "Load"
    period = 1
},
...
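Individual sensor periods can be tuned later without regenerating the whole configuration, for example (the sensor name and period value are only an illustration):

[root@gpfs01 ~]# mmperfmon config update GPFSDiskCap.period=86400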
[root@gpfs01 ~]# yum -y install postgresql postgresql-libs postgresql-server
[root@gpfs01 ~]# yum -y install gpfs.gui gpfs.java
[root@gpfs01 ~]# systemctl enable gpfsgui
[root@gpfs01 ~]# systemctl start gpfsgui
Now you are ready to use the GUI at https://gpfs01/
user: admin / admin001
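If no GUI user exists yet, the administrative account can be created with the GUI CLI; a sketch matching the credentials above (check the exact invocation for your release):

[root@gpfs01 ~]# /usr/lpp/mmfs/gui/cli/mkuser admin -g SecurityAdmin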
The NFS Ganesha export configuration file is located at:
[root@gpfs01 ~]# cat /var/mmfs/ces/nfs-config/gpfs.ganesha.exports.conf
Show export options
[root@gpfs01 ~]# mmnfs export list -Y
mmcesnfslsexport:nfsexports:HEADER:version:reserved:reserved:Path:Delegations:Clients:
mmcesnfslsexport:nfsexports:0:1:::/gpfs01:NONE:*:
mmcesnfslsexport:nfsexports:0:1:::/gpfs01/backupdb:NONE:10.0.105.0/24:
Remove a share
[root@gpfs01 ~]# mmnfs export remove '/gpfs01'
List NFS config
[root@gpfs01 ~]# mmnfs config list

NFS Ganesha Configuration:
==========================
NFS_PROTOCOLS: 3,4
NFS_PORT: 2049
MNT_PORT: 0
NLM_PORT: 0
RQUOTA_PORT: 0
NB_WORKER: 256
LEASE_LIFETIME: 60
GRACE_PERIOD: 60
DOMAINNAME: VIRTUAL1.COM
DELEGATIONS: Disabled
==========================

STATD Configuration
==========================
STATD_PORT: 0
==========================

CacheInode Configuration
==========================
ENTRIES_HWMARK: 1500000
==========================

Export Defaults
==========================
ACCESS_TYPE: NONE
PROTOCOLS: 3,4
TRANSPORTS: TCP
ANONYMOUS_UID: -2
ANONYMOUS_GID: -2
SECTYPE: SYS
PRIVILEGEDPORT: FALSE
MANAGE_GIDS: FALSE
SQUASH: ROOT_SQUASH
NFS_COMMIT: FALSE
==========================

Log Configuration
==========================
LOG_LEVEL: EVENT
==========================

Idmapd Configuration
==========================
LOCAL-REALMS: localdomain
DOMAIN: localdomain
==========================
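A parameter from this list can be adjusted with mmnfs config change; the value below is only an illustration:

[root@gpfs01 ~]# mmnfs config change "LOG_LEVEL=INFO"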
SMB config:
[root@gpfs01 ~]# mmsmb config list

SMB option                        value
add share command                 /usr/lpp/mmfs/bin/mmcesmmccrexport
aio read size                     1
aio write size                    1
aio_pthread:aio open              yes
auth methods                      guest sam winbind
change notify                     yes
change share command              /usr/lpp/mmfs/bin/mmcesmmcchexport
client NTLMv2 auth                yes
ctdb locktime warn threshold      5000
ctdb:smbxsrv_open_global.tdb      false
debug hires timestamp             yes
delete share command              /usr/lpp/mmfs/bin/mmcesmmcdelexport
dfree cache time                  100
disable netbios                   yes
disable spoolss                   yes
dmapi support                     no
durable handles                   no
ea support                        yes
fileid:algorithm                  fsname
fileid:fstype allow               gpfs
force unknown acl user            yes
fruit:metadata                    stream
fruit:nfs_aces                    no
fruit:veto_appledouble            no
gencache:stabilize_count          10000
gpfs:dfreequota                   yes
gpfs:hsm                          yes
gpfs:leases                       yes
gpfs:merge_writeappend            no
gpfs:prealloc                     yes
gpfs:sharemodes                   yes
gpfs:winattr                      yes
groupdb:backend                   tdb
host msdfs                        yes
idmap config * : backend          autorid
idmap config * : range            10000000-299999999
idmap config * : rangesize        1000000
idmap config * : read only        no
idmap:cache                       no
include system krb5 conf          no
kernel oplocks                    no
large readwrite                   yes
level2 oplocks                    yes
log level                         1
log writeable files on exit       yes
logging                           syslog@0 file
mangled names                     illegal
map archive                       yes
map hidden                        yes
map readonly                      yes
map system                        yes
max log size                      100000
max open files                    20000
nfs4:acedup                       merge
nfs4:chown                        yes
nfs4:mode                         simple
notify:inotify                    yes
passdb backend                    tdbsam
password server                   *
posix locking                     no
preferred master                  no
printcap cache time               0
read only                         no
readdir_attr:aapl_max_access      false
security                          user
server max protocol               SMB3_02
server min protocol               SMB2_02
server string                     IBM NAS
shadow:fixinodes                  yes
shadow:snapdir                    .snapshots
shadow:snapdirseverywhere         yes
shadow:sort                       desc
smbd exit on ip drop              yes
smbd profiling level              on
smbd:async search ask sharemode   yes
smbd:backgroundqueue              False
socket options                    TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15
store dos attributes              yes
strict allocate                   yes
strict locking                    auto
syncops:onmeta                    no
tdbsam:map builtin                no
time_audit:timeout                5000
unix extensions                   no
use sendfile                      no
vfs objects                       shadow_copy2 syncops gpfs fileid time_audit
wide links                        no
winbind max clients               10000
winbind max domain connections    5
winbind:online check timeout      30
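Individual SMB options can be adjusted with mmsmb config change; the option and value below are only an illustration, and not every Samba option is changeable through the CES tooling:

[root@gpfs01 ~]# mmsmb config change --option "server string=IBM NAS cluster"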
SMB export list:
[root@gpfs01 ~]# mmsmb export list
export  path           browseable  guest ok  smb encrypt
samba   /gpfs01/samba  yes         no        auto
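For reference, an export like the one listed above could be created or removed with commands along these lines (the export name and path match the listing):

[root@gpfs01 ~]# mmsmb export add samba /gpfs01/samba
[root@gpfs01 ~]# mmsmb export remove samba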