====== Spectrum Scale sizing ======
  
===== VM sizing for IBM Storage Scale =====

Sizing Summary for a Small Cluster

A common starting point for a small, moderately performing virtual Storage Scale cluster might be:
  * Minimum Nodes: 3 (for quorum and basic replication)
  * Per NSD Server VM (a pagepool sketch follows this list):
<code>
  vCPU: 8 vCPUs
  vRAM: 16 GB
  Network: 2 x 25 Gbps vNICs (bonded for I/O)
</code>
  * Per Protocol node:
<code>
  vCPU: 1 vCPU
  vRAM: 64 GB, or 128 GB if both NFS and SMB are used
  Network: 10 Gbps
</code>
  * Storage: Virtual disks backed by high-speed SAN LUNs (or local disks on the hypervisor host if using FPO).
  * Best Practice: Use physical mode Raw Device Mapping (RDM) or the virtualization platform's equivalent for best I/O performance and direct control over the LUNs from the Storage Scale VMs.
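
On the NSD server VMs, a large share of the 16 GB of vRAM typically goes to the Storage Scale pagepool. A minimal sketch of checking and raising it, assuming a value of 8G and reusing the node names from the NSD example further down (both the value and the node list are assumptions, not recommendations from this page):
<cli prompt='#'>
# mmlsconfig pagepool
# mmchconfig pagepool=8G -N gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo,gpfs04.gpfsint.labo
</cli>
Depending on the release and on free memory, the new value takes effect at the next daemon restart, or immediately when mmchconfig is run with the -i option.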

https://community.ibm.com/community/user/blogs/ramesh-krishnamaneni/2025/09/26/optimizing-ibm-storage-scale-in-ibm-cloud-vsi-vs-b

===== Features in IBM Storage Scale editions =====

^ Feature ^ Data Access ^ Data Management ^ Erasure Code Edition ^
| Multi-protocol scalable file service with simultaneous access to a common set of data | ✓ | ✓ | ✓ |
| Facilitate data access with a global namespace, massively scalable file system, quotas and snapshots, data integrity and availability, and filesets | ✓ | ✓ | ✓ |
| Simplify management with GUI | ✓ | ✓ | ✓ |
| Improved efficiency with QoS and compression | ✓ | ✓ | ✓ |
| Create optimized tiered storage pools based on performance, locality, or cost | ✓ | ✓ | ✓ |
| Simplify data management with Information Lifecycle Management (ILM) tools that include policy-based data placement and migration | ✓ | ✓ | ✓ |
| Enable worldwide data access using AFM asynchronous replication | ✓ | ✓ | ✓ |
| Asynchronous multi-site Disaster Recovery | | ✓ | ✓ |
| Multi-site replication with AFM to cloud object storage | | ✓ | ✓ |
| Protect data with native software encryption and secure erase, NIST compliant and FIPS certified | | ✓ | ✓ |
| File audit logging | | ✓ | ✓ |
| Watch folder | | ✓ | ✓ |
| Erasure coding | ESS only | ESS only | ✓ |

https://www.ibm.com/docs/en/storage-scale/5.2.3?topic=overview-storage-scale-product-editions
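
Which edition a given cluster is actually running can be checked from its license designation and from the installed license package; a minimal sketch (the output format and the exact package name differ by edition and release):
<cli prompt='#'>
# mmlslicense -L
# rpm -qa | grep gpfs.license
</cli>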
===== Filesystem Block size, best practice =====
  
https://www.ibm.com/docs/en/spectrum-scale/5.0.5?topic=disks-changing-your-nsd-configuration
Ex:
<cli prompt='#'>
# mmchnsd "data_nsd043:gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo"
</cli>
  
Maybe easier with a description file

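A minimal sketch of that approach, assuming a stanza file at /tmp/nsd_servers.stanza (the path is illustrative) and reusing the NSD and server names from the example above:
<cli prompt='#'>
# cat /tmp/nsd_servers.stanza
%nsd:
  nsd=data_nsd043
  servers=gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo
# mmchnsd -F /tmp/nsd_servers.stanza
</cli>
One %nsd stanza is needed per NSD whose server list has to change.
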
<cli prompt='#'>
# mmlsnsd -X
File system   Disk name     NSD volume ID      NSD servers