====== Spectrum Scale sizing ======
  
===== VM sizing for IBM Storage Scale =====

Sizing Summary for a Small Cluster

A common starting point for a small, moderately performing virtual Storage Scale cluster might be:
  * Minimum Nodes: 3 (for quorum and basic replication)
  * Per NSD Server VM:
<code>
  vCPU: 8 vCPUs
  vRAM: 16 GB
  Network: 2 x 25 Gbps vNICs (bonded for I/O)
</code>
  * Per Protocol node:
<code>
  vCPU: 1 vCPU
  vRAM: 64 GB, or 128 GB if both NFS and SMB are used
  Network: 10 Gbps
</code>
  * Storage: Virtual disks backed by high-speed SAN LUNs (or local disks on the hypervisor host if using FPO).
  * Best Practice: Use physical mode Raw Device Mapping (RDM) or the virtualization platform's equivalent for best I/O performance and direct control over the LUNs from the Storage Scale VMs.

https://community.ibm.com/community/user/blogs/ramesh-krishnamaneni/2025/09/26/optimizing-ibm-storage-scale-in-ibm-cloud-vsi-vs-b
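
A large part of an NSD server's memory demand typically comes from the GPFS pagepool, so the vRAM figure above should be sized with it in mind. A minimal sketch, assuming a user-defined node class ''nsdNodes'' (hypothetical) that groups the NSD server VMs, reserving roughly a third of the 16 GB vRAM:

<cli prompt='#'>
# mmchconfig pagepool=6G -N nsdNodes    # 'nsdNodes' is a hypothetical node class
# mmlsconfig pagepool
</cli>

Without further options the new value takes effect at the next restart of the GPFS daemon on those nodes.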

===== Features in IBM Storage Scale editions =====

^ Feature ^ Data Access ^ Data Management ^ Erasure Code Edition ^
| Multi-protocol scalable file service with simultaneous access to a common set of data | ✓ | ✓ | ✓ |
| Facilitate data access with a global namespace, massively scalable file system, quotas and snapshots, data integrity and availability, and filesets | ✓ | ✓ | ✓ |
| Simplify management with GUI | ✓ | ✓ | ✓ |
| Improved efficiency with QoS and compression | ✓ | ✓ | ✓ |
| Create optimized tiered storage pools based on performance, locality, or cost | ✓ | ✓ | ✓ |
| Simplify data management with Information Lifecycle Management (ILM) tools that include policy-based data placement and migration | ✓ | ✓ | ✓ |
| Enable worldwide data access using AFM asynchronous replication | ✓ | ✓ | ✓ |
| Asynchronous multi-site Disaster Recovery | | ✓ | ✓ |
| Multi-site replication with AFM to cloud object storage | | ✓ | ✓ |
| Protect data with native software encryption and secure erase, NIST compliant and FIPS certified | | ✓ | ✓ |
| File audit logging | | ✓ | ✓ |
| Watch folder | | ✓ | ✓ |
| Erasure coding | ESS only | ESS only | ✓ |

https://www.ibm.com/docs/en/storage-scale/5.2.3?topic=overview-storage-scale-product-editions
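
To check which edition a given cluster is actually running, the ''mmlslicense'' command reports the license information; depending on the release, the summary also shows the installed product edition:

<cli prompt='#'>
# mmlslicense -L
</cli>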
===== Filesystem Block size, best practice =====

Typically, metadata consumes between 1% and 5% of the filesystem space, but this can vary.
  
Depending on usage, you can choose a different **block size**.
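
For illustration (the device name ''gpfs01'' and the stanza file ''nsd.stanza'' are placeholders), the block size is fixed at file system creation time with ''mmcrfs -B'' and cannot be changed later without recreating the file system:

<cli prompt='#'>
# mmcrfs gpfs01 -F nsd.stanza -B 1M    # 1M data block size, fixed at creation
# mmlsfs gpfs01 -B                     # show the block size in use
# mmdf gpfs01                          # per-disk usage, including metadata disks
</cli>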
When creating NSDs, alternate the node position in the server list of each NSD.
If the same node is listed first in every NSD definition (for example 'gpfs1'), every NSD task (read, write, ...) is handled by that single NSD server as long as it is reachable, and you will run into performance problems.
Such a configuration can cause an overload situation on the affected server.

The NSD server sequence can be adjusted online with the **mmchnsd** command (see below):
https://www.ibm.com/docs/en/spectrum-scale/5.0.5?topic=disks-changing-your-nsd-configuration
Ex:
<cli prompt='#'>
# mmchnsd "data_nsd043:gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo"
</cli>

It may be easier to use a description (stanza) file.
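
A minimal sketch, assuming the same four servers as above plus a second, hypothetical NSD ''data_nsd044''; **mmchnsd** also accepts a stanza file via ''-F'':

<cli prompt='#'>
# cat rotate_nsd.stanza
%nsd: nsd=data_nsd043
  servers=gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo
%nsd: nsd=data_nsd044
  servers=gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo,gpfs04.gpfsint.labo
# mmchnsd -F rotate_nsd.stanza
</cli>

The resulting server order can then be checked with **mmlsnsd -X**: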
  
<cli prompt='#'>
# mmlsnsd -X
 File system   Disk name   NSD volume ID   NSD servers
------------------------------------------------------------------------------------------------