====== Spectrum Scale sizing ======
  
===== VM sizing for IBM Storage Scale =====

**Sizing summary for a small cluster**

A common starting point for a small, moderately performing virtual Storage Scale cluster might be:
  * Minimum nodes: 3 (for quorum and basic replication)
  * Per NSD server VM:
<code>
  vCPU: 8 vCPUs
  vRAM: 16 GB
  Network: 2 x 25 Gbps vNICs (bonded for I/O)
</code>
  * Per protocol node:
<code>
  vCPU: 1 vCPU
  vRAM: 64 GB, or 128 GB if both NFS and SMB are used
  Network: 10 Gbps
</code>
  * Storage: Virtual disks backed by high-speed SAN LUNs (or local disks on the hypervisor host if using FPO).
  * Best practice: Use physical mode Raw Device Mapping (RDM), or the virtualization platform's equivalent, for best I/O performance and direct control over the LUNs from the Storage Scale VMs.

https://community.ibm.com/community/user/blogs/ramesh-krishnamaneni/2025/09/26/optimizing-ibm-storage-scale-in-ibm-cloud-vsi-vs-b
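
A large share of the memory on the NSD server VMs typically goes to the Storage Scale pagepool, so it is usually tuned together with the vRAM figures above. A minimal sketch, assuming the system-defined ''nsdNodes'' node class and an illustrative 8 GiB value (not a recommendation derived from the sizing above); the new value takes effect after GPFS is restarted on those nodes:

<cli prompt='#'>
# mmchconfig pagepool=8G -N nsdNodes
# mmlsconfig pagepool
</cli>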

===== Features in IBM Storage Scale editions =====

^ Feature ^ Data Access ^ Data Management ^ Erasure Code Edition ^
| Multi-protocol scalable file service with simultaneous access to a common set of data | ✓ | ✓ | ✓ |
| Facilitate data access with a global namespace, massively scalable file system, quotas and snapshots, data integrity and availability, and filesets | ✓ | ✓ | ✓ |
| Simplify management with GUI | ✓ | ✓ | ✓ |
| Improved efficiency with QoS and compression | ✓ | ✓ | ✓ |
| Create optimized tiered storage pools based on performance, locality, or cost | ✓ | ✓ | ✓ |
| Simplify data management with Information Lifecycle Management (ILM) tools that include policy-based data placement and migration | ✓ | ✓ | ✓ |
| Enable worldwide data access using AFM asynchronous replication | ✓ | ✓ | ✓ |
| Asynchronous multi-site Disaster Recovery | | ✓ | ✓ |
| Multi-site replication with AFM to cloud object storage | | ✓ | ✓ |
| Protect data with native software encryption and secure erase, NIST compliant and FIPS certified | | ✓ | ✓ |
| File audit logging | | ✓ | ✓ |
| Watch folder | | ✓ | ✓ |
| Erasure coding | ESS only | ESS only | ✓ |

https://www.ibm.com/docs/en/storage-scale/5.2.3?topic=overview-storage-scale-product-editions
===== Filesystem Block size, best practice =====

Typically, metadata is between 1 and 5% of the filesystem space, but this can vary.
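
To check how much space metadata actually uses on an existing filesystem, **mmdf** reports per-disk and per-pool usage; the ''-m'' option limits the listing to disks that can hold metadata (''fs1'' below is a placeholder filesystem name):

<cli prompt='#'>
# mmdf fs1 -m
# mmdf fs1 --block-size auto
</cli>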
  
Depending on usage, you can use different **block sizes**:
|Enterprise File (Misc Projects, data sharing)| Other Storage|256KiB metadata and data|
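
The data and metadata block sizes are fixed when the filesystem is created, so they should be chosen up front. A minimal **mmcrfs** sketch, assuming an NSD stanza file already exists and using illustrative values rather than recommendations from the table above (''--metadata-block-size'' only applies when the system pool holds metadata only):

<cli prompt='#'>
# mmcrfs fs1 -F /tmp/nsd_stanza.txt -B 4M --metadata-block-size 256K
# mmlsfs fs1 -B
</cli>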
  
===== Customization =====

==== NSD access ====

When creating NSDs, alternate the position of the nodes in each NSD server list.
If the same node is listed first in every NSD definition, it will be the only server used and you will run into performance problems.

If, for example, 'gpfs1' is the first server in every NSD definition, every NSD operation (read, write, ...) has to be handled by that NSD server as long as it is reachable.
Such a configuration can overload the affected server.
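
A minimal NSD stanza sketch showing this server rotation at creation time; the device paths and stanza file name are hypothetical, and the host names reuse those from the listing further down:

<code>
%nsd: device=/dev/mapper/lun043
  nsd=data_nsd043
  servers=gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo
  usage=dataAndMetadata
%nsd: device=/dev/mapper/lun044
  nsd=data_nsd044
  servers=gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo
  usage=dataAndMetadata
</code>
<cli prompt='#'>
# mmcrnsd -F /tmp/nsd_stanza.txt
</cli>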

The NSD server order can be changed online with the **mmchnsd** command (see below):
https://www.ibm.com/docs/en/spectrum-scale/5.0.5?topic=disks-changing-your-nsd-configuration
Example:
<cli prompt='#'>
# mmchnsd "data_nsd043:gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo"
</cli>

It may be easier to use a description file. First, display the current NSD server order with **mmlsnsd -X**:

<cli prompt='#'>
# mmlsnsd -X
 File system   Disk name     NSD volume ID      NSD servers
------------------------------------------------------------------------------------------------
 cases         data_nsd043   C0A80017543D01BC   gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo
 cases         data_nsd044   C0A80018543CE5A2   gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo
 cases         data_nsd045   C0A80017543D01C3   gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo,gpfs04.gpfsint.labo
 cases         data_nsd046   C0A80018543CE5A8   gpfs02.gpfsint.labo,gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo
 cases         data_nsd047   C0A80017543D01C9   gpfs03.gpfsint.labo,gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo
</cli>
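
As a sketch of the description-file approach mentioned above, **mmchnsd -F** reads ''%nsd'' stanzas containing the new server order for each NSD (the file path is hypothetical):

<code>
%nsd: nsd=data_nsd043
  servers=gpfs04.gpfsint.labo,gpfs01.gpfsint.labo,gpfs02.gpfsint.labo,gpfs03.gpfsint.labo
</code>
<cli prompt='#'>
# mmchnsd -F /tmp/nsd_change.txt
# mmlsnsd -X
</cli>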