===== VM sizing for IBM Storage Scale =====
Sizing Summary for a Small Cluster
<code>
vRAM: 16 GB
Network: 2 x 25 Gbps vNICs (bonded for I/O)
</code>
  * Per Protocol node:
<code>
vCPU: 1 vCPU
vRAM: 64 GB, or 128 GB if both NFS and SMB are used
Network: 10 Gbps
</code>
  * Storage: Virtual disks backed by high-speed SAN LUNs (or local disks on the hypervisor host if using FPO).
  * Best Practice: Use physical mode Raw Device Mapping (RDM) or the virtualization platform's equivalent for best I/O performance and direct control over the LUNs from the Storage Scale VMs (see the sketch below).
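As an illustration of that best practice on VMware ESXi, a minimal sketch for creating a physical-mode (pass-through) RDM pointer from the ESXi shell; the NAA device ID, datastore and VM names are placeholders, not values from this page:
<code>
# List the SAN LUNs visible to the host and pick the one destined for Storage Scale
ls /vmfs/devices/disks/ | grep -i naa

# Create a physical compatibility mode RDM pointer file (-z = pass-through:
# SCSI commands go straight to the LUN, which Storage Scale NSDs prefer)
vmkfstools -z /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/<datastore>/<vm_name>/<vm_name>_rdm.vmdk
</code>
The resulting .vmdk is then attached to the Storage Scale VM (typically on a dedicated paravirtual SCSI controller) before the NSD is defined on it.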
https://community.ibm.com/community/user/blogs/ramesh-krishnamaneni/2025/09/26/optimizing-ibm-storage-scale-in-ibm-cloud-vsi-vs-b
| + | |||
| + | ===== Features in IBM Storage Scale editions ===== | ||
| + | |||
^ Feature ^ Data Access ^ Data Management ^ Erasure Code Edition ^
| Multi-protocol scalable file service with simultaneous access to a common set of data | ✓ | ✓ | ✓ |
| Facilitate data access with a global namespace, massively scalable file system, quotas and snapshots, data integrity and availability, and filesets | ✓ | ✓ | ✓ |
| Simplify management with GUI | ✓ | ✓ | ✓ |
| Improved efficiency with QoS and compression | ✓ | ✓ | ✓ |
| Create optimized tiered storage pools based on performance, locality, or cost | ✓ | ✓ | ✓ |
| Simplify data management with Information Lifecycle Management (ILM) tools that include policy-based data placement and migration | ✓ | ✓ | ✓ |
| Enable worldwide data access using AFM asynchronous replication | ✓ | ✓ | ✓ |
| Asynchronous multi-site Disaster Recovery | | ✓ | ✓ |
| Multi-site replication with AFM to cloud object storage | | ✓ | ✓ |
| Protect data with native software encryption and secure erase, NIST compliant and FIPS certified | | ✓ | ✓ |
| File audit logging | | ✓ | ✓ |
| Watch folder | | ✓ | ✓ |
| Erasure coding | ESS only | ESS only | ✓ |
| + | |||
| + | https://www.ibm.com/docs/en/storage-scale/5.2.3?topic=overview-storage-scale-product-editions | ||
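To check which edition a given cluster is actually running, a quick sketch; it assumes an RPM-based install with the usual gpfs.license.* package naming, and the mmlslicense output format varies by release:
<code>
# The installed license package encodes the edition
# (gpfs.license.da = Data Access, gpfs.license.dm = Data Management)
rpm -qa | grep gpfs.license

# Per-node license designations known to the cluster
mmlslicense -L
</code>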
===== Filesystem Block size, best practice =====