===== SVC Best practice with IBM storage =====

===== SVC Enhanced Stretch Cluster =====

For stretched systems that use the enhanced configuration functions, **storage systems that are configured to one of the main sites (1 or 2) need to be zoned only to the nodes in that site**. Storage systems in site 3, or storage systems that have no site defined, must be zoned to all nodes.

Note: ISLs must not be shared between private and public virtual fabrics.
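
As a sketch of such zoning on a Brocade switch (zonecreate, cfgadd and cfgenable are standard Fabric OS commands; the alias and config names here are hypothetical), a site-1 storage system is zoned only to the site-1 nodes:
<cli>
zonecreate "z_site1_stor_nodes", "site1_stor_p0; site1_node1_p0; site1_node2_p0"
cfgadd "prod_cfg", "z_site1_stor_nodes"
cfgenable "prod_cfg"
</cli>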

To configure an Enhanced Stretch Cluster, see:

https://www.ibm.com/docs/en/spectrumvirtualsoftw/8.2.x?topic=details-configuring-enhanced-stretched-system
===== Speed for mirroring =====
  
For information, for volume mirroring the maximum speed is a sync rate of 100%, corresponding to 64MB/s.
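
The sync rate is set per volume with chvdisk (the -syncrate parameter is standard; the volume name below is a placeholder):
<cli prompt='>'>
V5200 > chvdisk -syncrate 100 myvolume01
</cli>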

==== SAN switches ====

**buffer credits on SAN switches**
  40 for 16Gb or 32Gb links

**workload segregation**
  dedicate storage ports to high-speed traffic (e.g. 32Gb)
  and keep other ports for lower speeds (8Gb / 16Gb), enforced by zoning
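
On Brocade switches, F-port buffer credits can be allocated per port with portcfgfportbuffers (a sketch; the slot/port index below is a placeholder):
<cli>
portcfgfportbuffers --enable 1/5 40    (allocate 40 credits to slot 1, port 5)
portbuffershow                         (verify the per-port buffer allocation)
</cli>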
  
==== FCM optimization ====
<cli prompt='>'>
V5200 > chsystem -hostunmap off
V5200 > chsystem -backendunmap on
</cli>
The host unmap feature in Spectrum Virtualize allows host filesystems to inform the storage system that a region of storage is no longer required and can be cleaned up to make space for new data to be stored.
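
To verify the current settings (a sketch; the attribute names are assumed to mirror the chsystem parameters and may vary by code level):
<cli prompt='>'>
V5200 > lssystem | grep unmap
host_unmap off
backend_unmap on
</cli>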
==== VMware optimization ====
  
When using **compression on storage**, reduce the block size on the host to a value lower than **64k**.

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limit is a general best practice and is not specific to compressed volumes only.
  
**Details**\\
  
One way to diagnose the problem is by looking at latency statistics reported by esxtop. Beginning in ESX/ESXi 3.5, esxtop includes several detailed storage statistics that report time spent in various components. If storage devices are a problem, esxtop displays high device latencies. For more information about using esxtop, see Using esxtop to identify storage performance issues (1008205).
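
For reference, a quick esxtop session for this check (key bindings and counter names as in esxtop's disk views; acceptable thresholds depend on your environment):
<cli>
esxtop          (then press 'd' for disk adapters, or 'u' for disk devices)
DAVG/cmd        latency reported by the storage device
KAVG/cmd        latency added inside the VMkernel
GAVG/cmd        total latency seen by the guest (DAVG + KAVG)
</cli>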
  
Running the following command on the host containing the virtual machine reports the default disk I/O block size:
<cli>
Description: Max Disk READ/WRITE I/O size before splitting (in KB)
</cli>
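
The option can be listed from the ESXi shell with esxcli; the Description line above is part of this command's output (the /Disk/DiskMaxIOSize option path is standard):
<cli>
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
</cli>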

As you can see, the default block size here is 32MB; we have to change it to 64kB.

To reduce the size of I/O requests passed to the storage device using the VMware Infrastructure/vSphere Client:
<cli>
    Go to Host > Configuration.

    Click Advanced Settings.

    Go to Disk.

    Change Disk.DiskMaxIOSize to 64.
</cli>

**Note:** You can make this change without rebooting the ESX/ESXi host and without putting it in maintenance mode.
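
The same change can also be scripted from the ESXi shell with esxcli, for example:
<cli>
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 64
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
</cli>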

==== Linux optimization ====

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limit is a general best practice and is not specific to compressed volumes only.

When using compression on SVC volumes, it is best to set the I/O size to 64k.

The answer is to use udev. udev can ensure that all block devices connected to your VM, even if they are hot-plugged, get the same consistent maximum I/O size applied. All you need to do is create a file "71-block-max-sectors.rules" under /etc/udev/rules.d/ with the following line.
<cli>
ACTION=="add|change", SUBSYSTEM=="block", RUN+="/bin/sh -c '/bin/echo 64 > /sys%p/queue/max_sectors_kb'"
</cli>

If you don't have udev in your distribution, the alternative is to use rc.local and apply essentially the same command, for example: "echo 64 > /sys/block/sda/queue/max_sectors_kb"
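
A minimal rc.local sketch that applies the 64 KB limit to every sd* disk present at boot (the device glob is an assumption; hot-plugged disks are not covered, which is why udev is preferred):
<cli>
for q in /sys/block/sd*/queue/max_sectors_kb; do
    echo 64 > "$q"
done
</cli>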
<cli>
[root@prscale-b-02 block]# cat /sys/block/sda/queue/max_sectors_kb
1280
[root@prscale-b-02 block]# echo "64" > /sys/block/sda/queue/max_sectors_kb

[root@prscale-b-02 rules.d]# cat /etc/udev/rules.d/80-persistent-diskio-ibm.rules
ACTION=="add|change", SUBSYSTEM=="block", ATTR{device/model}=="*", ATTR{queue/nr_requests}="256", ATTR{device/queue_depth}="32", ATTR{queue/max_sectors_kb}="64"
</cli>
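
To apply a new rule without rebooting, reload the rules and re-trigger the block devices (standard udevadm usage):
<cli>
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
cat /sys/block/sda/queue/max_sectors_kb
</cli>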

Use the command **udevadm info -a -n /dev/sda** to list the device attributes that udev rules can match on.
===== Performance issue on SAN / Storage =====

[[storage:brocade_pb#buffer_credit_problem|Brocade buffer credit]]

===== CPU usage with DR pool =====

When you use a DR pool, volume deletion drives high CPU usage: even if normal usage is below 20%, you can easily reach 60% CPU during the deletion process.

To check deletions in progress, there is only one way, and it gives no progress status!
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lsvdiskcopy | grep del
125  l0000-DB01   0  deleting yes  no  0  Pool0  200.00GB striped no  on  balanced  yes  0  Pool0  no  yes
148  l0000-tap04  0  deleting yes  no  0  Pool0  200.00GB striped no  on  balanced  yes  0  Pool0  no  yes
149  l0000-tap03  0  deleting yes  no  0  Pool0  200.00GB striped no  on  balanced  yes  0  Pool0  no  yes
150  l0000-tap01  0  deleting yes  no  0  Pool0  200.00GB striped no  on  balanced  yes  0  Pool0  no  yes
151  l0000-tap02  0  deleting yes  no  0  Pool0  200.00GB striped no  on  balanced  yes  0  Pool0  no  yes
</cli>

Usually, if CPU usage is not above 90%, it is not considered high and does not affect performance.
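
To watch CPU while a deletion runs, the cpu_pc statistic can be checked (a sketch; lssystemstats is a standard command, but sampling behaviour depends on code level):
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lssystemstats | grep cpu_pc
</cli>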