===== SVC Best practice with IBM storage =====

===== SVC Enhanced Stretch Cluster =====

For stretched systems that use the enhanced configuration functions, **storage systems that are configured to one of the main sites (1 or 2) need to be zoned only to the nodes in that site**. Storage systems in site 3, or storage systems that have no site defined, must be zoned to all nodes.

Note: ISLs must not be shared between private and public virtual fabrics.

Configure Enhanced Stretch Cluster:

https://www.ibm.com/docs/en/spectrumvirtualsoftw/8.2.x?topic=details-configuring-enhanced-stretched-system

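As a minimal sketch only (node, controller and site names below are illustrative, not taken from this page; follow the IBM documentation linked above for the full procedure including quorum configuration), assigning sites to nodes and back-end controllers and enabling the stretched topology looks like this:
<cli prompt='>'>
SVC > chnode -site 1 node1
SVC > chnode -site 2 node2
SVC > chcontroller -site 1 controller0
SVC > chsystem -topology stretched
</cli>
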
===== Speed for mirroring =====

For information, for volume mirroring the maximum speed is a sync rate of 100%, which corresponds to 64 MB/s.

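For example (the volume name is a placeholder), the synchronization rate of an existing mirrored volume can be raised to the maximum with:
<cli prompt='>'>
SVC > chvdisk -syncrate 100 my_mirrored_vol
</cli>
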
==== SAN switches ====

**Buffer credits on SAN switches**
  40 buffer credits for 16Gb or 32Gb ports

**Workload segregation**
  dedicate storage ports to high-speed traffic (for example 32Gb)
  and keep other ports for lower-speed hosts (8Gb / 16Gb), separated by zoning

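As an illustrative sketch for a Brocade switch only (port index and credit count are assumptions to adapt to your fabric; check the command reference for your FOS level), F-Port buffer credits can be set and then verified with:
<cli prompt='>'>
> portcfgfportbuffers --enable 17 40
> portbuffershow
</cli>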
  
==== FCM optimization ====

=== Enable unmap ===

Disable the host unmap feature and enable backend unmap (host unmap can affect performance):
<cli prompt='>'>
V5200 > chsystem -hostunmap off
V5200 > chsystem -backendunmap on
</cli>
The host unmap feature in Spectrum Virtualize allows host filesystems to inform the storage system that a region of storage is no longer required and can be cleaned up to make space for new data to be stored.
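
To verify the resulting settings, the unmap-related fields can be checked in the ''lssystem'' output (a simple grep sketch; exact field names can vary by code level):
<cli prompt='>'>
V5200 > lssystem | grep unmap
</cli>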
  
  
==== VMware optimization ====

When using **compression on storage**, reduce the block size on the host to a value lower than **64k**.

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limitation is general best practice and not specific to only 

**Details**\\
To tune and optimize storage performance for ESX/ESXi, reduce the size of the I/O requests passed to the storage device.

**Solution**\\
Many applications are designed to issue large I/O requests for higher bandwidth. ESX/ESXi 3.5 and later versions support increased limits for the maximum I/O request size passed to storage devices. These versions of ESX pass I/O requests as large as 32767 KB directly to the storage device. I/O requests larger than this are split into several smaller I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB, or 512KB, depending on the array and configuration). As a fix for this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

If you have measured decreased storage performance on ESX/ESXi 3.5 and later hosts, compared to a similar ESX 3.0.x system, try reducing the maximum I/O size as described below and see if performance improves. If your storage device does not have this problem (or if the problem does not go away when you reduce the maximum I/O size), you are better off leaving the maximum I/O size at its default 32767 KB setting, because that increases performance and/or lowers CPU utilization on your system.

One way to diagnose the problem is by looking at the latency statistics reported by esxtop. Beginning in ESX/ESXi 3.5, esxtop includes several detailed storage statistics that report time spent in various components. If storage devices are a problem, esxtop displays high device latencies. For more information about using esxtop, see Using esxtop to identify storage performance issues (1008205).

Running the following command on the host that contains the virtual machine reports the current maximum disk I/O size:
<cli prompt='#'>
# esxcli system settings advanced list -o "/Disk/DiskMaxIOSize"
Path: /Disk/DiskMaxIOSize
Type: integer
Int Value: 32767
Default Int Value: 32767
Min Value: 32
Max Value: 32767
String Value:
Default String Value:
Valid Characters:
Description: Max Disk READ/WRITE I/O size before splitting (in KB)
</cli>

As you can see, the default maximum I/O size is 32767 KB (about 32 MB); it has to be lowered to 64 KB.

To reduce the size of I/O requests passed to the storage device using the VMware Infrastructure/vSphere Client:
<cli>
Go to Host > Configuration.
Click Advanced Settings.
Go to Disk.
Change Disk.DiskMaxIOSize to 64
</cli>

**Note:** You can make this change without rebooting the ESX/ESXi host or putting it in maintenance mode.
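
Alternatively, assuming command-line access to the host, the same setting can be changed with esxcli (a sketch; verify the value afterwards with the list command shown above):
<cli prompt='#'>
# esxcli system settings advanced set -o "/Disk/DiskMaxIOSize" -i 64
</cli>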

==== Linux optimization ====

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limitation is general best practice and not specific to only 

When using compression on SVC volumes, it is best to set the maximum I/O size to 64k.

The answer is to use udev. Udev can ensure that all block devices connected to your VM, even if they are hot-plugged, get the same consistent maximum I/O size applied. All you need to do is create a file "71-block-max-sectors.rules" under /etc/udev/rules.d/ with the following line.
<cli>
ACTION=="add|change", SUBSYSTEM=="block", RUN+="/bin/sh -c '/bin/echo 64 > /sys%p/queue/max_sectors_kb'"
</cli>

If you don't have udev in your distribution, the alternative is to use rc.local and apply essentially the same command, for example "echo 64 > /sys/block/sda/queue/max_sectors_kb".
<cli>
[root@prscale-b-02 block]# cat /sys/block/sda/queue/max_sectors_kb
1280
[root@prscale-b-02 block]# echo "64" > /sys/block/sda/queue/max_sectors_kb

[root@prscale-b-02 rules.d]# cat /etc/udev/rules.d/80-persistent-diskio-ibm.rules
ACTION=="add|change", SUBSYSTEM=="block", ATTR{device/model}=="*", ATTR{queue/nr_requests}="256", ATTR{device/queue_depth}="32", ATTR{queue/max_sectors_kb}="64"
</cli>

To list the device attributes that can be matched in udev rules, use the command: **udevadm info -a**
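
After creating or editing the rules file, a sketch of how the rules can be reloaded and re-applied without a reboot (device /dev/sda is just the example used above):
<cli prompt='#'>
# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block
# cat /sys/block/sda/queue/max_sectors_kb
</cli>
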
===== Perf issue on SAN / Storage =====

[[storage:brocade_pb#buffer_credit_problem|Brocade buffer credit]]

===== CPU usage with DR pool =====

When you use a DR pool, volume deletion drives high CPU usage: a system that runs below 20% CPU in normal usage can easily reach 60% during the deletion process.

To check which deletions are running, there is only one way, and it reports no progress status!
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lsvdiskcopy | grep del
125      l0000-DB01        0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
148      l0000-tap04       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
149      l0000-tap03       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
150      l0000-tap01       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
151      l0000-tap02       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
</cli>

Usually, as long as CPU usage stays below 90%, it is not considered high and does not affect performance.
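
To keep an eye on CPU usage while deletions are running, a simple sketch using the system statistics command and grepping for the CPU fields:
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lssystemstats | grep cpu
</cli>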