===== SVC Best practice with IBM storage =====

===== SVC Enhanced Stretch Cluster =====
+ | |||
+ | For stretched systems that use the enhanced configuration functions, **storage systems that are configured to one of the main sites (1 or 2) need be zoned only to be visible by the nodes in that site**. Storage systems in site 3 or storage systems that have no site that is defined must be zoned to all nodes. | ||

Note: ISLs must not be shared between private and public virtual fabrics.
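
As an illustration, a minimal Brocade FOS zoning sketch for a site-1 storage system that must see only the site-1 nodes; all alias names and WWPNs below are hypothetical:
<cli>
# aliases for the site-1 storage port and a site-1 SVC node port (example WWPNs)
alicreate "stor_site1_p1", "50:05:07:68:0b:21:11:01"
alicreate "svc_site1_n1_p1", "50:05:07:68:0b:11:22:01"
# zone the site-1 storage only to the site-1 nodes, never to the site-2 nodes
zonecreate "z_stor_site1", "stor_site1_p1; svc_site1_n1_p1"
cfgadd "cfg_prod", "z_stor_site1"
cfgenable "cfg_prod"
</cli>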

Configure an Enhanced Stretch Cluster:

https://www.ibm.com/docs/en/spectrumvirtualsoftw/8.2.x?topic=details-configuring-enhanced-stretched-system

===== Speed for mirroring =====

For information: for volume mirroring, the maximum speed is a sync rate of 100%, which corresponds to 64 MB/s.
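
As a worked example, synchronizing a 200 GB mirrored copy at 64 MB/s takes about 200 × 1024 / 64 = 3200 seconds, roughly 53 minutes. The rate is set per volume with **chvdisk**; a minimal sketch (the volume name is an example):
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>chvdisk -syncrate 100 l0000-DB01
</cli>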

==== SAN switches ====

**Buffer credits on SAN switches**\\
Use 40 buffer credits for 16 Gb or 32 Gb ports.
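
On Brocade FOS the buffer allocation of an F-port can be set with **portcfgfportbuffers**; a hedged sketch, assuming the port index 2/5 as an example:
<cli>
# allocate 40 buffer credits to F-port 2/5
portcfgfportbuffers --enable 2/5 40
# check buffer allocation and usage per port
portbuffershow
</cli>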

**Workload segregation**\\
On the storage, dedicate ports to high-speed traffic (for example 32 Gb) and use other ports for lower-speed traffic (8 Gb / 16 Gb), separating the two with zoning.

==== FCM optimization ====
When using **compression on storage**, reduce the block size on the host to a value lower than **64k**.

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limit is a general best practice and is not specific to any single host type.
**Details**\\
**Note:** You can make this change without rebooting the ESX/ESXi host or putting it in maintenance mode.
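
On ESXi the maximum transfer size is governed by the ''Disk.DiskMaxIOSize'' advanced setting (value in KB); a hedged example of applying the 256 KiB cap:
<cli>
# limit the largest single I/O ESXi sends to the array to 256 KB
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 256
# verify the new value
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
</cli>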

==== Linux optimization ====

FIXME v8.4 and higher: where possible, limit the maximum transfer size sent to the IBM FlashSystem to no more than **256 KiB**. This limit is a general best practice and is not specific to any single host type.

When using compression on SVC volumes, it is best to set the host I/O size to 64k.

The solution is to use udev. udev can ensure that all block devices connected to your VM, even if they are hot-plugged, get the same consistent maximum I/O size applied. All you need to do is create a file "71-block-max-sectors.rules" under /etc/udev/rules.d/ with the following line.
<cli>
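# cap the maximum I/O size at 64 KiB for every block device as it is added or changed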
ACTION=="add|change", SUBSYSTEM=="block", RUN+="/bin/sh -c '/bin/echo 64 > /sys%p/queue/max_sectors_kb'"
</cli>

If you don't have udev in your distribution, the alternative is to use rc.local and apply essentially the same command, for example "echo 1024 > /sys/block/sda/queue/max_sectors_kb" (use 64 to match the recommendation above).
<cli>
[root@prscale-b-02 block]# cat /sys/block/sda/queue/max_sectors_kb
1280
[root@prscale-b-02 block]# echo "64" > /sys/block/sda/queue/max_sectors_kb

[root@prscale-b-02 rules.d]# cat /etc/udev/rules.d/80-persistent-diskio-ibm.rules
ACTION=="add|change", SUBSYSTEM=="block", ATTR{device/model}=="*", ATTR{queue/nr_requests}="256", ATTR{device/queue_depth}="32", ATTR{queue/max_sectors_kb}="64"
</cli>

To list the device attributes that udev rules can match on, use the command **udevadm info -a**.
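
For example, to inspect a device's attributes and apply new rules without rebooting (the device path /dev/sda is an example):
<cli>
# show all attributes udev can match for /dev/sda
udevadm info -a -n /dev/sda
# reload rules and re-trigger block device events so the new limits apply
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
</cli>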

===== Perf issue on SAN / Storage =====

[[storage:brocade_pb#buffer_credit_problem|Brocade buffer credit]]

===== CPU usage with DR pool =====

When you use a DR (data reduction) pool, volume deletion drives high CPU usage: from below 20% in normal usage, you can easily reach 60% CPU during the deletion process.

To check deletions in progress there is only one way, and it gives no progress status!
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lsvdiskcopy | grep del
125 l0000-DB01  0 deleting yes no 0 Pool0 200.00GB striped no on balanced yes 0 Pool0 no yes
148 l0000-tap04 0 deleting yes no 0 Pool0 200.00GB striped no on balanced yes 0 Pool0 no yes
149 l0000-tap03 0 deleting yes no 0 Pool0 200.00GB striped no on balanced yes 0 Pool0 no yes
150 l0000-tap01 0 deleting yes no 0 Pool0 200.00GB striped no on balanced yes 0 Pool0 no yes
151 l0000-tap02 0 deleting yes no 0 Pool0 200.00GB striped no on balanced yes 0 Pool0 no yes
</cli>

As a rule of thumb, CPU usage that stays below 90% is not considered high and does not affect performance.
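
To watch node CPU while a deletion runs, the ''cpu_pc'' statistic of **lssystemstats** can be sampled; a minimal example:
<cli prompt='>'>
IBM_FlashSystem:V5100-02:superuser>lssystemstats -history cpu_pc
</cli>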