SVC Best practice with IBM storage

For information: for volume mirroring, the maximum speed is a sync rate of 100%, which corresponds to 64 MB/s.

FCM optimization

change stats interval

Change the performance collection interval; an interval of 5 minutes gives a better view.

V5200 > startstats -interval 5
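To verify the setting afterwards, the statistics fields of lssystem can be checked (a quick sketch; field names such as statistics_frequency are as seen on recent Spectrum Virtualize code levels and may vary):

V5200 > lssystem | grep statistics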

change IO rate

From the provided data, the average write response time per vdisk is below 2.5 ms. Vdisk id 45, which has the highest average response times, also shows a high write data rate and I/O rate for a specific time frame, leading to a higher write response time at node level too.

I would suggest implementing a throttle for the volume to limit the I/O a bit and see if this helps.

In regard to your question about the data throughput limit, the answer is that compressed volumes work best with throughput below 350 MB/s. The data below shows that for V5200-2 you see slightly higher response times because the throughput is about 400 MB/s. If you want to achieve a slightly better response time, you may want to throttle the throughput to less than 350 MB/s. The command is described here:

https://www.ibm.com/docs/en/flashsystem-5x00/8.3.1?topic=csc-mkthrottle-3

Example :

V5200 > mkthrottle -type vdisk -bandwidth 300 -vdisk 49
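To review existing throttles or remove one later, lsthrottle and rmthrottle can be used (a brief sketch; the throttle ID is taken from the lsthrottle output):

V5200 > lsthrottle
V5200 > rmthrottle throttle_id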

Enable unmap

Disable the host unmap feature and enable backend unmap (host unmap can affect performance).

V5200 > chsystem -hostunmap off
V5200 > chsystem -backendunmap on

The host unmap feature in Spectrum Virtualize allows host filesystems to inform the storage system that a region of storage is no longer required and can be cleaned up to make space for new data to be stored. It is highly desirable when using data reduction (FCMs or Data Reduction Pools) to avoid running out of space.
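To confirm the current unmap settings, the system properties can be filtered (a minimal check; the exact field names, such as host_unmap and backend_unmap, depend on the code level):

V5200 > lssystem | grep unmap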

Move LUN/volume from one pool to another

The best way is addvdiskcopy, which is a parallel process whose speed can be controlled with the sync rate parameter, while movevdisk is much slower, moving only 1 or 2 vdisks at the same time.

Command equivalent to movevdisk:

SVC > addvdiskcopy -mdiskgrp mdisk_group_name -autodelete vdisk_name
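To monitor the copy synchronization and adjust its speed, the following commands can be used (a short sketch; vdisk_name is a placeholder, and -syncrate takes a value up to 100, where 100 corresponds to 64 MB/s as noted above):

SVC > lsvdisksyncprogress vdisk_name
SVC > chvdisk -syncrate 50 vdisk_name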

SVC and V7000

sg247521 - IBM System Storage SAN Volume Controller Best Practices and Performance Guidelines.pdf (www.redbooks.ibm.com) p61

When you plan to attach a V7000 on the SAN Volume Controller, create the arrays (MDisks) manually (by using a CLI), instead of using the V7000 settings. Select one disk drive per enclosure. When possible, ensure that each enclosure that is selected is part of the same chain. When you define V7000 internal storage, create a 1-to-1 relationship. That is, create one storage pool to one MDisk (array) to one volume. Then, map the volume to the SAN Volume Controller host.
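A rough CLI sketch of this 1-to-1 layout on the V7000 follows (all names, drive IDs and sizes are illustrative placeholders, not a definitive procedure):

V7000 > mkmdiskgrp -name SVC_Pool01 -ext 256
V7000 > mkarray -level raid5 -drive 0:1:2:3:4 SVC_Pool01
V7000 > mkvdisk -mdiskgrp SVC_Pool01 -iogrp 0 -size 2 -unit tb -name SVC_Vol01
V7000 > mkvdiskhostmap -host SVC_Cluster SVC_Vol01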

Important: The extent size value for SAN Volume Controller should be 1 GB. The extent size value for the V7000 should be 256 MB. These settings prevent potential negative stripe-on-stripe effects. For more information, see the redbook (previously mentioned) and the blog post "Configuring IBM Storwize V7000 and SVC for Optimal Performance" at:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/storagevirtualization/entry/configuring_ibm_storwize_v7000_and_svc_for_optimal_performance_part_121?lang=en

Best practice on the host is to use 4 paths, with a maximum of 8 paths per volume (LUN). Depending on the read/write block size, you will see a decrease in IOPS of 0 to 10% when you go from 4 to 8 paths.

You need at least 4 arrays per V7000 to get maximum CPU core performance.

In other words, if you only run one array, you will get 1/4 of the performance of the system. Optimal configurations require at least 4 arrays to ensure maximum core usage. Since arrays can be accessed through both nodes in the V7000 system, 4 is enough: on each node, one array will be assigned and processed through one core. http://rogerluethy.wordpress.com/2012/04/20/configuring-ibm-storwize-v7000-and-svc-for-optimal-performance-part-1/

SVC and DS3500 DS5100 DS5300 ...

The cluster has detected that an IBM DS series disk controller's configuration is not supported by the cluster. The disk controller is operating in RDAC mode. The disk controller might appear to be operating with the cluster; however, the configuration is unsupported because it is known not to work with the cluster.

User response:

  Using the IBM DS series console, ensure that the host type is set to IBM TS SAN VC and that the AVT option is enabled. (The AVT and RDAC options are mutually exclusive.)
  Mark the error that you have just repaired as "fixed". If the problem has not been fixed, it will be logged again; this could take a few minutes.
  Go to repair verification MAP.

VMware optimization

When using compression on storage, reduce the maximum I/O size on the host to 64 KB or lower.

Details
To tune and optimize storage performance for ESX/ESXi, reduce the size of I/O requests passed to the storage device.

Solution
Many applications are designed to issue large I/O requests for higher bandwidth. ESX/ESXi 3.5 and later versions support increased limits for the maximum I/O request size passed to storage devices. These versions of ESX pass I/O requests as large as 32767 KB directly to the storage device. I/O requests larger than this are split into several, smaller-sized I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB, or 512KB, depending on the array and configuration). As a fix for this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

If you have measured decreased storage performance in ESX/ESXi 3.5 and later hosts compared to a similar ESX 3.0.x system, try reducing the maximum I/O size as described below and see if performance improves. If your storage device does not have this problem (or if the problem does not go away when you reduce the maximum I/O size), you are better off leaving the maximum I/O size at its default 32767 KB setting, because it increases performance and/or lowers CPU utilization on your system.

One way to diagnose the problem is by looking at latency statistics reported by esxtop. Beginning in ESX/ESXi 3.5, esxtop includes several detailed storage statistics that report time spent in various components. If storage devices are a problem, esxtop displays high device latencies. For more information about using esxtop, see Using esxtop to identify storage performance issues (1008205).

Running this command on the host containing the virtual machine reports the current maximum disk I/O size:

# esxcli system settings advanced list -o "/Disk/DiskMaxIOSize" 
Path: /Disk/DiskMaxIOSize
Type: integer
Int Value: 32767
Default Int Value: 32767
Min Value: 32
Max Value: 32767
String Value:
Default String Value:
Valid Characters:
Description: Max Disk READ/WRITE I/O size before splitting (in KB)

As you can see, the default maximum I/O size is 32767 KB (about 32 MB); we have to change it to 64 KB.

To reduce the size of I/O requests passed to the storage device using the VMware Infrastructure/vSphere Client:

    Go to Host > Configuration. 

    Click Advanced Settings.

    Go to Disk.

    Change Disk.DiskMaxIOSize to 64

Note: You can make this change without rebooting the ESX/ESXi host or without putting the ESX/ESXi host in maintenance mode.
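The same change can also be applied from the command line, using the esxcli namespace shown above (a quick sketch):

# esxcli system settings advanced set -o "/Disk/DiskMaxIOSize" -i 64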

Linux optimization

When using compression on SVC volumes, it is best to set the maximum I/O size to 64 KB.

The answer is to use udev. udev can ensure that all block devices connected to your VM, even if they are hot-plugged, get the same consistent maximum I/O size applied. All you need to do is create a file "71-block-max-sectors.rules" under /etc/udev/rules.d/ with the following line:

ACTION=="add|change", SUBSYSTEM=="block", RUN+="/bin/sh -c '/bin/echo 64 > /sys%p/queue/max_sectors_kb'"

If you don't have udev in your distribution, then the alternative is to use rc.local and essentially apply the same command, for example: echo 64 > /sys/block/sda/queue/max_sectors_kb

[root@prscale-b-02 block]# cat /sys/block/sda/queue/max_sectors_kb
1280
[root@prscale-b-02 block]# echo "64" > /sys/block/sda/queue/max_sectors_kb

[root@prscale-b-02 rules.d]# cat /etc/udev/rules.d/80-persistent-diskio-ibm.rules
ACTION=="add|change", SUBSYSTEM=="block", ATTR{device/model}=="*", ATTR{queue/nr_requests}="256", ATTR{device/queue_depth}="32", ATTR{queue/max_sectors_kb}="64"
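After creating or changing a rules file, the rules can be reloaded and re-applied without a reboot (standard udev commands):

# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block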

Perf issue on SAN / Storage

CPU usage with DR pool

When you use a DR pool, volume deletion will consume high CPU: from less than 20% in normal usage, you can easily reach 60% CPU during the deletion process.

To check a deletion there is only one way, and it gives no progress status:

IBM_FlashSystem:V5100-02:superuser>lsvdiskcopy | grep del
125      l0000-DB01        0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
148      l0000-tap04       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
149      l0000-tap03       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
150      l0000-tap01       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes
151      l0000-tap02       0       deleting yes  no      0            Pool0          200.00GB striped no      on        balanced         yes             0                   Pool0                 no      yes

Usually, if the CPU usage is not above 90%, it is not considered high and does not affect performance.
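To watch the CPU usage from the CLI while a deletion is running, the system statistics can be queried (a small sketch; lssystemstats reports the cpu_pc statistic on recent code levels):

IBM_FlashSystem:V5100-02:superuser>lssystemstats | grep cpu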
