====== AIX tuning performance ======

===== Network tuning =====

https://www.ibm.com/support/pages/node/7038687

https://www.ibm.com/support/pages/node/6410510

For LPARs:
<cli>
no -p -o tcp_sendspace=1048576
no -p -o tcp_recvspace=1048576
no -p -o sb_max=2097152
no -p -o rfc1323=1
no -p -o tcp_nodelayack=0
no -p -o sack=1
chdev -l enX -a mtu_bypass=on
no -p -o tcp_init_window=10
no -p -o rfc2414=1
no -p -o tcp_cubic=1
</cli>
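
To check that the values are active after applying them (a quick verification sketch; en0 stands for the enX interface used above):
<cli prompt='#'>
# no -a | grep -E "tcp_sendspace|tcp_recvspace|sb_max|rfc1323|tcp_nodelayack|sack|tcp_init_window|rfc2414|tcp_cubic"
# lsattr -El en0 -a mtu_bypass
</cli>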
===== VIOS tuning =====
http://gibsonnet.net/blog/dwarchive/New%20FC%20adapter%20port%20queue%20depth%20tuning%20(Chris%27s%20AIX%20Blog).html

INFO
On each parameter that can be changed, you can check the allowed values with lsattr -R:
lsattr -Rl fcs0 -a max_npivs
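
To review all current attribute values of an adapter at once (fcs0 is used as an example):
<cli prompt='#'>
# lsattr -El fcs0
</cli>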
==== disk ====

=== queue_depth ===

For the logical drive (known as the hdisk in AIX), the setting is the queue_depth attribute, as shown in the following example:
# chdev -l hdiskX -a queue_depth=Y -P
In this example, X is the hdisk number and Y is the queue_depth value you want to set.

For a high-volume transaction workload of small random transfers, try a queue_depth value of 25 or more. For large sequential workloads, performance is better with small queue depths, such as a value of 4.
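
To check whether the current queue_depth is a limiting factor, you can look at the defined value, the allowed range, and the service-queue statistics reported by iostat -D (hdisk0 is used as an example; a steadily growing sqfull counter suggests the queue is too small):
<cli prompt='#'>
# lsattr -El hdisk0 -a queue_depth
# lsattr -Rl hdisk0 -a queue_depth
# iostat -D hdisk0 5 3
</cli>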
==== FC adapter ====

=== max_npivs ===

If you reach the limit of NPIV connections on a physical VIOS port, you can increase the maximum.

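Before increasing it, you can check how many client vFC adapters are already mapped to each physical port (a quick check from the padmin shell):
<cli prompt='$'>
$ lsmap -all -npiv
$ lsnports
</cli>
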
Change max_npivs:
<cli prompt='#'>
# chdev -l fcs0 -a max_npivs=128 -P
fcs0 changed
# exit
</cli>
<cli prompt='$'>
$ shutdown -restart
</cli>
+ | |||
+ | After reboot: | ||
+ | <cli prompt='$'> | ||
+ | $ lsnports | ||
+ | name physloc fabric tports aports swwpns awwpns | ||
+ | fcs0 U2C4E.001.DBJ8765-P2-C5-T1 1 128 127 2048 2045 | ||
+ | fcs1 U2C4E.001.DBJ8765-P2-C5-T2 0 64 64 2048 2048 | ||
+ | </cli> | ||

//Per adapter type://
  * 16Gbps or less FC adapters: value=64
  * 32Gbps FC adapters: value=255

=== max_xfer_size ===

FIXME Set the xfer size on the VIOS first; it has to be higher than or equal to the LPAR vFC attribute, otherwise the LPAR will be unable to boot.

With the default value of max_xfer_size=0x100000, the DMA memory area used by the adapter is 16 MB; for the other allowable values of the max_xfer_size attribute, the memory area is 128 MB.
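
A minimal sketch of the sequence (fcs0 and the 0x200000 value are examples; -P stages the change until the next reboot):
<cli prompt='#'>
# chdev -l fcs0 -a max_xfer_size=0x200000 -P    # on the VIOS first (from oem_setup_env)
# chdev -l fcs0 -a max_xfer_size=0x200000 -P    # then on the client LPAR vFC adapter
# lsattr -El fcs0 -a max_xfer_size
</cli>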

=== num_cmd_elems ===

Check the values, both defined and running.
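
A minimal check, assuming fcs0: lsattr shows the value defined in the ODM, and the fcstat counters indicate whether the adapter is running short of command elements or DMA resources at run time (a steadily increasing "No Command Resource Count" suggests num_cmd_elems is too low):
<cli prompt='#'>
# lsattr -El fcs0 -a num_cmd_elems
# lsattr -Rl fcs0 -a num_cmd_elems
# fcstat fcs0 | grep -i "resource count"
</cli>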
=== The lg_term_dma attribute ===

For high-throughput sequential I/O environments, use the starting values lg_term_dma=0x400000 or 0x800000 (depending on the adapter type) and max_xfer_size=0x200000.

The AIX settings that can directly affect throughput performance with a large I/O block size are the lg_term_dma and max_xfer_size parameters of the fcs device.
The lg_term_dma AIX Fibre Channel adapter attribute controls the direct memory access (DMA) memory area that the adapter driver can use.

The max_xfer_size attribute also controls a DMA memory area used to hold data for transfer by the adapter. With the default value of max_xfer_size=0x100000, the area is 16 MB, and for the other allowable values of the max_xfer_size attribute, the memory area is 128 MB.
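
A sketch applying the starting values mentioned above (fcs0 and 0x800000 are examples; -P stages the change for the next reboot):
<cli prompt='#'>
# chdev -l fcs0 -a lg_term_dma=0x800000 -a max_xfer_size=0x200000 -P
# lsattr -El fcs0 -a lg_term_dma -a max_xfer_size
</cli>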

===== LPAR client =====

==== queue_depth ====

vSCSI adapters have a fixed queue depth of **512** command elements per adapter.

Depending on the disk queue_depth, you can calculate the maximum number of LUNs per adapter:
Maximum disks per adapter = ( 512 - 2 ) / ( 3 + queue_depth )

Example:
A queue depth of 32 allows 510 / 35 = 14 LUNs per adapter.
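
The same calculation in the shell, so it can be repeated for other queue depths (32 is just the example above):
<cli prompt='#'>
# queue_depth=32
# echo $(( (512 - 2) / (3 + queue_depth) ))
14
</cli>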
==== VIOS vs LPAR client ====

To find out which VIOS a virtual FC adapter is connected to:
<cli prompt='#'>
root@aix01 /proc/sys/adapter/fc/fcs0 # cat hostinfo
VFC client adapter name          : fcs0
Host partition name (VIOS)       : vio2
VFC host adapter name            : vfchost2
VFC host adapter location code   : U9009.22A.7891CA0-V3-C16
FC adapter name on VIOS          : fcs1
FC adapter location code on VIOS : U78D3.001.WZS0AJN-P1-C8-T1
</cli>
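
To list the mapping for all virtual FC adapters of the client at once (a small loop over the same /proc entries; it assumes a hostinfo file exists for each fcs listed there, which is the case for virtual FC client adapters):
<cli prompt='#'>
# for d in /proc/sys/adapter/fc/fcs*; do cat $d/hostinfo; echo; done
</cli>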

or use KDB

**VSCSI**
<cli prompt='#'>
# echo cvai | kdb -script
read vscsi_scsi_ptrs OK, ptr = 0xF10009D5B0129E98
(0)> cvai
Executing cvai command
NAME     STATE    CMDS_ACTIVE  ACTIVE_QUEUE  HOST
vscsi0   0x000007 0x000000001A 0x0           vios2->vhost3
</cli>

**VFCS**
<cli prompt='#'>
# echo vfcs | kdb -script
read vscsi_scsi_ptrs OK, ptr = 0xF10009D5B0129E98
(0)> vfcs
Executing vfcs command
NAME   ADDRESS            STATE   OPENED  CH  HOST_ADAP  PHYS    HOST
fcs0   0xF1000B01C0084000 0x0010  0x0001  8   Secure     Secure  Secure
fcs1   0xF1000B01C0088000 0x0010  0x0001  0   vfchost0   vios1
</cli>

To prevent this information from being collected (VIOS 3.1.3 and later):
<cli prompt='$'>
$ chdev -dev viosnpiv0 -attr secure_va_info=yes
viosnpiv0 changed

$ virtadapinfo -list
secure : yes
</cli>
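
To expose the host details again, the attribute can presumably be switched back the same way (a sketch using the same device as above):
<cli prompt='$'>
$ chdev -dev viosnpiv0 -attr secure_va_info=no
</cli>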

===== Test script =====

==== for disk ====

https://ftp.software.ibm.com/aix/tools/perftools/perfpmr/perf72/

https://www.ibm.com/support/pages/stress-test-your-aix-or-linux-server-nstress

<code>
cd /usr1

# Create ten 1 GB test files to run the I/O tests against
for f in 0 1 2 3 4 5 6 7 8 9
do
  echo "Creating file: f${f}"
  dd if=/dev/zero of=f${f} bs=1m count=1024 >/dev/null 2>&1
done

# Series of ndisk64 runs (random -R / sequential -S, read percentage -r,
# 1 MB blocks, 20 s each, -M parallel processes); only the throughput
# columns of the TOTALS line are printed
Run(){
ndisk64 -f f1 -C -r 100 -R -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -R -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -S -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -R -r 0 -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -S -r 0 -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 4 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -S -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -r 0 -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -r 0 -b 1m -t 20 -M 10 | grep TOTALS | awk '{print $2,$3,$5}'
}
Run
</code>
===== LPM between P7 and P8 =====