====== AIX tuning performance ======
| + | |||
| + | ===== Network tuning ===== | ||
| + | |||
| + | https://www.ibm.com/support/pages/node/7038687 | ||
| + | |||
| + | https://www.ibm.com/support/pages/node/6410510 | ||
| + | |||
| + | For LPARs | ||
| + | <cli> | ||
| + | no -p -o tcp_sendspace=1048576 | ||
| + | no -p -o tcp_recvspace=1048576 | ||
| + | no -p -o sb_max=2097152 | ||
| + | no -p -o rfc1323=1 | ||
| + | no -p -o tcp_nodelayack=0 | ||
| + | no -p -o sack=1 | ||
| + | chdev -l enX -a mtu_bypass=on | ||
| + | no -p -o tcp_init_window=10 | ||
| + | no -p -o rfc2414=1 | ||
| + | no -p -o tcp_cubic=1 | ||
| + | </cli> | ||
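
A quick way to confirm the values took effect (a sketch; en0 stands in for whichever interface you enabled mtu_bypass on):
<cli prompt='#'>
# no -o tcp_sendspace
# no -o tcp_recvspace
# no -o tcp_init_window
# lsattr -El en0 -a mtu_bypass
</cli>
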
===== VIOS tuning =====
http://gibsonnet.net/blog/dwarchive/New%20FC%20adapter%20port%20queue%20depth%20tuning%20(Chris%27s%20AIX%20Blog).html

INFO: for each parameter that can be changed, you can check the allowed values with lsattr -R:
  lsattr -R -l fcs0 -a max_npivs

==== disk ====
| + | |||
| + | === queue_depth === | ||
| + | |||
| + | For the logical drive (which is known as the hdisk in AIX), the setting is the attribute | ||
| + | queue_depth, as shown in the following example: | ||
| + | # chdev -l hdiskX -a queue_depth=Y -P | ||
| + | In this example, X is the hdisk number, and Y is the value to which you are setting X for | ||
| + | queue_depth. | ||
| + | |||
| + | For a high-volume transaction workload of small random transfers, try a queue_depth value of | ||
| + | 25 or more. For large sequential workloads, performance is better with small queue depths, such as a value of 4 | ||
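
Before changing it, you can check the current value and the allowed range (a sketch; hdisk0 is just an example disk, and -P defers the change until the next reboot):
<cli prompt='#'>
# lsattr -El hdisk0 -a queue_depth
# lsattr -R -l hdisk0 -a queue_depth
# chdev -l hdisk0 -a queue_depth=32 -P
</cli>
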
==== FC adapter ====
Change max_npivs:
<cli prompt='#'>
# chdev -l fcs0 -a max_npivs=128 -P
fcs0 changed
# exit
</cli>
//adapters://
  * 16Gbps or less FC adapters: value=64
  * 32Gbps FC adapters: value=255
| + | |||
| + | === max_xfer_size === | ||
| + | |||
| + | FIXME set first the xfer size on VIOS, which have to be higher or equal than LPAR vFC attribute, else the LPAR would be unable to boot | ||
| + | |||
| + | With the default value of max_xfer_size=0x100000, the area is 16 MB, and for other allowable values of the max_xfer_size attribute, the memory area is 128 MB. | ||
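
For example, to check the allowed values and raise the transfer size (a sketch; fcs0 and 0x200000 are illustrative, and per the FIXME above the VIOS adapter must be changed before the client):
<cli prompt='#'>
# lsattr -R -l fcs0 -a max_xfer_size
# chdev -l fcs0 -a max_xfer_size=0x200000 -P
fcs0 changed
</cli>
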
=== num_cmd_elements ===
=== The lg_term_dma attribute ===
| + | |||
| + | For high throughput sequential I/O environments, use the start values lg_term_dma=0x400000 or 0x800000 (depending on the adapter type) and max_xfr_size=0x200000 | ||
| + | |||
| + | The AIX settings that can directly affect throughput performance with large I/O block size are | ||
| + | the lg_term_dma and max_xfer_size parameters for the fcs device | ||
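
A sketch of applying those starting values together (fcs0 is illustrative; -P defers the change until the adapter is reconfigured or the system reboots):
<cli prompt='#'>
# chdev -l fcs0 -a lg_term_dma=0x800000 -a max_xfer_size=0x200000 -P
fcs0 changed
</cli>
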
The lg_term_dma AIX Fibre Channel adapter attribute controls the direct memory access
===== LPAR client =====
| + | |||
| + | ==== queue_depth ==== | ||
| + | |||
| + | vSCSI adapters have a fixed queue depth of **512** command elements per adapter. | ||
| + | |||
| + | Depending on the disk queue_depth, you can calculate the maximum LUN per adapter | ||
| + | Maximum disks per adapter = ( 512 – 2 ) / ( 3 + queue depth ) | ||
| + | |||
| + | Ex: | ||
| + | A queue depth of 32 allows 510 / 35 = 14 LUNs per adapter. | ||
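
As a quick reference, the same formula evaluated for a few common queue depths (plain ksh integer arithmetic, rounding down):
<code>
# max LUNs per vSCSI adapter = (512 - 2) / (3 + queue_depth)
for q in 8 16 32 64
do
  echo "queue_depth=$q -> $(( (512 - 2) / (3 + q) )) LUNs"
done
</code>
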
==== VIOS vs LPAR client ====
To find on which VIOS a virtual FC adapter is connected:
secure : yes
</cli>
| + | |||
| + | ===== Test script ===== | ||
| + | |||
| + | ==== for disk ==== | ||
| + | |||
| + | https://ftp.software.ibm.com/aix/tools/perftools/perfpmr/perf72/ | ||
| + | |||
| + | https://www.ibm.com/support/pages/stress-test-your-aix-or-linux-server-nstress | ||
| + | |||
| + | <code> | ||
| + | cd /usr1 | ||
| + | for f in 0 1 2 3 4 5 6 7 8 9 | ||
| + | do | ||
| + | echo "Creating file: f${f}" | ||
| + | dd if=/dev/zero of=f${f} bs=1m count=1024 >/dev/null 2>&1 | ||
| + | done | ||
| + | |||
| + | Run(){ | ||
| + | ndisk64 -f f1 -C -r 100 -R -b 1m -t 20 -M 4 |grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f1 -C -R -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f1 -C -S -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f1 -C -R -r 0 -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f1 -C -S -r 0 -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -S -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -r 0 -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -r 0 -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}' | ||
| + | } | ||
| + | Run | ||
| + | </code> | ||
===== LPM between P7 and P8 =====