https://www.ibm.com/support/pages/node/7038687
https://www.ibm.com/support/pages/node/6410510
For LPARs
no -p -o tcp_sendspace=1048576
no -p -o tcp_recvspace=1048576
no -p -o sb_max=2097152
no -p -o rfc1323=1
no -p -o tcp_nodelayack=0
no -p -o sack=1
chdev -l enX -a mtu_bypass=on
no -p -o tcp_init_window=10
no -p -o rfc2414=1
no -p -o tcp_cubic=1
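To verify the resulting values afterwards, a minimal sketch (enX is a placeholder for the real interface, as above):

# Sketch: display the current value of each tunable changed above
for t in tcp_sendspace tcp_recvspace sb_max rfc1323 tcp_nodelayack sack tcp_init_window rfc2414 tcp_cubic
do
    no -o $t
done
# largesend / mtu_bypass on the interface (replace enX with the real device)
lsattr -El enX -a mtu_bypass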
http://capacityreports.com/VIO/VIOTuning/vio_tuning_server.html
http://gibsonnet.net/blog/dwarchive/New%20FC%20adapter%20port%20queue%20depth%20tuning%20(Chris%27s%20AIX%20Blog).html
INFO
For each parameter that can be changed, you can check the allowed values, for example: lsattr -Rl fcs0 -a max_npivs
For the logical drive (which is known as the hdisk in AIX), the setting is the attribute queue_depth, as shown in the following example:
# chdev -l hdiskX -a queue_depth=Y -P
In this example, X is the hdisk number and Y is the queue_depth value you want to set.
For a high-volume transaction workload of small random transfers, try a queue_depth value of 25 or more. For large sequential workloads, performance is better with a small queue depth, such as a value of 4.
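A sketch to review and stage the change on every hdisk at once (the value 25 is just the random-workload starting point above; check the allowed range with lsattr -Rl hdiskX -a queue_depth and your storage vendor's guidance first):

# Sketch: show the current queue_depth, then stage a new value in the ODM (-P = applied at next reboot)
for d in $(lsdev -Cc disk -F name)
do
    echo "== $d"
    lsattr -El $d -a queue_depth
    chdev -l $d -a queue_depth=25 -P
done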
If you reach the limit of NPIV connections on a physical VIOS port, you can increase the maximum.
Change max_npivs
# chdev -l fcs0 -a max_npivs=128 -P
fcs0 changed
# exit
$ shutdown -restart
After reboot:
$ lsnports
name     physloc                        fabric  tports  aports  swwpns  awwpns
fcs0     U2C4E.001.DBJ8765-P2-C5-T1          1     128     127    2048    2045
fcs1     U2C4E.001.DBJ8765-P2-C5-T2          0      64      64    2048    2048
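A small sketch to spot ports that are running out of NPIV mappings (aports, the 5th lsnports column, is the number of mappings still available; the threshold of 4 is arbitrary and awk is assumed to be available in the padmin shell):

$ lsnports | awk 'NR > 1 && $5 <= 4 { print $1": only "$5" NPIV mapping(s) left of "$4 }'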
FC adapters:
Set the max_xfer_size on the VIOS first; it must be higher than or equal to the LPAR vFC attribute, otherwise the LPAR will be unable to boot.
Check the values, both defined (ODM) and running:
# chdev -l fcs0 -a num_cmd_elems=1024 -P
fcs0 changed
# lsattr -El fcs0 -a num_cmd_elems
num_cmd_elems 1024 Maximum Number of COMMAND Elements True   << ODM
# lsattr -Pl fcs0 -a num_cmd_elems
num_cmd_elems 400 Maximum Number of COMMAND Elements True    << Running
or, on AIX 7.3:
# grep num_cmd /proc/sys/adapter/fc/fcs0/tunables
num_cmd_elems ( Requested / Granted ): 1024 / 1024
or
# genkex | grep fscsi
f10009d5a01e5000    f8000 /usr/lib/drivers/emfscsidd
# oslevel -s
7200-05-04-2220
# echo emfscsi -d fscsi0 | kdb -script | grep num_cmd
int num_cmd_elems = 0x17C
# printf "%d\n" 0x17C
380
# lsattr -El fcs0 -a num_cmd_elems
num_cmd_elems 400 Maximum Number of COMMAND Elements True
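A sketch to compare the defined (ODM) and last-boot values of both tunables on every FC adapter in one pass:

# Sketch: ODM vs running values of num_cmd_elems and max_xfer_size for all fcs adapters
for a in $(lsdev -Cc adapter -F name | grep '^fcs')
do
    echo "== $a (ODM)"
    lsattr -El $a -a num_cmd_elems -a max_xfer_size
    echo "== $a (running)"
    lsattr -Pl $a -a num_cmd_elems -a max_xfer_size
done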
For high-throughput sequential I/O environments, use the starting values lg_term_dma=0x400000 or 0x800000 (depending on the adapter type) and max_xfer_size=0x200000.
The AIX settings that can directly affect throughput performance with large I/O block sizes are the lg_term_dma and max_xfer_size parameters of the fcs device.
The lg_term_dma AIX Fibre Channel adapter attribute controls the direct memory access (DMA) memory resource that an adapter driver can use. The default value of lg_term_dma is 0x200000, and the maximum value is 0x8000000. One change is to increase the value of lg_term_dma to 0x400000. If you still experience poor I/O performance after changing the value to 0x400000, you can increase the value of this attribute again. If you have a dual-port Fibre Channel adapter, the maximum value of the lg_term_dma attribute is divided between the two adapter ports. Therefore, never increase the value of the lg_term_dma attribute to the maximum value for a dual-port Fibre Channel adapter because this value causes the configuration of the second adapter port to fail.
The max_xfer_size AIX Fibre Channel adapter attribute controls the maximum transfer size of the Fibre Channel adapter. Its default value is 0x100000, and the maximum value is 0x1000000. You can increase this attribute to improve performance. Setting the max_xfer_size attribute affects the size of the memory area that is used for data transfer by the adapter. With the default value of max_xfer_size=0x100000, the area is 16 MB, and for other allowable values of the max_xfer_size attribute, the memory area is 128 MB.
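A sketch of staging both attributes on a VIOS adapter (the values are the starting points above, not universal recommendations; remember the dual-port lg_term_dma caveat, and that the VIOS max_xfer_size must be >= the client vFC value or the LPAR will not boot):

# Sketch: check the supported ranges, then stage the change in the ODM (takes effect at the next reboot)
lsattr -Rl fcs0 -a lg_term_dma
lsattr -Rl fcs0 -a max_xfer_size
chdev -l fcs0 -a lg_term_dma=0x400000 -a max_xfer_size=0x200000 -P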
vSCSI adapters have a fixed queue depth of 512 command elements per adapter.
Depending on the disk queue_depth, you can calculate the maximum number of LUNs per adapter:
Maximum disks per adapter = ( 512 – 2 ) / ( 3 + queue depth )
Ex:
A queue depth of 32 allows 510 / 35 = 14 LUNs per adapter.
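The same calculation in ksh, if you want to script it (integer division, so the result is already rounded down):

# Sketch: maximum LUNs on one vSCSI adapter for a given hdisk queue_depth
qd=32
echo "max LUNs per vSCSI adapter: $(( (512 - 2) / (3 + qd) ))"   # -> 14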
To find out which VIOS a virtual FC adapter is connected to:
root@aix01 /proc/sys/adapter/fc/fcs0 # cat hostinfo
VFC client adapter name          : fcs0
Host partition name (VIOS)       : vio2
VFC host adapter name            : vfchost2
VFC host adapter location code   : U9009.22A.7891CA0-V3-C16
FC adapter name on VIOS          : fcs1
FC adapter location code on VIOS : U78D3.001.WZS0AJN-P1-C8-T1
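The same information for every virtual FC client adapter at once, a sketch (the /proc/sys/adapter/fc tree only exists on recent AIX 7.2/7.3 levels, as shown above):

# Sketch: dump the VIOS / vfchost mapping for all vFC client adapters
for d in /proc/sys/adapter/fc/fcs*
do
    [ -r $d/hostinfo ] || continue    # skip adapters without a hostinfo entry (e.g. physical FC ports)
    echo "== ${d##*/}"
    cat $d/hostinfo
done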
or use KDB
VSCSI
# echo cvai | kdb -script
read vscsi_scsi_ptrs OK, ptr = 0xF10009D5B0129E98
(0)> cvai
Executing cvai command
NAME     STATE     CMDS_ACTIVE   ACTIVE_QUEUE  HOST
vscsi0   0x000007  0x000000001A  0x0           vios2->vhost3
VFCS
# echo vfcs | kdb -script
read vscsi_scsi_ptrs OK, ptr = 0xF10009D5B0129E98
(0)> vfcs
Executing vfcs command
NAME   ADDRESS             STATE   OPENED  CH  HOST_ADAP  PHYS    HOST
fcs0   0xF1000B01C0084000  0x0010  0x0001  8   Secure     Secure  Secure
fcs1   0xF1000B01C0088000  0x0010  0x0001  0   vfchost0           vios1
To prevent this information from being collected, on VIOS 3.1.3 or later:
$ chdev -dev viosnpiv0 -attr secure_va_info=yes
viosnpiv0 changed
$ virtadapinfo -list
secure : yes
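To check or revert the setting later, a sketch (it assumes the attribute accepts a value of no; verify on your VIOS level first):

$ lsdev -dev viosnpiv0 -attr secure_va_info
$ chdev -dev viosnpiv0 -attr secure_va_info=no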
https://ftp.software.ibm.com/aix/tools/perftools/perfpmr/perf72/
https://www.ibm.com/support/pages/stress-test-your-aix-or-linux-server-nstress
cd /usr1
for f in 0 1 2 3 4 5 6 7 8 9
do
    echo "Creating file: f${f}"
    dd if=/dev/zero of=f${f} bs=1m count=1024 >/dev/null 2>&1
done

Run(){
ndisk64 -f f1 -C -r 100 -R -b 1m -t 20 -M 4 |grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -R -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -S -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -R -r 0 -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f1 -C -S -r 0 -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 4|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -R -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -r 100 -S -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -R -r 0 -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
ndisk64 -f f0,f1,f2,f3,f4,f5,f6,f7,f8,f9 -C -S -r 0 -b 1m -t 20 -M 10|grep TOTALS|awk '{print $2,$3,$5}'
}
Run
Only valid in a P9 environment !! Remember to deactivate it if the LPAR is moved to a P8 server !!
The kernel scheduler tunable “vpm_fold_threshold” can be used to reduce physical CPU consumption by influencing the threshold at which AIX unfolds or folds virtual CPUs (VPs). The default value is 49%. This helps the P7 and P8 CPU architectures favor VP unfolding before using all threads in SMT8 mode. As the P9 architecture is more optimized for multi-threaded execution, it can handle larger VP fold thresholds with less single-thread performance impact.
Due to its nature, the benefit of raising this tunable is mostly visible on LPARs that have a large number of VPs, i.e. VP > 3.
In development and less real-time-sensitive production environments we therefore use a setting of 80% ⇒ vpm_fold_threshold = 80.
In real-time-sensitive production, use a setting of 65-70.
The schedo command is used to display or set the kernel scheduler tunables.
Command syntax:
[root@aix01]/root# schedo
Usage: schedo -h [tunable] | {[-F] -L [tunable]} | {[-F] -x [tunable]}
       schedo [-p|-r] (-a [-F] | {-o tunable})
       schedo [-p|-r] [-y] (-D | ({-d tunable} {-o tunable=value}))
View:
[root@aix01]/root# schedo -o vpm_fold_threshold
vpm_fold_threshold = 80
Set: (the new value is active immediately)
[root@aix01]/root# schedo -o vpm_fold_threshold=80
Setting vpm_fold_threshold to 80
Warning: a restricted tunable has been modified
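To make the change survive a reboot as well, a sketch based on the usage output above (-p updates the nextboot tunables file, -y suppresses the confirmation prompt that restricted tunables otherwise trigger):

[root@aix01]/root# schedo -p -y -o vpm_fold_threshold=80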