# chdev -dev <ENT> -attr jumbo_frames=no flow_ctrl=yes large_receive=yes large_send=yes rxdesc_que_sz=4096
jumbo_frames - depends on the environment. If you can increase the MTU and set jumbo_frames, do it! It should increase network throughput, decrease CPU usage and increase network buffer needs.
flow_ctrl - flow control allows for better management of network throughput between the switch and the Power system. Thanks to flow control you are unlikely to exhaust the device buffers. This setting should also be applied on the network switch!
large_receive and large_send - these attributes allow the TCP network stack to process a large network message in one call instead of splitting it into smaller ones according to the MTU.
rxdesc_que_sz - receive descriptor queue size. If set too low it can lead to DMA Overruns and No Resource Errors (visible in entstat) because the device buffers cannot keep up with a high network workload.
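To verify the current adapter settings and check for those errors on the VIOS, something like the following can be used (ent0 is just a placeholder for the physical adapter name, adjust it to your environment):
# lsdev -dev ent0 -attr
# entstat -all ent0 | grep -Ei "dma overrun|no resource"
Non-zero and growing DMA Overrun or No Resource Errors counters are a good hint that rxdesc_que_sz or the adapter buffers are too small.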
# chdev -l <VETH> -a max_buf_huge=128 max_buf_large=128 max_buf_medium=1024 max_buf_small=4096 max_buf_tiny=4096
# chdev -l <VETH> -a min_buf_huge=64 min_buf_large=64 min_buf_medium=256 min_buf_small=2048 min_buf_tiny=2048
Increasing the buffers on Virtual Ethernet adapters allows them to handle more packets at the same time. If left at the default values they tend to be exhausted quickly, causing Hypervisor Receive Failures and Hypervisor Send Failures. It is important to remember that if largesend and large_receive are set, the system will use different types of buffers to handle the packets, so these tunables depend heavily on the environment and the network workload.
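A minimal check of the virtual adapter buffers and the hypervisor failure counters, assuming ent4 is the virtual (trunk) adapter and the commands are run from the root shell on the VIOS (oem_setup_env):
# lsattr -El ent4 | grep buf
# entstat -d ent4 | grep -i hypervisor
If chdev complains that the device is busy when you change the buffer attributes, adding -P updates only the ODM, so the new values take effect after the next reboot of the VIOS.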
# chdev -dev <SEA> -attr jumbo_frames=no large_receive=yes largesend=1
Exactly the same as in the case of the physical adapter. Enabling largesend and large_receive on the Shared Ethernet Adapter allows packets bigger than the standard MTU size to be handled in one call. In general the SEA options should correspond to those of the underlying physical adapter.
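To confirm that the SEA attributes match the underlying physical adapter, they can be listed as padmin, for example (ent5 standing in for the SEA device name):
# lsdev -dev ent5 -attr | grep -Ei "largesend|large_receive|jumbo"
Running entstat -all against the SEA also reports the statistics of all underlying real and virtual adapters in one output, which is handy when comparing the settings end to end.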
# chdev -l <VETH> -a max_buf_small=4096 max_buf_medium=1024 max_buf_large=128 max_buf_huge=128
These settings have exactly the same meaning as in the case of the VIOS tuning.
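Whether the buffer pools on the client LPAR ever get exhausted can be checked in the virtual adapter statistics (ent0 here is only an example):
# entstat -d ent0
In the "Receive Buffers" section compare the "Max Allocated" history with "Max Buffers" for each buffer size; values that regularly reach the maximum suggest the max_buf_* limits above are still too low.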
# chdev -l <VETH_IF> -a mtu_bypass=yes
The mtu_bypass tunable is the interface-level representation of the large_send and large_receive attributes of the Ethernet adapters.
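A quick way to check the attribute and whether large send is actually active on the interface (en0 being an example interface name):
# lsattr -El en0 -a mtu_bypass
# ifconfig en0
With mtu_bypass enabled, the interface flags reported by ifconfig should include LARGESEND.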
# no -p -o tcp_sendspace=262144
# no -p -o tcp_recvspace=262144
# no -p -o udp_sendspace=65536
# no -p -o udp_recvspace=655360
# no -p -o tcp_nodelayack=0
# no -p -o rfc1323=1
The above tunables are well described in the AIX documentation (for example: no -h <tunable>). They depend heavily on the environment being tuned. tcp_nodelayack is set to 0 to reduce CPU usage by delaying ACKs. NOTE that leaving delayed ACKs enabled like this might also have a negative performance impact, depending on the architecture, if immediate ACKs are required.
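The current, default and next-boot value of each tunable can be reviewed before and after the change, for example:
# no -L tcp_sendspace
# no -a | grep -E "space|rfc1323|nodelayack"
Keep in mind that interface-specific network options (ISNO attributes such as tcp_sendspace or rfc1323 set directly on the en interfaces) override these global values when use_isno is enabled.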