netperf Client
The following variables are used in the examples in this section:

$iface           Interface name of the E810 adapter port under test
$bp              Value for global busy_poll
$br              Value for global busy_read
$file_name       Name for the traffic class configuration created by adqsetup
$num_queues_tc1  Number of queues in the application traffic class (TC1)
$portrange       TCP port(s) used by the application traffic
$addr            Local IP address of the interface under test
$pollers         Number of independent pollers (if used)
$ipaddrserver    IP address of the netperf server
$app_threads     Number of simultaneous netperf client instances
$testtime        Duration of each netperf test, in seconds
$app_port        Starting TCP port number for the netperf instances
Using ADQ Setup Script
The ADQ Setup script allows you to quickly and easily configure the required ADQ parameters, such as traffic classes, priority, filters, and ethtool settings.
- To configure ADQ, run the following command:
adqsetup --dev=$iface --priority=skbedit --busypoll=$bp --busyread=$br \
    create $file_name mode shared queues $num_queues_tc1 ports $portrange addrs $addr

See the notes below for customizing the ADQ configuration; a filled-in example follows the notes. Once ADQ is configured by adqsetup, start the netperf client.
Notes:
- The example command above creates both ingress (Rx) and egress (Tx) filters, so Linux cgroups do not need to be created and can be skipped (a cgroup is only needed if --priority=skbedit was NOT specified in the adqsetup command).
- The ADQ Setup script handles symmetric queues and affinity.
- Set the transmit and receive interrupt coalescing values to --rxadapt=off --rxusecs=0 --txadapt=off --txusecs=500 for improved ADQ performance.
- To configure independent pollers, add the --pollers=$pollers parameter to the adqsetup command (and optionally --pollers_timeout), and remove the global --busypoll=$bp --busyread=$br flags.
- Use the cpu parameter in the command to bind the independent pollers to specific CPU cores. Refer to ADQ Setup Using ADQ Setup Script for more information on pinning pollers to specific CPU cores.
- The --debug parameter is optional, but it is useful for obtaining a complete stack trace.
- For more information on how to use the script, refer to ADQ Setup Using ADQ Setup Script.
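For illustration, a fully expanded command with the coalescing settings from the notes might look like the following (the interface name, traffic class name, queue count, ports, and address are placeholder values, not from this guide):

adqsetup --dev=eth2 --priority=skbedit --busypoll=50000 --busyread=50000 \
    --rxadapt=off --rxusecs=0 --txadapt=off --txusecs=500 \
    create netperf mode shared queues 4 ports 12000-12007 addrs 192.168.1.10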
- Run traffic.
Start simultaneous netperf client instances.
Pinning the application to a CPU core on the local NUMA node provides more consistent results for ADQ. The following example uses the netperf -T option to select which core to use for each instance of the application.
numa=0
cpus_allowed=(`lscpu | grep node${numa} | cut -d: -f2 | sed -e 's/ //g' | sed -e 's/,/ /g'`)
start_core=${cpus_allowed[$app_threads]}
for ((i = 0; i < app_threads; i++)); do
    core=$(( $i * 2 + $start_core ))
    netperf -j -H $ipaddrserver -t TCP_RR -T $core -l ${testtime} -- -k \
THROUGHPUT,MIN_LATENCY,MAX_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,STDDEV_LATENCY,MEAN_LATENCY \
        -P $((app_port + i)),$((app_port + i)) &
done

Note: Use netperf -h for parameter details. Reference: https://hewlettpackard.github.io/netperf/doc/netperf.html#TCP_005fRR
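To confirm that each netperf instance is pinned where intended (an optional check, not part of this guide's procedure), the processor each netperf thread is running on can be listed:

ps -eLo pid,psr,comm | grep netperf

The psr column shows the CPU core for each thread; it should match the cores selected by the loop above.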
- Verify ADQ traffic is on the correct queues.
While ADQ application traffic is running, watch the ethtool statistics to confirm that only the ADQ queues are being used (that is, show significant traffic) with busy poll (pkt_busy_poll) for ADQ traffic. If the non-busy poll counters (pkt_not_busy_poll) show significant counts, and/or if traffic is not confined to the ADQ queues, recheck the configuration steps carefully.
Example:
watch -d -n 0.5 "ethtool -S $iface | grep busy | column"

See Verify ADQ Application Traffic and Independent Pollers (If Applicable) for example watch output.
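As a one-shot alternative to the watch output, the per-queue counters can be totaled directly. The following is a minimal sketch assuming the pkt_busy_poll and pkt_not_busy_poll counter names mentioned above; run it before and after the test and compare the deltas:

ethtool -S $iface | awk -F: '
    /pkt_not_busy_poll/ { not_busy += $2; next }
    /pkt_busy_poll/     { busy += $2 }
    END { printf "pkt_busy_poll total: %d\npkt_not_busy_poll total: %d\n", busy, not_busy }'

For a correctly configured ADQ setup, nearly all of the growth during the test should appear in the pkt_busy_poll total.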