Intel® Ethernet Controller E810 Application Device Queues (ADQ)

Configuration Guide

ID 609008
Date 04/03/2023
Version 2.8

Installation and Configuration - Both Systems

The following variables are used in the examples in this section:

$iface The interface in use (PF).
$bridge_iface Bridge interface created on the PF (ex: br0).
$bridge_ipaddress IP address of the bridge interface.
$num_queues_tc0 The number of queues for the default traffic class (usually 2 or more; must be a power of 2 or it is not accepted).
$num_queues_tc1 The number of queues for application traffic class 1 (corresponds to the user-selected number of application threads for application 1). The same power of 2 restriction applies.
$dst_mac MAC address of the VM's virtio interface.
$ipaddress VM IP address (ex: the virtio-net interface IP configured in the VM).
$queue_id The specific Rx queue number in hexadecimal (ex: 0x3, 0xa, 0xb, etc.).
$Tx_queue_id The specific Tx queue number (ex: 3, 4, 5, etc.).
$qps_per_poller The number of queue pairs per independent poller for a given TC (maximum value is the number of queues in the TC).
$iface_bdf The network interface in BDF notation (Bus:Device.Function) used by devlink (PF).
$poller_timeout The timeout value for the independent pollers for a given TC (nonzero integer value in jiffies; default value 10000).
$pathtotc The path to the tc binary used in the examples.
$pathtoicepackage The path to the ice driver package.
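
For illustration only, a hypothetical set of values might look like the following (the interface name, addresses, queue counts, and paths are placeholders and must be adapted to the actual system):

  iface=ens801f0
  bridge_iface=br0
  bridge_ipaddress=192.168.100.1/24
  num_queues_tc0=2
  num_queues_tc1=8
  dst_mac=52:54:00:12:34:56
  ipaddress=192.168.100.101
  queue_id=0x3
  Tx_queue_id=3
  qps_per_poller=4
  poller_timeout=10000
  pathtotc=/usr/sbin
  pathtoicepackage=/opt/ice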

  1. Enable Virtualization in the BIOS. The following settings must be enabled in the BIOS:

    [BIOS::Advanced::Processor Configuration] Intel(R) Virtualization Technology=Enabled
    [BIOS::Advanced::Integrated IO Configuration] Intel(R) VT for Directed I/O=Enabled
    [BIOS::Advanced::Processor Configuration] Intel(R) Hyper-Threading Tech=Enabled
  2. [Optional] Enable the following BIOS settings on both servers for better performance with VirtIO.

    Processor Settings > Logical Processor > Enabled

    System Profile Settings > System Profile > Performance

  3. Verify the CPU Virtualization extensions are available.

    grep -E 'svm|vmx' /proc/cpuinfo

    Where:

    vmx: Entry indicating an Intel processor with the Intel VT extensions.

    svm: Entry indicating an AMD processor with the AMD-V extensions.
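
    Optionally, lscpu can also be used to confirm the virtualization capability; it typically reports VT-x on Intel hosts and AMD-V on AMD hosts:

      lscpu | grep -i virtualization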

  4. Verify the kvm and kvm_intel modules are loaded in the kernel, and if not, load them manually.

    lsmod | grep -i kvm
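
    If the modules are not listed, they can typically be loaded with modprobe (a minimal sketch, assuming an Intel host; AMD hosts use kvm_amd instead):

      modprobe kvm_intel
      lsmod | grep -i kvm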
  5. Install prerequisites.

    yum install -y libvirt
    yum install -y bridge-utils
  6. Enable and start the libvirtd service.

    systemctl enable libvirtd; systemctl start libvirtd

    Note: These settings are not persistent across system reboots.
  7. Configure network settings.

    Note: Remove the virtual bridge network interface virbr0, which is created by default.

      ip link set virbr0 down
      ip link delete dev virbr0

    Note: You can use either the Linux bridge or the OVS bridge.
    1. Network Settings Using Linux Bridge:
      1. Create the Linux bridge.

        Note: There are many methods to create a bridge network (using virsh, nmcli, editing network scripts, etc.). Here, IP commands are used to configure the bridge.
        Note: If the PF interface has an IP address, remove it.

        ip link add name $bridge_iface type bridge
        ip link set $iface master $bridge_iface
        ip address add dev $bridge_iface $bridge_ipaddress
        ip link set up $iface
        ip link set up $bridge_iface
      2. Check if the bridge interface is active.

        virsh net-list --all

        If $bridge_iface is not listed, the following commands create the bridge and make it active.

        a. cd /tmp/
        b. vi bridge.xml

           <network>
             <name>$bridge_iface</name>
             <forward mode="bridge"/>
             <bridge name="$bridge_iface"/>
           </network>

        c. virsh net-define /tmp/bridge.xml
        d. virsh net-start $bridge_iface
        e. virsh net-autostart $bridge_iface
        f. virsh net-list --all
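
        To confirm the bridge is set up as intended (a quick optional check), the PF should be listed as a port of $bridge_iface, and the bridge interface should carry $bridge_ipaddress:

          ip link show master $bridge_iface
          ip addr show dev $bridge_iface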
    2. Network Settings Using Open vSwitch (OVS):

      1. Download and untar OVS source package.

        mkdir /opt/ovs
        wget -O /opt/ovs/ovs.tar.gz https://github.com/openvswitch/ovs/archive/v2.12.0.tar.gz
        cd /opt/ovs ; tar -xf ovs.tar.gz
        cd ovs-2.12.0
      2. Start bootstrapping OVS.

        ./boot.sh
      3. Configure the Open vSwitch default database directory using the options shown here.

        ./configure --prefix=/usr --localstatedir=/usr/local/var --sysconfdir=/etc

        Note: Open vSwitch installed from .rpm packages (e.g., through yum install or rpm -ivh) and .deb packages (e.g., through apt-get install or dpkg -i) uses these configure options.
      4. Install Open vSwitch.

        make
        make install
      5. Configure a database for ovsdb-server to use.

        mkdir -p /usr/local/var/run/openvswitch
        mkdir -p /usr/local/etc/openvswitch
        /opt/ovs/ovs-2.12.0/ovsdb/ovsdb-tool create /etc/openvswitch/conf.db \
            /usr/local/share/openvswitch/vswitch.ovsschema
      6. Configure ovsdb-server to use the database created above, to listen on a Unix domain socket, to connect to any managers specified in the database itself, and to use the SSL configuration in the database.

        ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
            --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
            --private-key=db:Open_vSwitch,SSL,private_key \
            --pidfile --detach
      7. Initialize the database using ovs-vsctl and start the Open vSwitch daemon.

        /usr/bin/ovs-vsctl --no-wait init
        /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:hw-offload=true
        /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:tc-policy=none
        ovs-vswitchd --pidfile --detach --mlockall --log-file=/tmp/ovs-vswitchd.log
      8. Set up the bridge network using ovs-vsctl.

        ovs-vsctl add-br $bridge_iface
        ovs-vsctl add-port $bridge_iface $iface
        ip addr add $bridge_ipaddress dev $bridge_iface
        ip link set $bridge_iface up
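
        To confirm the OVS bridge configuration (a quick optional check), $iface should appear as a port of $bridge_iface:

          ovs-vsctl show
          ovs-vsctl list-ports $bridge_iface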
  8. Configure VM using GUI or CLI.

    Note: VMs can be configured using either virt-manager (GUI) or virsh commands (CLI).

    VM-level System Tunings

    1. Enable virtual-guest tuned profile.

      This profile decreases virtual memory swappiness values and increases disk readahead values.

      tuned-adm profile virtual-guest
    2. Set busy_​read setting for virtual guests.

      A value of 100 µs is recommended as a starting point to stay in poll mode.

      sysctl -w net.core.busy_read=100
    3. Disable firewalls.

      service firewalld stop; systemctl mask firewalld
    4. Disable Security-Enhanced Linux (SELinux) (requires a reboot to take effect).

      Change SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config

    5. Stop the irqbalance service.

      systemctl stop irqbalance

      Note: When assigning the network to the VM, choose the network source as the bridge interface created earlier and virtio as the device model.
      Note: Check that the host bridge network and the VM's virtio-net interface are on the same subnet, and ping each other to confirm connectivity between the host and the VM.
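
    The tunings above can be spot-checked inside the VM (a minimal sketch; expected results are the virtual-guest profile, busy_read of 100, SELinux disabled after the reboot, and inactive firewalld/irqbalance):

      tuned-adm active
      sysctl net.core.busy_read
      getenforce
      systemctl is-active firewalld irqbalance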
  9. Perform general system setup at host level.

    Complete the ADQ installation and setup in ADQ System Under Test (SUT) Installation and in General System Tuning on both the server and client systems.

    Note: Many settings in General System Tuning and Adapter Preparation do not persist between reboots and might have to be reapplied.
    Note: Recheck the bridge settings when unloading/loading the ice driver.
    Note: Virtio-net-based VMs do not require VF HW configuration, and ADQ acceleration is possible with kthread-based independent polling, which is supported with kernel 5.12 or higher. The following global settings should be disabled for independent poller-based ADQ:

      sysctl -w net.core.busy_poll=0
      sysctl -w net.core.busy_read=0
      ethtool --set-priv-flags $iface channel-pkt-inspect-optimize off
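
    The resulting values can be confirmed with (a quick optional check):

      sysctl net.core.busy_poll net.core.busy_read
      ethtool --show-priv-flags $iface | grep channel-pkt-inspect-optimize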
  10. Create traffic classes on the PF interface (the interface participating in the bridge configuration).

    ${pathtotc}/tc qdisc add dev $iface root mqprio num_tc 2 map 0 1 queues $num_queues_tc0@0 $num_queues_tc1@$num_queues_tc0 hw 1 mode channel
    ${pathtotc}/tc qdisc add dev $iface clsact

    Note: Due to timing issues, applying TC filters immediately after the tc qdisc add command might result in the filters not being offloaded in hardware. An error is logged in dmesg if the filter fails to add properly. It is recommended to wait five seconds after tc qdisc add before adding TC filters.

    sleep 5
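
    As an illustration with hypothetical values (2 default queues and 8 application queues), the first command above expands to:

      ${pathtotc}/tc qdisc add dev $iface root mqprio num_tc 2 map 0 1 queues 2@0 8@2 hw 1 mode channel

    The resulting TC-to-queue mapping can be inspected with:

      ${pathtotc}/tc qdisc show dev $iface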
  11. Create TC filters (Both ingress and egress) on the PF interface.
    • Ingress Filters (one Filter Per VM):

      ${pathtotc}/tc filter add dev $iface protocol ip ingress prio 1 flower dst_mac $dst_mac dst_ip $ipaddress/32 skip_sw classid ffff:$queue_id

      Where:

      ffff: Qdisc ID (fixed value; it can be obtained using the command: tc qdisc show dev $iface clsact)

    • Egress Filters (one Filter per VM):

      ${pathtotc}/tc filter add dev $iface egress prio 1 protocol ip flower src_ip $ipaddress action skbedit priority 0x1 pipe action skbedit queue_mapping $Tx_queue_id

      Where:

      src_ip: VM IP address

      queue_mapping: Aligns the Tx queue with the Rx queue ID

      Note: Cgroup is not supported with virtio. Instead, the per-TC skbedit socket priority option should be used for egress traffic.
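
    The installed filters can be reviewed with the following (a quick optional check); offloaded ingress filters should report the in_hw flag:

      ${pathtotc}/tc filter show dev $iface ingress
      ${pathtotc}/tc filter show dev $iface egress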
  12. Enable the number of pollers, the poller timeout, and threaded NAPI.

    iface_bdf=$(ethtool -i ${iface} | grep bus-info | awk '{print $2}')
    devlink dev param set pci/$iface_bdf name tc1_qps_per_poller value $qps_per_poller cmode runtime
    devlink dev param set pci/$iface_bdf name tc1_poller_timeout value $poller_timeout cmode runtime
    echo 1 > /sys/class/net/$iface/threaded
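
    The poller parameters and the threaded NAPI setting can be read back with (a quick optional check):

      devlink dev param show pci/$iface_bdf name tc1_qps_per_poller
      devlink dev param show pci/$iface_bdf name tc1_poller_timeout
      cat /sys/class/net/$iface/threaded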
  13. Set irq_affinity.

    ${pathtoicepackage}/set_irq_affinity -X all $iface
    ${pathtoicepackage}/scripts/set_xps_rxqs $iface
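
    To spot-check that the affinity settings took effect (an optional check; interrupt vector names typically include the interface name, and <irq> below is a placeholder for one of the reported vector numbers):

      grep $iface /proc/interrupts
      cat /proc/irq/<irq>/smp_affinity_list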