Intel® Ethernet 800 Series Application Device Queues (ADQ)

Software Developer's Guide

ID: 626536
Date: 04/03/2023
Classification: Public

Selecting Applications for ADQ Acceleration

ADQ 1.0 (ice driver version 1.8.X and earlier) includes all functional support and enhancements for ADQ acceleration. It supports both multithreaded and single-threaded applications through a NAPI ID or cBPF socket option:

  • NAPI ID socket option for multithreaded applications such as Memcached or NGINX.
  • cBPF socket option for single-threaded applications such as Netperf or Redis.
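
The following is a minimal sketch of the NAPI ID approach, not taken from an ADQ-enabled application; the port number and the dispatch logic are placeholders. A multithreaded server reads SO_INCOMING_NAPI_ID on each accepted connection to learn which hardware queue the connection arrived on, and can then hand the socket to the worker thread that services that queue.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56   /* value from asm-generic/socket.h */
#endif

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(11211);          /* placeholder port (memcached-style server) */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    int conn_fd = accept(listen_fd, NULL, NULL);

    /* Learn which NAPI/queue this connection arrived on. */
    unsigned int napi_id = 0;
    socklen_t len = sizeof(napi_id);
    if (getsockopt(conn_fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) == 0)
        printf("connection arrived on NAPI ID %u\n", napi_id);

    /* ...hand conn_fd to the worker thread that busy polls this NAPI ID... */

    close(conn_fd);
    close(listen_fd);
    return 0;
}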

ADQ 2.0 (ice driver version 1.9.X and later) supports all the ADQ enhancements introduced in ADQ 1.0, as well as the ability to configure ADQ acceleration for a wider range of cloud-to-edge applications and environments, such as:

  • Containers using veth network interfaces
  • VMs using virtio-net
  • Network-Intensive Content Delivery Networks (CDN) workloads

For example, ADQ 1.0-accelerated applications such as Memcached, NGINX, and Redis will not work when deployed in containers or virtio-net based VMs, because the PF is not visible from inside the container or VM and the application therefore cannot initiate busy polling. ADQ 2.0 provides an alternative method for accelerating such applications via a softirq or a kernel NAPI thread.

Applications running in SGX enclaves incur an expensive enclave exit whenever they make system calls, and they require an additional core to handle those calls. AF_XDP applications that use an independent poller (ADQ 2.0) can avoid these system calls for Tx/Rx.

Because ADQ uses the full Linux kernel stack and standard network interfaces, it can potentially benefit a large set of commonly used applications. Every application architecture is unique, so the work required to adapt an application to take advantage of ADQ varies from application to application. When evaluating applications for ADQ enablement, here are some key points to consider:

  • ADQ is currently available only in Linux-based environments with the prerequisite kernel capabilities.
  • ADQ improves predictability as well as average latency, so applications that are sensitive to tail latency (such as scale-out data center applications) can significantly benefit from using ADQ.
  • ADQ works particularly well (although not exclusively) for transaction-oriented applications that use either epoll-based or poller-thread-based packet processing.
  • ADQ's bandwidth control capability is useful when a specific application's egress traffic needs to be guaranteed a minimum transmit rate or restricted to a maximum transmit rate.

The primary performance benefits of both ADQ methods are:

  • ADQ 1.0 accelerates packet processing through kernel busy polling of a network adapter hardware queue. When enabled in the kernel configuration parameters, busy polling is triggered automatically by one of the following two syscalls (see the sketch after this list):
    • epoll_wait (...), where the data is then returned from the epoll object using a read.
    • A blocking read / recv on a given socket whose receive queue is empty.

    Figure: Example Setup for Intel® Ethernet 800 Series Network Adapters with ADQ1.0 illustrates Application 1, Data/Key Caching and the Database Backend using epoll, while Application 2 uses the blocking read/recv syscall method.

  • ADQ 2.0 enables application-agnostic ADQ acceleration by polling device queues directly to minimize interrupts, socket queue context switches, and application threads going through sleep/wakeup cycles.
    • The napi_poll() routine runs as a softirq or as a kernel NAPI thread.

    Figure: Example Setup for Intel® Ethernet 800 Series Network Adapters with ADQ2.0 illustrates how the system polls the NIC device queues and pushes packets to the application socket queue using independent poller threads.
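
The following is a minimal sketch of the two ADQ 1.0 trigger paths, not taken from this guide; the server address, port, and busy-poll time are placeholders, and it assumes the ADQ traffic classes and the busy-polling kernel parameters (net.core.busy_poll / net.core.busy_read) are already configured on the system.

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46   /* value from asm-generic/socket.h */
#endif

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(6379);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr);   /* placeholder server IP */
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    char buf[2048];

    /* Trigger path 1: epoll_wait(). With net.core.busy_poll > 0, the kernel
     * busy polls the queue backing this socket instead of sleeping, and the
     * data is then returned from the ready socket with a read. */
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    if (epoll_wait(epfd, &ev, 1, -1) > 0)
        read(ev.data.fd, buf, sizeof(buf));

    /* Trigger path 2: blocking read/recv on a socket whose receive queue is
     * empty. net.core.busy_read (or a per-socket SO_BUSY_POLL time in
     * microseconds, as set here) controls how long the kernel busy polls
     * before putting the thread to sleep. */
    int busy_poll_usecs = 50;
    setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_poll_usecs, sizeof(busy_poll_usecs));
    recv(fd, buf, sizeof(buf), 0);

    close(epfd);
    close(fd);
    return 0;
}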