Intel® Ethernet Controller E810 Application Device Queues (ADQ)

Configuration Guide

ID 609008
Date 04/03/2023
Version 2.8

Memcached/rpc-perf Clients (non-ADQ) Configuration

The following variables are used in the examples in this section:

$ipaddr The IP address of the interface under test.
$port The Memcached service port on the SUT.
$iface The name of the interface under test.
$threads The number of connections (client threads) per rpc-perf instance.
$duration The length of each test interval, in seconds.
  1. Perform general system OS install and setup.

    The minimum required kernel for Memcached/rpc-perf (non-ADQ) clients is v4.19.18.

    Complete Install OS and Update Kernel (If Needed). Update the kernel on each of the rpc-perf client systems to v4.19.18 or later.
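
    For example, the running kernel version can be confirmed on each client before proceeding:

    uname -r    # must report 4.19.18 or later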

  2. Perform rpc-perf build:
    1. Download the rpc-perf source on each client system.

       git clone https://github.com/twitter/rpc-perf.git

    2. Build and install rpc-perf on each client system (a Rust toolchain with cargo is required; see the sketch after this list).

       cd rpc-perf
       cargo build --release
       cp target/release/rpc-perf /usr/local/bin
       chmod +x /usr/local/bin/rpc-perf
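
    If a Rust toolchain is not already present on the clients, the following is a minimal sketch, assuming internet access and that installing via rustup is acceptable in your environment (any other method of installing a recent Rust toolchain also works):

    # Install the Rust toolchain via rustup (assumption: rustup is permitted on the clients)
    curl https://sh.rustup.rs -sSf | sh -s -- -y
    source "$HOME/.cargo/env"

    # After the copy step above, confirm the binary is reachable on the PATH
    command -v rpc-perf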
  3. Perform Memcached/rpc-perf benchmarking.

    A test run would be one rpc-perf instance per physical client. The sum of the connections that can be spawned for all clients/instances should be greater than or equal to the total number of TC1 queues available at the SUT/Memcached server.

    For example, in the case of a Memcached server with 110 TC1 queues and 10 physical client systems, there would be 10 instances of rpc-perf (one running on each physical client) and 11 threads for each instance.
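
    The per-instance thread count can be derived from the queue and client counts. A minimal sketch of the arithmetic (the shell variable names are illustrative, not part of the test setup):

    # threads per instance = ceiling(total TC1 queues / number of physical clients)
    tc1_queues=110
    num_clients=10
    threads=$(( (tc1_queues + num_clients - 1) / num_clients ))   # 11 for this example
    echo "--clients $threads"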

    Other parameters would include 8-byte key, 64-byte value, 100% read, and 20 million total keys.

    Start an rpc-perf instance on each client using command-line parameters.

    Example:

    rpc-perf --config rpc-perf-memcached.toml --endpoint $ipaddr:$port --interval $duration --clients $threads

    The command must be started concurrently on all clients; a concurrent-launch sketch follows below. These values can also be configured in the TOML file instead of on the command line.
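
    A minimal concurrent-launch sketch, assuming passwordless SSH from a control node and hypothetical client hostnames client01 through client10 (the TOML file must already be present on each client):

    # Launch one rpc-perf instance per physical client in parallel over SSH
    for host in client{01..10}; do
        ssh "$host" "rpc-perf --config rpc-perf-memcached.toml --endpoint $ipaddr:$port --interval $duration --clients $threads" &
    done
    wait   # block until all client instances have finished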

    Example TOML file rpc-perf-memcached.toml (replace <threads> with the number of client threads per instance and <interval> with the length, in seconds, of each test interval):

    [general]
    protocol = "memcache"
    interval = <interval>          # seconds
    windows = 1                    # run for 1 interval
    clients = <threads>
    poolsize = 1                   # each client has 1 connection per endpoint
    tcp_nodelay = false            # do not enable tcp_nodelay
    request_timeout = 1_000_000    # microseconds
    connect_timeout = 1_000_000    # microseconds

    [[keyspace]]
    length = 8                     # 8 byte keys
    count = 20000000               # limit to 20M keys
    weight = 1                     # this keyspace has a weight of 1
    commands = [
        # 100% read (get) workload
        {action = "get", weight = 1},
        {action = "set", weight = 0},
    ]
    values = [
        # value length will always be 64 bytes
        {length = 64, weight = 1},
    ]
  4. Verify that ADQ traffic is on the correct queues.

    While ADQ application traffic is running, watch the ethtool statistics to verify that only the ADQ queues are being used (have significant traffic) and that ADQ traffic is handled with busy polling (pkt_busy_poll). If the non-busy-poll counters (pkt_not_busy_poll) show significant counts, and/or if traffic is not confined to the ADQ queues, recheck the configuration steps carefully.

    watch -d -n 0.5 "ethtool -S $iface | grep busy | column"

    See Verify ADQ Application Traffic and Independent Pollers (If Applicable) for example watch output.
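
    A one-shot summary of the same statistics can also be produced; a minimal sketch, assuming the counter names match the pkt_busy_poll / pkt_not_busy_poll statistics referenced above (exact names can vary with driver version):

    # Sum busy-poll vs. non-busy-poll packet counters across all queues
    ethtool -S $iface | awk '
        /pkt_not_busy_poll/ { not_busy += $2; next }
        /pkt_busy_poll/     { busy += $2 }
        END { printf "pkt_busy_poll=%d  pkt_not_busy_poll=%d\n", busy, not_busy }'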

Note: The ADQ Setup script clears the existing configuration before proceeding with the new ADQ configuration. To clear manually, follow the steps in Clear the ADQ Configuration.