Intel® Ethernet 800 Series

Linux Performance Tuning Guide


​IRQ Affinity

Configuring IRQ affinity so that interrupts for the hardware Tx/Rx queues are affinitized to the proper CPU cores can have a huge impact on performance, particularly in multi-threaded workloads. Proper queue affinitization is even more important when configurations include multiple high-speed ports. Intel provides a utility to configure IRQ affinity, the set_irq_affinity script, which is included with the ice driver source package.

The efficiency benefit of pinning to local CPUs versus all cores varies between workloads.

Using the set_irq_affinity script from the ice source package (recommended):

  • To use all cores: <path-to-ice-package>/scripts/set_irq_affinity -X all ethX
  • To use only cores on the local NUMA socket (a quick check for the port's local node follows this list): <path-to-ice-package>/scripts/set_irq_affinity -X local ethX
  • You can also select a range of cores. Avoid using cpu0 because it runs timer tasks: <path-to-ice-package>/scripts/set_irq_affinity -X 1-8,16-24 ethX
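
Before using the local option, you can confirm which NUMA node is local to the port and which cores belong to that node. A minimal check, assuming ethX is a PCI network device (a numa_node value of -1 means the platform does not report a NUMA node for the device):

    cat /sys/class/net/ethX/device/numa_node
    numactl --hardware
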
Note: The affinity script enables Transmit Packet Steering (XPS) as part of the pinning process when the -x option is specified. When XPS is enabled, Intel recommends that you disable irqbalance, as the kernel balancer with XPS can cause unpredictable performance.

The affinity script disables XPS when the -X option is specified. Disabling XPS and enabling symmetric queues is beneficial for workloads where best performance is achieved when Tx and Rx traffic are serviced on the same queue pair(s). For more details, see Transmit and Receive Queue Alignment.
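
To verify whether XPS is currently enabled on a port, the per-queue XPS CPU masks can be read from sysfs; an all-zero mask means XPS is disabled for that Tx queue. A quick check, with ethX as a placeholder for the interface name:

    grep -H . /sys/class/net/ethX/queues/tx-*/xps_cpus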

The effect of irqbalance in your configuration depends on the workload and platform configuration. However, when irqbalance is enabled, it can cause unpredictable performance measurements because the kernel can move work to different cores.

Disabling user-space IRQ balancer to enable queue pinning:

    systemctl disable irqbalance
    systemctl stop irqbalance
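
To confirm that irqbalance is no longer running and will not start at boot, query its status with systemd; both commands should report that it is inactive and disabled after the steps above:

    systemctl is-active irqbalance
    systemctl is-enabled irqbalance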

Manually configuring IRQ affinity:

  • Find the processors attached to each node using:

      numactl --hardware
      lscpu

  • Find the bit masks for each of the processors (hexadecimal, one bit per core):

    Assuming cores 0-11 for node 0: [1,2,4,8,10,20,40,80,100,200,400,800]

  • Find the IRQs assigned to the port being configured:

      grep ethX /proc/interrupts

    Note the IRQ values. For example, 181-192 for the 12 vectors loaded.

  • Echo the SMP affinity value into the corresponding IRQ entry. Note that this needs to be done for each IRQ entry (a scripted version of this step is sketched after this list):

      echo 1 > /proc/irq/181/smp_affinity
      echo 2 > /proc/irq/182/smp_affinity
      echo 4 > /proc/irq/183/smp_affinity
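
The manual steps above can also be scripted. The following is a minimal sketch (not part of the ice package) that pins each IRQ vector found for ethX to one core in sequence, starting at core 1 so that cpu0 is skipped. The interface name, starting core, and the one-core-per-vector layout are assumptions to adapt to your system, and root privileges are required to write under /proc/irq:

    # Pin each ethX IRQ vector to a single core, starting at core 1.
    core=1
    for irq in $(grep ethX /proc/interrupts | cut -d: -f1 | tr -d ' '); do
        # Write the single-core hexadecimal bit mask for this vector.
        printf '%x' $((1 << core)) > /proc/irq/$irq/smp_affinity
        core=$((core + 1))
    done

A single hexadecimal chunk like this covers cores 0-31; on systems with more cores, writing the core number to /proc/irq/<IRQ>/smp_affinity_list is simpler.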

Showing IRQ affinity:

  • To show the IRQ affinity for all cores: <path-to-ice-package>/scripts/set_irq_affinity -s ethX
  • To show only cores on the local NUMA socket: <path-to-ice-package>/scripts/set_irq_affinity -s local ethX
  • You can also select a range of cores: <path-to-ice-package>/scripts/set_irq_affinity -s 0-8,16-24 ethX
Note: The set_irq_affinity script supports the -s flag in ice driver version 1.6.X and later.
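
On ice driver versions older than 1.6.X, the current affinity can still be read directly from /proc. For example, for the IRQ range noted earlier (181-192 is only an example; substitute the IRQs reported for your port):

    for irq in $(seq 181 192); do cat /proc/irq/$irq/smp_affinity_list; done

Each line lists the core(s) currently servicing that vector.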