Intel® Ethernet 800 Series

Linux Performance Tuning Guide

ID: 636781
Date: 07/19/2025
Classification: Public

Transmit Scheduling Layers

The 800 Series devices allow you to enable the Transmit Scheduling Layers feature to improve transmit queue fairness under certain conditions. When the feature is enabled (value set to 5), you should see more consistent transmit performance across queues, PFs, and VFs. The default is a 9-layer Tx scheduler.

Note: This feature was previously referred to as "Tx Balancing" in the out-of-tree ice driver. With ice 1.17.x and later, the feature has been renamed "Tx Scheduling Layers" to align with the upstream Linux driver.

The default Tx queue assignment is allocated in groups of eight at the highest layer and eight VFs per parent node at the VSI layer. Tx operations are scheduled at each layer across each of the nodes in the layer.

Example tree with Tx Scheduling Layers set to 9 (default), using a single 800 Series port with 80 queues.

  • Minimum guaranteed bandwidth is equally allocated based on the grouping at each layer.
  • Maximum bandwidth depends on the location of adjacent threads.

Default Tx Tree

With Transmit Scheduling Layers set to 5, the Tx tree architecture is flattened to allow up to 512 queues allocated to the highest level and up to 64 VFs in a parent node. This change does not reduce the overall number of queues the device supports.

Example tree with Tx Scheduling Layers set to 5, using a single 800 Series port with 80 queues.

  • Minimum guaranteed bandwidth is equally allocated across all queues.
  • Maximum bandwidth depends on the location of adjacent threads.

Tx Scheduling Layers = 5 (Default = 9)

By default, Transmit Scheduling Layers is set to a value of 9 in the NVM. Setting it to 5 is recommended in environments that run highly scaled workloads and show uneven transmit performance between threads or VM groups. Not all configurations and workloads see the same benefit, because the effect depends heavily on the number of Tx queues, how the default Tx tree scheduler maps them, the workload and Tx traffic flows, whether Hyper-Threading is enabled, the number of VFs, and so on. If you observe uneven Tx bandwidth in your environment, set Transmit Scheduling Layers to 5 (previously the txbalance feature = true) to see whether it distributes Tx traffic flows more evenly.

To enable the Transmit Scheduling Layers feature, use one of the following methods to persistently change the setting for the device:

  • Use the Ethernet Port Configuration Tool (EPCT) to select the 5-layer Tx scheduler (for ice driver 1.16.x and earlier: enable the tx_balancing feature). Refer to the EPCT README file for more information.
  • Set Transmit Scheduling Layers to a value of 5 in the system BIOS or in UEFI HII (for ice 1.16.x and earlier, enable the "Transmit Balancing" feature).
  • Set Transmit Scheduling Layers to a value of 5 (txbalancing = true with ice 1.16.x and earlier) via the Linux devlink command (see below).

When the driver loads, it reads the Transmit Scheduling Layers setting from the NVM and configures the device accordingly.

Notes:
  • The user selection for Transmit Scheduling Layers in EPCT, system BIOS, HII, or Linux devlink is a persistent setting in the firmware of the device.
  • You must reboot the system for the selected setting to take effect.
  • This setting is specific to each physical function (PF).
  • The driver, NVM, and DDP package must all support this functionality for the feature to be enabled. If an earlier driver or NVM version that does not support Transmit Scheduling Layers (previously the Transmit Balancing feature) is installed, the firmware loads the default settings (tx_scheduling_layers set to 9, or transmit balancing disabled, depending on the ice driver version).

Requirements

  • Kernel with devlink params support
  • 800 Series network adapter
  • E810:
    • NVM v4.00 or later
    • ice driver version 1.9.11 or later (txbalancing)
    • ice driver version 1.17.x or later (tx_scheduling_layers)
  • E830:
    • NVM 1.00 or later
    • ice driver 2.1.8 or later (tx_scheduling_layers)
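As a quick sanity check against the requirements above, the driver and NVM versions of a port can be read with standard tools. This is a minimal sketch; the interface name eth0 is a placeholder for your 800 Series port.

```shell
# Report driver name, driver version, and firmware (NVM) version for a port.
# Replace eth0 with the actual interface name of the 800 Series port.
ethtool -i eth0

# Alternatively, query the installed ice module metadata directly.
# (The out-of-tree ice driver reports a version field; the in-tree
# driver may not.)
modinfo ice | grep -i '^version'
```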

Configuration

  • To set the Transmit Scheduling Layers feature via devlink:

    ice 1.16.x and earlier (txbalancing): devlink dev param set <pci/D:b:d.f> name txbalancing value <setting> cmode permanent

    ice 1.17.x and later (tx_scheduling_layers): devlink dev param set <pci/D:b:d.f> name tx_scheduling_layers value <5/9> cmode permanent
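For example, with ice 1.17.x or later on a PF at a hypothetical PCI address 0000:17:00.0 (substitute your own address), the command might look like this:

```shell
# Persistently select the 5-layer Tx scheduler topology in the NVM.
# 0000:17:00.0 is an illustrative PCI address; find yours with ethtool -i.
devlink dev param set pci/0000:17:00.0 \
    name tx_scheduling_layers value 5 cmode permanent

# A reboot is required before the new topology takes effect.
```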

    Where <pci/D:b:d.f> is the PCI address of the PF.

  • Reboot the system for the changes to the Transmit Scheduling Layers setting to take effect.
  • To show the current Transmit Scheduling Layers setting:

    ice 1.16.x and earlier (txbalancing): devlink dev param show <pci/D:b:d.f> name txbalancing

    ice 1.17.x and later (tx_scheduling_layers): devlink dev param show <pci/D:b:d.f> name tx_scheduling_layers
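Continuing the sketch with the same hypothetical address, the persistent value can be verified before and after the reboot:

```shell
# Display the tx_scheduling_layers parameter for the PF.
# 0000:17:00.0 is a placeholder PCI address.
devlink dev param show pci/0000:17:00.0 name tx_scheduling_layers
```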

  • To view the device's transmit queue topology:
    1. Clear dmesg: dmesg -C
    2. Dump the Tx tree for the specific device port, using its PCI address in the form 0000:xx:xx.x (use ethtool -i to find the PCIe bus details for the port): echo get scheduling tree topology > /sys/kernel/debug/ice/0000:xx:xx.x/command
    3. Pipe dmesg to output.txt: dmesg > output.txt
    4. Search the capture for node entries: grep Node output.txt

      The specific output depends on the VSI instances configured in your environment. Note the values of Node, Parent, and num_children. With the default configuration, the maximum num_children allocation is eight per parent. With the feature enabled, the limit is 512.
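The steps above can be combined into one short sequence. This is a sketch only: the PCI address is a placeholder, the commands require root, and the debugfs command interface shown is the one provided by the out-of-tree ice driver as described above.

```shell
# Dump and inspect the Tx scheduling tree for a hypothetical port
# at PCI address 0000:17:00.0 (substitute the address from ethtool -i).
dmesg -C                                   # clear the kernel ring buffer
echo get scheduling tree topology > /sys/kernel/debug/ice/0000:17:00.0/command
dmesg > output.txt                         # capture the dumped tree
grep Node output.txt                       # list Node, Parent, num_children
```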