Intel® Ethernet Adapters and Devices User Guide

ID: 705831
Date: 12/05/2024
Classification: Public

Data Center Bridging (DCB)

Data Center Bridging (DCB) is a collection of standards-based extensions to classical Ethernet. It provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified fabric.

DCB is also a hardware-based Quality of Service (QoS) implementation. It uses the VLAN priority tag (802.1p) to classify traffic, which means traffic can be sorted into eight different priorities. It also enables priority flow control (802.1Qbb), which can limit or eliminate the number of dropped packets during network stress. Bandwidth can be allocated to each of these priorities and enforced at the hardware level (802.1Qaz).

DCB includes the following capabilities:

  • Priority-based flow control (PFC; IEEE 802.1Qbb)

  • Enhanced transmission selection (ETS; IEEE 802.1Qaz)

  • Congestion notification (CN)

  • Extensions to the Link Layer Discovery Protocol (LLDP) standard (IEEE 802.1AB) that enable Data Center Bridging Capability Exchange Protocol (DCBX)

Adapter firmware implements LLDP and DCBX protocol agents per IEEE 802.1AB and IEEE 802.1Qaz, respectively.

There are two supported versions of DCBX:

  • CEE Version

  • IEEE Version

Note:

The OS DCBX stack defaults to the CEE version of DCBX, and if a peer is transmitting IEEE TLVs, it will automatically transition to the IEEE version.

For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to http://www.ieee802.org/1/pages/dcbridges.html

Configuring DCB for Windows*

To change this setting in Intel® PROSet:

This setting is found on the Data Center tab of the device’s Device Manager property sheet or in the Data Center panel in Intel® PROSet Adapter Configuration Utility (Intel® PROSet ACU).

You can use Intel PROSet to perform the following tasks:

  • Display status:

    • Enhanced Transmission Selection

    • Priority Flow Control

    • Non-operational status: If the Status indicator shows that DCB is non-operational, there are several possible reasons:

      • DCB is not enabled. Select the checkbox to enable it.

      • One or more of the DCB features is in a non-operational state.

      A non-operational status is most likely to occur when Use Switch Settings is selected or Using Advanced Settings is active. This is generally a result of one or more of the DCB features not being successfully exchanged with the switch. Possible problems include:

      • One of the features is not supported by the switch.

      • The switch is not advertising the feature.

      • The switch or host has disabled the feature (this would be an advanced setting for the host).

  • Disable/enable DCB

  • View troubleshooting information

Note:
  • On X710-based devices running Microsoft Windows, DCB is only supported on NVM version 4.52 and newer. Older NVM versions must be updated before the adapter can support DCB in Windows.

  • If *QOS/DCB is not available, it may be for one of the following reasons:

    • The Firmware LLDP (FW-LLDP) agent was disabled from a pre-boot environment (typically UEFI).

    • The device is based on the Intel® Ethernet Controller X710 and the current link speed is 2.5 Gbps or 5 Gbps (you can check the link speed as shown below).
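
For example, you can check the current link speed from PowerShell; the adapter name below is a placeholder:

# display the name and current link speed of the adapter
Get-NetAdapter -Name "Ethernet 2" | Format-List Name, LinkSpeed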

Hyper-V (DCB and VMQ)

Note:

Configuring a device in the VMQ + DCB mode reduces the number of VMQs available for guest OSes.
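
You can check how many VMQs a device currently exposes with the standard Windows cmdlet; the adapter name below is a placeholder:

# show the VMQ configuration and queue counts for the adapter
Get-NetAdapterVmq -Name "Ethernet 2"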

DCB for Linux*

Intel Ethernet drivers support firmware-based or software-based DCBX in Linux, depending on the underlying PF device. The following table summarizes DCBX support by driver.

Linux Driver    Firmware-Based DCBX    Software-Based DCBX
ice             Supported              Supported
i40e            Supported              Supported
ixgbe           Not supported          Supported

In firmware-based mode, firmware intercepts all LLDP traffic and handles DCBX negotiation transparently for the user. In this mode, the adapter operates in “willing” DCBX mode, receiving DCB settings from the link partner (typically a switch). The local user can only query the negotiated DCB configuration.
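
For example, on distributions that ship the dcb utility from iproute2, you can inspect the settings the firmware agent negotiated; eth0 below is a placeholder:

# show the negotiated ETS configuration
dcb ets show dev eth0

# show which priorities have priority flow control enabled
dcb pfc show dev eth0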

In software-based mode, LLDP traffic is forwarded to the network stack and user space, where a software agent can handle it. In this mode, the adapter can operate in either “willing” or “nonwilling” DCBX mode, and DCB configuration can be both queried and set locally. Software-based mode requires the firmware-based LLDP agent to be disabled and the kernel's CONFIG_DCB option to be enabled.
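
On ice- and i40e-based devices, the firmware agent is typically toggled through an ethtool private flag. The flag names below are the ones used by recent Intel drivers, and eth0 is a placeholder; check the readme for your driver version:

# ice driver: turn the firmware LLDP/DCBX agent off
ethtool --set-priv-flags eth0 fw-lldp-agent off

# i40e driver: the equivalent flag is inverted
ethtool --set-priv-flags eth0 disable-fw-lldp on

# verify the current flag state
ethtool --show-priv-flags eth0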

Note:
  • Only one LLDP/DCBX agent can be active on a single interface at a time.

  • Software-based and firmware-based DCBX modes are mutually exclusive.

  • When the firmware DCBX agent is active, software agents will not be able to receive or transmit LLDP frames. See Firmware Link Layer Discovery Protocol (FW-LLDP), as well as the Linux driver readme in your installation, for information on enabling or disabling the FW-LLDP agent.

  • In software-based DCBX mode, you can configure DCB parameters using software LLDP/DCBX agents that interface with the Linux kernel’s DCB Netlink API. We recommend using OpenLLDP as the DCBX agent when running in software mode. For more information, see the OpenLLDP man pages and https://github.com/intel/openlldp. A brief lldptool sketch follows these notes.

  • For information on configuring DCBX parameters on a switch, please consult the switch manufacturer’s documentation.
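
The following is a minimal software-mode sketch using OpenLLDP's lldptool. The interface name, priority selection, and bandwidth split are illustrative only; adjust them to match your environment:

# enable LLDP transmit and receive on the interface
lldptool -L -i eth0 adminStatus=rxtx

# advertise priority flow control for priorities 3 and 4
lldptool -T -i eth0 -V PFC enableTx=yes enabled=3,4

# map priorities to traffic classes and assign per-class bandwidth (ETS)
lldptool -T -i eth0 -V ETS-CFG enableTx=yes tsa=0:ets,1:ets,2:ets,3:ets,4:ets,5:ets,6:ets,7:ets up2tc=0:0,1:0,2:0,3:1,4:2,5:0,6:0,7:0 tcbw=60,20,20,0,0,0,0,0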

iSCSI Over DCB

Intel Ethernet adapters support iSCSI software initiators that are native to the underlying operating system. Data Center Bridging is most often configured at the switch. If the switch is not DCB capable, the DCB handshake will fail but the iSCSI connection will not be lost.

Note:

DCB does not install in a VM. iSCSI over DCB is only supported in the base OS. An iSCSI initiator running in a VM will not benefit from DCB Ethernet enhancements.

Configuring iSCSI Over DCB in Windows

iSCSI installation includes the installation of the iSCSI DCB Agent (iscsidcb.exe) user mode service. The Microsoft iSCSI Software Initiator enables the connection of a Windows host to an external iSCSI storage array using an Intel Ethernet adapter. Please consult your operating system documentation for configuration details.

To change this setting in Intel PROSet:

This setting is found on the Data Center tab of the device’s Device Manager property sheet or in the Data Center panel in Intel PROSet ACU.

This setting reports the DCB state, operational or non-operational, and provides additional details when it is non-operational.

Note:

On Microsoft Windows Server, if you configure Priority using IEEE, the iSCSI policy may not be created automatically. To create the iSCSI policy manually, use PowerShell* and type:

New-NetQosPolicy -Name "UP4" -PriorityValue8021Action 4 -iSCSI
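
You can then confirm that the policy was created:

# display the iSCSI QoS policy defined above
Get-NetQosPolicy -Name "UP4"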

Using iSCSI over DCB with Intel ANS Teaming

The Intel® iSCSI Agent is responsible for maintaining all packet filters for the purpose of priority tagging iSCSI traffic flowing over DCB-enabled adapters. The iSCSI Agent will create and maintain a traffic filter for an Intel ANS team if at least one member of the team has an “Operational” DCB status. However, if any adapter on the team does not have an “Operational” DCB status, the iSCSI Agent will log an error in the Windows Event Log for that adapter. These error messages notify the administrator of configuration issues that need to be addressed, but they do not affect the tagging or flow of iSCSI traffic for that team unless the message explicitly states that the TC filter has been removed.

See Adapter Teaming for more information on Intel ANS teams.

Configuring iSCSI Over DCB in Linux

Virtually all open source distributions include support for an Open-iSCSI software initiator, and Intel® Ethernet adapters support them. Please consult your distribution's documentation for additional configuration details on its particular Open-iSCSI initiator.
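
As an illustration, a typical Open-iSCSI session setup uses iscsiadm; the portal address below is a placeholder:

# discover iSCSI targets offered by the portal
iscsiadm -m discovery -t sendtargets -p 192.168.100.10

# log in to the discovered target(s)
iscsiadm -m node --login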

Intel® 82599-based adapters support iSCSI within a Data Center Bridging cloud. Used in conjunction with switches and targets that support the iSCSI/DCB application TLV, this solution can provide guaranteed minimum bandwidth for iSCSI traffic between the host and target. It enables storage administrators to segment iSCSI traffic from LAN traffic. Previously, iSCSI traffic within a DCB-supported environment was treated as LAN traffic by switch vendors. Please consult your switch and target vendors to ensure that they support the iSCSI/DCB application TLV.