Intel® Ethernet Adapters and Devices User Guide

Remote Direct Memory Access (RDMA)

Remote Direct Memory Access, or RDMA, allows a network device to transfer data directly to and from application memory on another system, increasing throughput and lowering latency in certain networking environments.

  • Intel® Ethernet 800 Series devices support both iWARP and RoCEv2.

  • Intel® Ethernet X722 Series devices only support iWARP.

The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses UDP.

On devices with RDMA capabilities, RDMA is supported on the following operating systems:

  • Linux*

  • FreeBSD*

  • VMware* ESXi*

  • Microsoft* Windows Server*

To avoid performance degradation from dropped packets, enable link level flow control or priority flow control on all network interfaces and switches.

Note:
  • On systems running a Microsoft Windows Server operating system, enabling *QoS/priority flow control will disable link level flow control.

  • Devices based on the Intel Ethernet 800 Series do not support RDMA when operating in multiport mode with more than 4 ports.

  • On Linux systems, RDMA and link aggregation (LAG, also known as bonding) are not compatible on most devices. If RDMA is enabled, bonding will not be functional.

    • On Intel Ethernet 810 Series devices, RDMA and LAG are compatible if all the following are true:

      • You are using an Intel Ethernet 810 Series device with the latest drivers and NVM installed.

      • RDMA technology is set to RoCEv2.

      • LAG configuration is either active-backup or active-active.

      • Bonding is between two ports within the same device.

      • The QoS configuration of the two ports matches prior to the bonding of the devices.

      See the README inside the Linux ice driver tarball for more information.

RDMA on Linux or FreeBSD

For Intel Ethernet devices that support RDMA on Linux or FreeBSD, use the drivers shown in the following table.

Device                        Linux                          FreeBSD                         Supported Protocols
                              Base Driver    RDMA Driver     Base Driver    RDMA Driver
Intel Ethernet 800 Series     ice            irdma           ice            irdma            RoCEv2, iWARP
Intel Ethernet X722 Series    i40e           irdma           ixl            not supported    iWARP

Basic Installation Instructions

At a high level, installing and configuring RDMA on Linux or FreeBSD consists of the following steps. See the README file inside the appropriate RDMA driver tarball for full details.

  1. Install the base driver.

  2. Install the RDMA driver.

  3. Install and patch any user-mode RDMA libraries. Exact steps will vary by operating system; refer to the RDMA driver README for details.

  4. Enable flow control on your device. Refer to the base driver README for details and supported modes.

  5. If you are using RoCE, enable flow control (PFC or LFC) on your device and on the switch or endpoint it is connected to. See your switch documentation and, for Linux, the Intel® Ethernet 800 Series Linux Flow Control Configuration Guide for RDMA Use Cases for details.

RDMA for Virtualized Environments in Linux

Devices based on the Intel Ethernet 800 Series support RDMA in a Linux VF on supported Windows or Linux hosts. Refer to the README file inside the Linux RDMA driver tarball for more information on how to load and configure RDMA in a Linux VF.

RDMA on Microsoft Windows

RDMA for Network Direct (ND) User-Mode Applications

Network Direct (ND) allows user-mode applications to use RDMA features.

Note:

User-mode applications may have prerequisites such as Microsoft HPC Pack or the Intel MPI Library. Refer to your application documentation for more details.

Supported Operating Systems

Intel® Ethernet User Mode RDMA Provider is supported on Microsoft Windows Server.

RDMA User Mode Installation

Follow the steps below to install user-mode Network Direct features:

  1. Download the software package you want for the release. See Install Windows* Drivers for more information.

    1. If you are installing via the complete driver pack:

      1. In the extracted files from the download, navigate to \APPS\PROSETDX and then the Windows subfolder corresponding to your version of Windows (32-bit or 64-bit).

      2. Inside the Winx64 or Win32 folder, double-click on DxSetup.exe to launch the install wizard.

    2. If you are installing via the separate webpacks for base drivers and Intel® PROSet:

      1. Download and extract the webpack for Intel PROSet.

      2. In the extracted files, double-click on the .exe file to launch the install wizard.

  2. On the Setup Options screen, select Intel® Ethernet User Mode RDMA Provider.

  3. On the RDMA Configuration Options screen, select Enable RDMA routing across IP Subnets if desired. Note that this option is displayed during base driver installation even if user mode RDMA was not selected, as this option is applicable to Network Direct Kernel functionality as well.

  4. If Windows Firewall is installed and active, select Create an Intel® Ethernet RDMA Port Mapping Service rule in Windows Firewall and the networks to which to apply the rule.

    Note:

    If Windows Firewall is disabled or you are using a third party firewall, you will need to add this rule manually.

  5. Continue with driver and software installation.

RDMA Network Direct Kernel (NDK)

RDMA Network Direct Kernel (NDK) functionality is included in the Intel base networking drivers and requires no additional features to be installed.

RDMA Routing across IP Subnets

If you want to allow NDK’s RDMA functionality across subnets, you will need to select Enable RDMA routing across IP Subnets on the RDMA Configuration Options screen during base driver installation. See Install Windows* Drivers for basic installation instructions.

Enabling Priority Flow Control (PFC) on a Microsoft Windows Server

To avoid performance degradation from dropped packets, enable priority flow control (PFC) or link level flow control on all network interfaces and switches.

Note:

On systems running a Microsoft Windows Server operating system, enabling *QoS/priority flow control will disable link level flow control.

Use the following PowerShell* commands to enable PFC on Microsoft Windows Server:

Install-WindowsFeature -Name Data-Center-Bridging -IncludeManagementTools

New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

Enable-NetQosFlowControl -Priority 3

Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 60 -Algorithm ETS

Set-NetQosDcbxSetting -Willing $FALSE

Enable-NetAdapterQos -Name "Slot1 4 2 Port 1"
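
To confirm the resulting QoS state after running these commands, you can query it with the standard DCB cmdlets. This is an optional verification sketch; the adapter name below is a placeholder for your own interface name.

Get-NetQosPolicy                             # lists the "SMB" policy created above
Get-NetQosFlowControl                        # shows which priorities have PFC enabled
Get-NetQosTrafficClass                       # shows the ETS traffic classes
Get-NetAdapterQos -Name "Slot1 4 2 Port 1"   # shows DCB/QoS state for the adapter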

Verifying RDMA Operation with Microsoft PowerShell

You can check that RDMA is enabled on the network interfaces using the following Microsoft PowerShell command:

Get-NetAdapterRDMA
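
If the output shows that RDMA is disabled on an interface, you can enable it with the following command. The adapter name is a placeholder; substitute the name reported by Get-NetAdapterRDMA:

Enable-NetAdapterRdma -Name "<adapter_name>"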

Use the following PowerShell command to check if the network interfaces are RDMA capable and multichannel is enabled:

Get-SmbClientNetworkInterface

Use the following PowerShell command to check if Network Direct is enabled in the operating system:

Get-NetOffloadGlobalSetting | Select NetworkDirect

Use netstat to make sure each RDMA-capable network interface has a listener at port 445 (Windows Client OSs that support RDMA may not post listeners). For example:

netstat.exe -xan | ? {$_ -match "445"}

RDMA for Virtualized Environments in Windows

To enable RDMA functionality on virtual adapter(s) connected to a VMSwitch, you must:

  • Enable SR-IOV (Single Root IO Virtualization) and VMQ (Virtual Machine Queues) advanced properties on each port.

  • Set the number of VFs to enable with RDMA capabilities. You can enable up to 32 RDMA-capable VFs.

Under certain circumstances, these settings may be disabled by default. You can set these options manually in the Adapter Settings panel of Intel® PROSet Adapter Configuration Utility (Intel® PROSet ACU), on the Advanced tab of the adapter properties dialog box, or with the following PowerShell commands:

Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *SRIOV -RegistryValue 1
Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *VMQ -RegistryValue 1
Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword RdmaMaxVfsEnabled -RegistryValue <1-32>
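
To confirm the current values of these advanced properties before and after the change, you can query them with the following command. This is an optional check using the same registry keywords as above:

Get-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword "*SRIOV","*VMQ","RdmaMaxVfsEnabled"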

Configuring RDMA Guest Support (NDK Mode 3)

NDK Mode 3 allows kernel mode Windows components to use RDMA features inside Hyper-V guest partitions. To enable NDK mode 3 on an Intel Ethernet device, do the following:

  1. Enable SR-IOV in your system’s BIOS or UEFI.

  2. Enable the SR-IOV advanced setting on the device.

  3. Enable SR-IOV on the VMSwitch bound to the device by performing the following for all physical functions on the same device:

    New-VMSwitch -Name <switch_name> -NetAdapterName <device_name> -EnableIov $true
    
  4. Configure the number of RDMA virtual functions (VFs) on the device by setting the RdmaMaxVfsEnabled advanced setting. All physical functions must be set to the same value. The value is the maximum number of VFs that can be capable of RDMA at one time for the entire device. Enabling more VFs will restrict RDMA resources from physical functions (PFs) and other VFs:

    Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaMaxVfsEnabled -RegistryValue <Value: 0 - 32>
    
  5. Disable all PF adapters on the host and re-enable them. This is required when the registry keyword RdmaMaxVfsEnabled is changed or when creating or destroying a VMSwitch:

    Get-NetAdapterRdma | Disable-NetAdapter
    Get-NetAdapterRdma | Enable-NetAdapter
    
  6. Create VM Network Adapters for VMs that require RDMA VF support:

    Add-VMNetworkAdapter -VMName <vm_name> -Name <device_name> -SwitchName <switch_name>
    
  7. If you plan to use Microsoft Windows 10 Creators Update (RS2) or later on a guest partition, set the RDMA weight on the VM Network Adapter by entering the following command on the host:

    Set-VMNetworkAdapterRdma -VMName <vm_name> -VMNetworkAdapterName <device_name> -RdmaWeight 100
    
  8. Set SR-IOV weight on the VM Network Adapter (Note: SR-IOV weight must be set to 0 before setting the RdmaWeight to 0):

    Set-VMNetworkAdapter -VMName <vm_name> -Name <device_name> -IovWeight 100
    
  9. Install the VF network adapter with the Intel PROSet Installer in the VM.

  10. Enable RDMA on the VF driver and Hyper-V Network Adapter using PowerShell in the VM:

    Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaVfEnabled -RegistryValue 1
    Get-NetAdapterRdma | Enable-NetAdapterRdma
    
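As an optional final check, you can confirm from inside the VM that the Hyper-V network adapter reports RDMA capability using the standard verification cmdlets:

Get-NetAdapterRdma
Get-SmbClientNetworkInterface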

RDMA for NDK Features such as SMB Direct (Server Message Block)

NDK allows Windows components (such as SMB Direct storage) to use RDMA features.

Testing NDK: Microsoft Windows SMB Direct with DiskSPD

This section outlines the recommended way to test RDMA for Intel Ethernet functionality and performance on Microsoft Windows operating systems.

Note that since SMB Direct is a storage workload, benchmark performance may be limited by the speed of the storage device rather than by the network interface being tested. Intel recommends using the fastest storage possible to test the true capabilities of the network device(s) under test.

Test instructions:

  1. Set up and connect at least two servers running a supported Microsoft Windows Server operating system, with at least one RDMA-capable Intel Ethernet device per server.

  2. On the system designated as the SMB server, set up an SMB share. Note that benchmark performance may be limited by the speed of the storage device rather than by the network interface being tested. Storage setup is outside the scope of this document. You can use the following PowerShell command:

    New-SmbShare -Name <SMBsharename> -Path <SMBsharefilepath> -FullAccess <domainname>\Administrator,Everyone
    

    For example:

    New-SmbShare -Name RAMDISKShare -Path R:\RAMDISK -FullAccess group\Administrator,Everyone
    
  3. Download and install the Microsoft DiskSpd utility from here: https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223

  4. Using CMD or PowerShell, use the cd command to change to the DiskSpd folder, and then run tests. (Refer to the DiskSpd documentation for more details on parameters.)

    For example: Set the block size to 4K, run the test for 60 seconds, disable all hardware and software caching, measure and display latency statistics, use 16 overlapped I/Os and 16 threads per target, perform 100% random reads (0% writes), and create a 10 GB test file at \\<SMBserverTestIP>\<SMBsharename>\test.dat:

    .\diskspd.exe -b4K -d60 -h -L -o16 -t16 -r -w0 -c10G \\<SMBserverTestIP>\<SMBsharename>\test.dat
    
  5. Verify that RDMA traffic is running using perfmon counters such as “RDMA Activity” and “SMB Direct Connection”. Refer to Microsoft documentation for more details.
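
    For a command-line spot check while the test is running, you can also sample the RDMA Activity performance counters with Get-Counter. The counter paths below are typical examples and may differ on your system; list the available counters first to confirm the exact names:

    Get-Counter -ListSet "RDMA Activity" | Select-Object -ExpandProperty Counter
    Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec","\RDMA Activity(*)\RDMA Outbound Bytes/sec" -SampleInterval 5 -MaxSamples 12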

RDMA Windows Performance Monitoring

You can use perfmon or another performance monitoring tool to monitor and display RDMA counters and statistics. Refer to Microsoft documentation for more details. Use the Register-IntelEthernetRDMACounterSet cmdlet to register the RDMA statistics counters for the specific device with perfmon. Refer to Configuring Features with Windows PowerShell* for more information about how to install and use Intel cmdlets. You can use the following PowerShell command to register the RDMA statistics for all supported devices:

Register-IntelEthernetRDMACounterSet

You can use the following PowerShell cmdlet to unregister the RDMA statistics:

Unregister-IntelEthernetRDMACounterSet