Intel® Core™ Ultra 200S and 200HX Series Processors

Datasheet, Volume 1 of 2

ID       Date         Version   Classification
832586   12/02/2025             Public

Intel® Neural Processing Unit (Intel® NPU)

The NPU IP in the Intel® Core™ Ultra 200S and 200HX Series Processors is a Deep Learning accelerator enumerated to the host processor as an integrated PCIe device. It delivers the processing throughput required by demanding Deep Learning applications. The NPU technology targets personal computing devices such as tablets and PCs, enabling AI-based applications and services on power- and performance-sensitive platforms.

The functionality of the Intel® NPU is exposed to the host system (enumerated as a PCIe device) through a base set of registers. These registers provide access to the control and data path interfaces and reside in the Host and Processor subsystems of the Intel® NPU. All host communications are consumed by the Intel® NPU scheduler, a 32-bit LeonRT microcontroller. The LeonRT manages the command and response queues and handles runtime management of the IP itself.
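The register-level interface is not documented here, but the submit/consume pattern behind the command and response queues can be sketched with a hypothetical host-side model. All names, the queue depth, and the response format below are illustrative assumptions, not the actual interface:

```python
from collections import deque

class CommandQueue:
    """Hypothetical model of the NPU command/response queue pattern.

    The real interface is a set of PCIe-mapped registers consumed by
    the LeonRT scheduler; this sketch only illustrates the flow.
    """

    def __init__(self, depth=16):
        self.depth = depth          # assumed queue depth, for illustration
        self.commands = deque()     # host -> scheduler
        self.responses = deque()    # scheduler -> host

    def submit(self, cmd):
        """Host side: enqueue a command for the scheduler."""
        if len(self.commands) >= self.depth:
            raise RuntimeError("command queue full")
        self.commands.append(cmd)

    def scheduler_step(self):
        """Models the LeonRT consuming one command and posting a response."""
        if self.commands:
            cmd = self.commands.popleft()
            self.responses.append({"cmd": cmd, "status": "ok"})

    def poll(self):
        """Host side: retrieve the next response, if any."""
        return self.responses.popleft() if self.responses else None

q = CommandQueue()
q.submit("run_inference")
q.scheduler_step()
print(q.poll())  # {'cmd': 'run_inference', 'status': 'ok'}
```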

The NPU IP's Deep Learning capability is provided by two Neural Compute Engine (NCE) Tiles, both managed by the NPU Scheduler. Each Tile includes a configurable number of Multiply Accumulate (MAC) engines, purpose-built for Deep Learning workloads, and two Intel® Movidius SHAVE DSP processors for optimal processing of custom Deep Learning operations.
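The core operation of a MAC engine is a multiply followed by an accumulate, applied across an input pair. A minimal illustration of that loop follows; the wide integer accumulator for INT8 inputs is a common accelerator convention, not a documented detail of this IP:

```python
def mac_dot(a, b):
    """Illustrative multiply-accumulate loop: one multiply and one add
    per element pair, which is the operation each MAC engine performs.

    With INT8 inputs, accelerators typically accumulate into a wider
    integer to avoid overflow; Python's arbitrary-precision ints model
    that here."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

print(mac_dot([1, -2, 3], [4, 5, -6]))  # 1*4 + (-2)*5 + 3*(-6) = -24
```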

The Intel® NPU in Intel® Core™ Ultra 200S and 200HX Series Processors is configured with 2K MACs per tile, for a total of 4K MACs across both tiles, and 4 MB of associated near-compute memory.
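The configuration above supports a back-of-envelope peak-throughput estimate. The clock frequency below is an assumed placeholder for illustration, not a datasheet value:

```python
# Peak MAC throughput from the configuration above.
macs_per_tile = 2048                 # "2K MACs per tile"
tiles = 2
total_macs = macs_per_tile * tiles   # 4096, the "4K MACs" total

ops_per_mac = 2                      # one multiply + one add per cycle
freq_hz = 1.0e9                      # ASSUMED 1 GHz clock, illustrative only

peak_tops = total_macs * ops_per_mac * freq_hz / 1e12
print(total_macs, peak_tops)  # 4096 8.192  (TOPS at the assumed clock)
```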

The NPU plugin supports the following data types as inference precision for internal primitives: INT8 (I8/U8) and FP16.