Intel® Core™ Ultra Processor

Datasheet, Volume 1 of 2
Supporting Intel® Core™ Ultra Processor for U/H-series Platforms, formerly known as Meteor Lake

ID        Date          Version   Classification
792044    12/15/2023              Public


Intel® Neural Processing Unit (Intel® NPU)

The NPU IP in the Intel® Core™ Ultra Processor configuration is a Deep Learning accelerator enumerated to the host processor as an integrated PCIe device. It delivers the processing throughput required by Deep Learning applications. The NPU technology targets personal computing devices such as tablets and PCs, enabling AI-based applications and services on power- and performance-sensitive platforms.
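
As an illustration of the enumeration described above, a minimal Linux sketch can locate the NPU on the PCIe bus by scanning sysfs for the Intel vendor ID and the NPU device ID. The device ID 0x7d1d is an assumption taken from the public Linux ivpu accelerator driver for Meteor Lake, not from this datasheet.

```c
/*
 * Sketch: find the integrated NPU on the host PCIe bus via Linux sysfs.
 * NPU_DEVICE_ID is an assumption from the public ivpu driver, not from
 * this datasheet.
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

#define INTEL_VENDOR_ID 0x8086
#define NPU_DEVICE_ID   0x7d1d  /* assumed Meteor Lake NPU device ID */

/* Read a hexadecimal sysfs attribute (e.g., "vendor") for a PCI device. */
static unsigned read_hex_attr(const char *dev, const char *attr)
{
    char path[256];
    unsigned val = 0;
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%x", &val) != 1)
            val = 0;
        fclose(f);
    }
    return val;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;

    if (!d)
        return 1;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        if (read_hex_attr(e->d_name, "vendor") == INTEL_VENDOR_ID &&
            read_hex_attr(e->d_name, "device") == NPU_DEVICE_ID)
            printf("NPU found at PCI address %s\n", e->d_name);
    }
    closedir(d);
    return 0;
}
```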

The functionality of the Intel® NPU is exposed to the host system, where it is enumerated as a PCIe device, via a base set of registers. These registers provide access to the control and data path interfaces and reside in the Host and Processor subsystems of the Intel® NPU. All host communications are consumed by the NPU scheduler, a 32-bit LeonRT micro-controller, which manages the command and response queues and performs runtime management of the IP itself.
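
The register-based control path can be pictured with the hypothetical sketch below, which maps an MMIO BAR of the enumerated device and hands a command to the LeonRT scheduler through a doorbell register. The PCI address, BAR size, and register offsets are illustrative placeholders only; the real layout is defined by the NPU driver interface and is not documented in this section.

```c
/*
 * Hypothetical sketch of the host-side control path: map an NPU MMIO
 * BAR and signal the LeonRT scheduler via a doorbell register. The
 * PCI address, BAR size, and register offsets below are placeholders,
 * not values from this datasheet.
 */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define BAR_SIZE           0x100000  /* assumed size of the mapped region */
#define CMD_QUEUE_DOORBELL 0x0100    /* hypothetical register offset */
#define RESP_QUEUE_STATUS  0x0104    /* hypothetical register offset */

int main(void)
{
    /* resource0 corresponds to BAR0 of the enumerated PCIe function;
     * the bus/device/function address here is illustrative. */
    int fd = open("/sys/bus/pci/devices/0000:00:0b.0/resource0", O_RDWR);
    if (fd < 0) {
        perror("open BAR0");
        return 1;
    }

    volatile uint32_t *mmio = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (mmio == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Ring the (hypothetical) doorbell to hand a command to the scheduler,
     * then poll the (hypothetical) response-queue status register. */
    mmio[CMD_QUEUE_DOORBELL / 4] = 0x1;
    while ((mmio[RESP_QUEUE_STATUS / 4] & 0x1) == 0)
        ;  /* busy-wait for the LeonRT scheduler to post a response */

    munmap((void *)mmio, BAR_SIZE);
    close(fd);
    return 0;
}
```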

The NPU IP's Deep Learning capability is provided by two Neural Compute Engine (NCE) Tiles, both managed by the NPU Scheduler. Each Tile includes a configurable number of Multiply-Accumulate (MAC) engines, purpose-built for Deep Learning workloads, and two Intel® Movidius™ SHAVE DSP processors for optimal processing of custom Deep Learning operations.
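
The behavior of a single MAC engine, the primitive each NCE Tile replicates, can be modeled as the inner loop below. This is a conceptual sketch only; the INT8 operand width and INT32 accumulator are assumptions common to Deep Learning inference hardware, not details taken from this datasheet.

```c
/*
 * Conceptual model of one Multiply-Accumulate (MAC) engine. Behavioral
 * sketch for illustration, not a description of the actual datapath:
 * INT8 operands and an INT32 accumulator are assumed.
 */
#include <stdint.h>
#include <stdio.h>

/* One MAC step: acc += a * b, as used in convolutions and matmuls. */
static inline int32_t mac(int32_t acc, int8_t a, int8_t b)
{
    return acc + (int32_t)a * (int32_t)b;
}

int main(void)
{
    /* Dot product of a weight vector and an activation vector, the core
     * inner loop of a Deep Learning layer, expressed as repeated MACs. */
    int8_t weights[4]     = { 1, -2, 3, 4 };
    int8_t activations[4] = { 5,  6, 7, 8 };
    int32_t acc = 0;

    for (int i = 0; i < 4; i++)
        acc = mac(acc, weights[i], activations[i]);

    printf("dot product = %d\n", acc);  /* 1*5 - 2*6 + 3*7 + 4*8 = 46 */
    return 0;
}
```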

The Intel® NPU in the Intel® Core™ Ultra Processor is configured with 2K MACs per Tile, totaling 4K MACs across both Tiles, and 4 MB of associated near-compute memory.
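
A back-of-the-envelope calculation shows how this MAC count translates into peak throughput, assuming each MAC performs two operations (a multiply and an add) per cycle. The clock frequency is not given in this section; the 1.4 GHz used below is an assumed value for illustration only.

```c
/*
 * Peak-throughput arithmetic from the configuration above:
 * 2048 MACs/Tile x 2 Tiles = 4096 MACs, each performing two operations
 * (multiply + add) per cycle. The clock frequency is NOT given in this
 * section; 1.4 GHz is assumed for illustration.
 */
#include <stdio.h>

int main(void)
{
    const double macs_per_tile = 2048.0;
    const double tiles         = 2.0;
    const double ops_per_mac   = 2.0;    /* multiply + accumulate */
    const double clock_hz      = 1.4e9;  /* assumed, not from datasheet */

    double peak_tops = macs_per_tile * tiles * ops_per_mac * clock_hz / 1e12;
    printf("peak throughput ~= %.1f TOPS at %.1f GHz (assumed clock)\n",
           peak_tops, clock_hz / 1e9);
    return 0;
}
```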