Performance Index

ID: 615781
Date: 12/05/2024
Classification: Public

Intel® Core™ Ultra Processors

Each entry below lists: Series, Use Case, Claim, Processor, Systems Measured, Measurement, and Measurement Period.
Series: H
Use case: H.265 (HEVC) decode performance
Claim: Intel® Core™ Ultra 7 processor 155H is up to 3.4X faster than the NVIDIA Jetson AGX Orin in media performance and delivers up to 3.7X the media performance/watt.
Processor: Intel® Core™ Ultra 7 processor 155H
System 1: Intel® Core™ Ultra 7 155H. CPU: 16 cores. GPU: built-in Intel® Arc™ GPU with 8 Xe cores. NPU: Intel® AI Boost. Memory: 2x 16 GB DDR5-5600. Storage: 1024 GB SSD. OS: Ubuntu 22.04.4 LTS.
System 2: NVIDIA Jetson AGX Orin 64GB. CPU: 12-core Arm Cortex-A78AE v8.2 64-bit, 3 MB L2 + 6 MB L3, @ 2.2 GHz. GPU: NVIDIA Ampere architecture with 2048 NVIDIA® CUDA® cores and 64 Tensor cores @ 1.3 GHz. Memory: 64 GB 256-bit LPDDR5 @ 204.8 GB/s. Storage: 64 GB eMMC 5.1. OS: Ubuntu 22.04.4 LTS.
Measurement (System 1): OS BKC: MTL (Ubuntu 22.04.4 LTS with kernel 6.7.10). Workload and version: OpenVINO 2024.2.0 (from archives, Linux) and DLStreamer 2024.1.0. Compiler: GCC 11.4.0. CPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. GPU and NPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. NPU driver: public v1.5.0. OpenCL compute runtime version: 2024.18.6.0.02. Target IP: Media IP (2x VD-Box).
Measurement (System 2): OS BKC: Orin (Ubuntu 20.04.5 LTS (5.10), Ubuntu 22.04.4 LTS (5.15.136-tegra)). Workload and version: JetPack 5.0.2, JetPack 6.0 Rev 2. Compiler: GCC 11.4.0. CUDA version: 11.4.x, 12.2. Target IP: Media IP.
Test cases: HEVC 1080p30 at 2 Mbps; HEVC 4K30 at 10 Mbps.
Measurement period: As of July 2024.
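For illustration of the decode test cases above: the following is a minimal, hypothetical sketch of measuring HEVC decode throughput on Linux with GStreamer's VA decode path. The clip name and the vah265dec element choice are assumptions, and this is not the harness behind the published numbers, which used the OpenVINO/DLStreamer stack listed in the measurement details.

```python
# Hypothetical sketch: count decoded frames per second for an HEVC clip.
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

CLIP = "hevc_1080p30_2mbps.mp4"  # placeholder clip matching the 1080p30 @ 2 Mbps test case
pipeline = Gst.parse_launch(
    f"filesrc location={CLIP} ! qtdemux ! h265parse ! vah265dec ! "
    "fakesink name=sink sync=false"  # sync=false lets the decoder run as fast as it can
)

frames = 0
def count_buffer(pad, info):
    global frames
    frames += 1
    return Gst.PadProbeReturn.OK

sink = pipeline.get_by_name("sink")
sink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, count_buffer)

start = time.time()
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
elapsed = time.time() - start
pipeline.set_state(Gst.State.NULL)
print(f"decoded {frames} frames in {elapsed:.1f} s -> {frames / elapsed:.1f} fps")
```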
Series: H
Use case: Video analytics end-to-end AI pipeline performance
Claim: Intel® Core™ Ultra 7 processor 155H is up to 2.7X faster than the NVIDIA Jetson AGX Orin in video analytics end-to-end AI pipeline performance and delivers up to 2.9X the end-to-end performance/watt.
Processor: Intel® Core™ Ultra 7 processor 155H
System 1: Intel® Core™ Ultra 7 155H. CPU: 16 cores. GPU: built-in Intel® Arc™ GPU with 8 Xe cores. NPU: Intel® AI Boost. Memory: 2x 16 GB DDR5-5600. Storage: 1024 GB SSD. OS: Ubuntu 22.04.4 LTS.
System 2: NVIDIA Jetson AGX Orin 64GB. CPU: 12-core Arm Cortex-A78AE v8.2 64-bit, 3 MB L2 + 6 MB L3, @ 2.2 GHz. GPU: NVIDIA Ampere architecture with 2048 NVIDIA® CUDA® cores and 64 Tensor cores @ 1.3 GHz. Memory: 64 GB 256-bit LPDDR5 @ 204.8 GB/s. Storage: 64 GB eMMC 5.1. OS: Ubuntu 22.04.4 LTS.
Measurement (System 1): OS BKC: MTL (Ubuntu 22.04.4 LTS with kernel 6.7.10). Workload and version: OpenVINO 2024.2.0 (from archives, Linux) and DLStreamer 2024.1.0. Compiler: GCC 11.4.0. CPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. GPU and NPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. NPU driver: public v1.5.0. OpenCL compute runtime version: 2024.18.6.0.02. Target IPs: Media, iGPU, NPU.
Measurement (System 2): OS BKC: Orin (Ubuntu 20.04.5 LTS (5.10), Ubuntu 22.04.4 LTS (5.15.136-tegra)). Workload and version: JetPack 5.0.2, JetPack 6.0 Rev 2. Compiler: GCC 11.4.0. CUDA version: 11.4.x, 12.2. Target IPs: Media, TC, 2x DLA.
Test cases: End-to-end AI pipeline (WL1): media decode + pre-processing + detection (YOLOv5s 640x640 @ 10 fps) + tracking + pre-processing + two classification models (ResNet50 @ 10 inf/s + MobileNet-V2 @ 10 inf/s). End-to-end AI pipeline (WL2): media decode + pre-processing + detection (YOLOv5s 640x640 @ 5 fps) + tracking + pre-processing + two classification models (ResNet50 @ 5 inf/s + MobileNet-V2 @ 5 inf/s).
Data was collected at different batch sizes for input videos (1080p or 4K), and the maximum performance was listed for each system. Measured KPIs are the number of streams and streams/watt.
Measurement period: As of July 2024.
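As a rough illustration of the pipeline composition in this entry (media decode, pre-processing, detection, tracking, and two classification models), below is a minimal, hypothetical Intel DL Streamer launch string driven from Python. The model IR paths, source clip, and device assignments (GPU/NPU) are placeholders, and this is not the exact pipeline behind the published stream counts.

```python
# Hypothetical sketch of a DL Streamer decode -> detect -> track -> classify pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPELINE = (
    "filesrc location=camera_1080p.mp4 ! decodebin ! "    # media decode
    "gvadetect model=yolov5s_640x640.xml device=GPU ! "   # detection
    "gvatrack tracking-type=short-term-imageless ! "      # tracking
    "gvaclassify model=resnet50.xml device=GPU ! "        # classification model 1
    "gvaclassify model=mobilenet-v2.xml device=NPU ! "    # classification model 2 (NPU assumed available)
    "gvafpscounter ! fakesink sync=false"                 # report throughput
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The published KPI is then the number of such streams each system can sustain at the target per-stream rate (10 fps for WL1, 5 fps for WL2).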
Series: H
Use case: Video analytics end-to-end AI pipeline performance
Claim: Intel® Core™ Ultra 7 processor 155H delivers up to 8.3X better performance/watt/$ than the NVIDIA Jetson AGX Orin in video analytics end-to-end AI pipeline performance.
Processor: Intel® Core™ Ultra 7 processor 155H
System 1: Intel® Core™ Ultra 7 155H. CPU: 16 cores. GPU: built-in Intel® Arc™ GPU with 8 Xe cores. NPU: Intel® AI Boost. Memory: 2x 16 GB DDR5-5600. Storage: 1024 GB SSD. OS: Ubuntu 22.04.4 LTS.
System 2: NVIDIA Jetson AGX Orin 64GB. CPU: 12-core Arm Cortex-A78AE v8.2 64-bit, 3 MB L2 + 6 MB L3, @ 2.2 GHz. GPU: NVIDIA Ampere architecture with 2048 NVIDIA® CUDA® cores and 64 Tensor cores @ 1.3 GHz. Memory: 64 GB 256-bit LPDDR5 @ 204.8 GB/s. Storage: 64 GB eMMC 5.1. OS: Ubuntu 22.04.4 LTS.
Measurement (System 1): OS BKC: MTL (Ubuntu 22.04.4 LTS with kernel 6.7.10). Workload and version: OpenVINO 2024.2.0 (from archives, Linux) and DLStreamer 2024.1.0. Compiler: GCC 11.4.0. CPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. GPU and NPU plugin version: 2024.2.0-15519-5c0f38f83f6-releases/2024/2. NPU driver: public v1.5.0. OpenCL compute runtime version: 2024.18.6.0.02. Target IPs: Media, iGPU, NPU.
Measurement (System 2): OS BKC: Orin (Ubuntu 20.04.5 LTS (5.10), Ubuntu 22.04.4 LTS (5.15.136-tegra)). Workload and version: JetPack 5.0.2, JetPack 6.0 Rev 2. Compiler: GCC 11.4.0. CUDA version: 11.4.x, 12.2. Target IPs: Media, TC, 2x DLA.
Test cases: End-to-end AI pipeline (WL1): media decode + pre-processing + detection (YOLOv5s 640x640 @ 10 fps) + tracking + pre-processing + two classification models (ResNet50 @ 10 inf/s + MobileNet-V2 @ 10 inf/s). End-to-end AI pipeline (WL2): media decode + pre-processing + detection (YOLOv5s 640x640 @ 5 fps) + tracking + pre-processing + two classification models (ResNet50 @ 5 inf/s + MobileNet-V2 @ 5 inf/s).
Data was collected at different batch sizes for input videos (1080p or 4K), and the maximum performance was listed for each system. Measured KPIs are the number of streams, streams/watt, and streams/watt/$.
Measurement period: As of July 2024.
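The distinguishing KPI in this entry is a derived metric. A minimal sketch of how streams/watt and streams/watt/$ follow from a measured stream count, average platform power, and system price is shown below; all inputs are placeholders, not the measured values behind the 8.3X claim.

```python
# Hypothetical sketch of the derived KPIs; inputs are placeholders, not measured data.
def video_analytics_kpis(streams: int, avg_power_w: float, system_price_usd: float):
    streams_per_watt = streams / avg_power_w
    streams_per_watt_per_dollar = streams_per_watt / system_price_usd
    return streams_per_watt, streams_per_watt_per_dollar

spw, spwd = video_analytics_kpis(streams=8, avg_power_w=30.0, system_price_usd=500.0)
print(f"streams/W = {spw:.3f}, streams/W/$ = {spwd:.6f}")
```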
Series: P
Use case: Performance
Claim: OpenVINO 2024.1 delivers up to a 2x performance gain over OpenVINO 2023.3 LTS for Llama-2-7b-chat on the built-in GPU of the Intel® Core™ Ultra 7 processor 165H.
Processor: Intel® Core™ Ultra 7 processor 165H
System: Processor: Intel® Core™ Ultra 7 165H. Core count: 6 P-cores, 8 E-cores, 2 LP E-cores; 22 threads. HT: on. Turbo: on. Built-in GPU: Intel® Arc™ graphics, 128 EU. Memory: 2x 32 GB DDR5 @ 5600 MT/s. Disk: Samsung 128 GB SSD. OS: Windows 11, build 10.0.22631. PL1: 28 W.
Measurement: Results from OpenVINO™ 2024.1 compared to OpenVINO™ 2023.3 LTS, using OpenVINO 2023.3 LTS as the baseline (1.0). Data pulled from the OpenVINO public benchmark page(s): https://docs.openvino.ai/2024/home.html. Precision: INT4 quantization of the Hugging Face model https://huggingface.co/meta-llama/Llama-2-7b-chat-hf.
Measurement period: OpenVINO 2023.3 LTS: February 2024; OpenVINO 2024.1: April 2024.
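For illustration only: a minimal, hypothetical sketch of running Llama-2-7b-chat with INT4 weight compression on the built-in GPU through OpenVINO via optimum-intel. This is not the methodology behind the published gain; the OVWeightQuantizationConfig and device-handling details may differ between optimum-intel releases, and the gated Hugging Face model requires separately accepted license terms.

```python
# Hypothetical sketch only; not the published benchmark methodology.
import time
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"          # gated model on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)

# INT4 weight-only quantization, matching the precision quoted in the claim.
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4),
)
model.to("GPU")   # built-in Arc GPU; device naming follows OpenVINO conventions
model.compile()

prompt = "Explain, in two sentences, what a neural processing unit does."
inputs = tokenizer(prompt, return_tensors="pt")

start = time.time()
output = model.generate(**inputs, max_new_tokens=128)
elapsed = time.time() - start
new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"~{new_tokens / elapsed:.1f} generated tokens/s on this configuration")
```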
Series: P
Use case: Performance
Claim: OpenVINO 2024.1 delivers up to a 1.2x performance gain over OpenVINO 2023.3 LTS for Stable-Diffusion-V2-1 on the built-in GPU of the Intel® Core™ Ultra 7 processor 165H.
Processor: Intel® Core™ Ultra 7 processor 165H
System: Processor: Intel® Core™ Ultra 7 165H. Core count: 6 P-cores, 8 E-cores, 2 LP E-cores; 22 threads. HT: on. Turbo: on. Built-in GPU: Intel® Arc™ graphics, 128 EU. Memory: 2x 32 GB DDR5 @ 5600 MT/s. Disk: Samsung 128 GB SSD. OS: Windows 11, build 10.0.22631. PL1: 28 W.
Measurement: Results from OpenVINO™ 2024.1 compared to OpenVINO™ 2023.3 LTS, using OpenVINO 2023.3 LTS as the baseline (1.0). Data pulled from the OpenVINO public benchmark page(s): https://docs.openvino.ai/2024/home.html. Precision: INT8 quantization of the Hugging Face model https://huggingface.co/stabilityai/stable-diffusion-2-1.
Measurement period: OpenVINO 2023.3 LTS: February 2024; OpenVINO 2024.1: April 2024.
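Similarly, for illustration only: a minimal, hypothetical sketch of running Stable Diffusion 2.1 on the built-in GPU through OpenVINO via optimum-intel. The load_in_8bit flag (standing in for the quoted INT8 precision) and the device handling are assumptions that may vary by optimum-intel release; the published gain was measured with the OpenVINO benchmark methodology, not this script.

```python
# Hypothetical sketch only; not the published benchmark methodology.
import time
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    export=True,        # convert the PyTorch weights to OpenVINO IR on the fly
    load_in_8bit=True,  # 8-bit weight compression, standing in for the quoted INT8 precision
)
pipe.to("GPU")          # built-in Arc GPU
pipe.compile()

start = time.time()
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=20).images[0]
print(f"one image in {time.time() - start:.1f} s")
image.save("sd21_openvino.png")
```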
Series: PS
Use case: Performance
Claim: The Intel® Core™ Ultra 7 processor 165HL is up to 5.02 times faster in GPU image classification inference performance than the S-series Intel® Core™ i7-14700 at PL1=65W.
Processor: Intel® Core™ Ultra 7 processor 165HL
System 1: Processor: Intel® Core™ Ultra 7 165HL, 16C22T, PL1=45W, turbo up to 5.0 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 990 PRO 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
System 2: Processor: Intel® Core™ i7 processor 14700, 20C28T, PL1=65W, turbo up to 5.4 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 970 EVO Plus 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
Measurement: As measured by Resnet50-TF (OpenVINO 2023.3), INT8, batch size 8, on GPU in AC Balanced performance mode, comparing the Intel® Core™ Ultra 7 165HL at PL1=45W with the 14th Gen Intel® Core™ i7-14700. This test benchmarks and compares the AI inference performance of CPUs, GPUs, and AI accelerators in Windows* devices using common APIs and inference engines; use it to determine which API and hardware configuration provides the best AI performance for your Windows* device in common machine vision tasks. Performance measurements are based on testing as of 03/06/2024 and may not reflect all publicly available security updates; refer to the configuration disclosure for details. No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. For more information go to http://www.intel.com/benchmarks. Resnet50-TF info: https://docs.openvino.ai/2023.3/omz_models_model_resnet_50_tf.html.
Measurement period: As of March 2024.
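The measurement above quotes Resnet50-TF at INT8, batch size 8, on GPU with OpenVINO 2023.3. Below is a minimal, hypothetical Python sketch of such a throughput check; it is not the harness used for the published numbers (a tool such as OpenVINO's benchmark_app would typically be used), and the IR path is a placeholder.

```python
# Hypothetical sketch: rough GPU throughput check for a ResNet-50 (TF) OpenVINO IR
# at batch size 8. A single synchronous request will understate what a tuned
# throughput run (multiple asynchronous requests) achieves.
import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("resnet-50-tf-int8.xml")   # placeholder path to an INT8 IR

shape = list(model.input(0).shape)
shape[0] = 8                                       # batch size 8, as in the claim
model.reshape({model.input(0): shape})

compiled = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
request = compiled.create_infer_request()
data = np.random.rand(*shape).astype(np.float32)

request.infer({0: data})                           # warm-up
iterations = 50
start = time.time()
for _ in range(iterations):
    request.infer({0: data})
elapsed = time.time() - start
print(f"~{iterations * shape[0] / elapsed:.0f} images/s (single synchronous request)")
```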
Series: PS
Use case: Performance
Claim: The Intel® Core™ Ultra 7 processor 165HL (45W) is expected to be up to 3.85 times faster in GPU object detection inference performance than the S-series Intel® Core™ i7-14700 at PL1=65W.
Processor: Intel® Core™ Ultra 7 processor 165HL
System 1: Processor: Intel® Core™ Ultra 7 165HL, 16C22T, PL1=45W, turbo up to 5.0 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 990 PRO 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
System 2: Processor: Intel® Core™ i7 processor 14700, 20C28T, PL1=65W, turbo up to 5.4 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 970 EVO Plus 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
Measurement: As measured by mobilenet-ssd (OpenVINO 2023.3), INT8, batch size 8, on GPU in AC Balanced performance mode, comparing the Intel® Core™ Ultra 7 165HL at PL1=45W with the 14th Gen Intel® Core™ i7-14700. This test benchmarks and compares the AI inference performance of CPUs, GPUs, and AI accelerators in Windows* devices using common APIs and inference engines; use it to determine which API and hardware configuration provides the best AI performance for your Windows* device in common machine vision tasks. Performance measurements are based on testing as of 03/06/2024 and may not reflect all publicly available security updates; refer to the configuration disclosure for details. No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. For more information go to http://www.intel.com/benchmarks. mobilenet-ssd info: https://docs.openvino.ai/2023.3/omz_models_model_mobilenet_ssd.html.
Measurement period: As of March 2024.
Series: PS
Use case: Performance
Claim: The Intel® Core™ Ultra 7 processor 165HL (45W) is expected to deliver up to 3.13 times faster graphics performance than the S-series Intel® Core™ i7-14700 at PL1=65W.
Processor: Intel® Core™ Ultra 7 processor 165HL
System 1: Processor: Intel® Core™ Ultra 7 165HL, 16C22T, PL1=45W, turbo up to 5.0 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 990 PRO 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
System 2: Processor: Intel® Core™ i7 processor 14700, 20C28T, PL1=65W, turbo up to 5.4 GHz. Memory: DDR5 64 GB. Disk: Samsung SSD 970 EVO Plus 1TB. Operating system: Microsoft Windows 11 Enterprise with the Balanced power option.
Measurement: As measured by 3DMark Fire Strike on GPU in AC Balanced performance mode, comparing the Intel® Core™ Ultra 7 165HL at PL1=45W with the 14th Gen Intel® Core™ i7-14700. Performance measurements are based on testing as of 03/06/2024 and may not reflect all publicly available security updates; refer to the configuration disclosure for details. No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. For more information go to http://www.intel.com/benchmarks.
Measurement period: As of March 2024.
Series: H
Use case: Performance
Claim: Intel® Core™ Ultra 7 processor 165H is up to 1.5 times faster in GPU AI inference performance than the prior generation.
Processor: Intel® Core™ Ultra 7 processor 165H
System 1: Processor: Intel® Core™ Ultra 7 165H, 16C22T, PL1=28W, turbo up to 5.0 GHz. Memory: LPDDR5-6000 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (CPU attached). Operating system: Windows 11 22H2 (OS build 22621.608) with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.3688 (power KPIs), 31.0.101.4575 (performance KPIs).
System 2: Processor: Intel® Core™ i7-1370P, PL1=28W TDP, 14C20T, turbo up to 5.2 GHz. Memory: LPDDR5-7467 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (SoC attached). Operating system: Windows 11 with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.4725.
Measurement: As measured by UL Procyon AI Inference (OpenVINO), FP32, on GPU in AC Best Performance mode, comparing the Intel® Core™ Ultra 7 165H at PL1=28W with the 13th Gen Intel® Core™ i7-1370P. AI Inference Benchmark for Windows*: benchmark and compare the AI inference performance of CPUs, GPUs, and AI accelerators in Windows* devices using common APIs and inference engines; use this test to determine which API and hardware configuration provides the best AI performance for your Windows* device in common machine vision tasks. Performance measurements may not reflect all publicly available security updates. No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. For more information go to http://www.intel.com/benchmarks. Benchmark: the UL Procyon® AI Inference Benchmark for Windows gives insights into how AI inference engines perform on your hardware in a Windows environment, helping you decide which engines to support to achieve the best performance. The benchmark features several AI inference engines from different vendors, with benchmark scores reflecting the performance of on-device inferencing operations.
Measurement period: As of November 2023.
Series: H
Use case: Performance
Claim: Intel® Core™ Ultra 7 processor 165H is up to 1.81 times faster in graphics performance than the prior generation.
Processor: Intel® Core™ Ultra 7 processor 165H
System 1: Processor: Intel® Core™ Ultra 7 165H, 16C22T, PL1=28W, turbo up to 5.0 GHz. Memory: LPDDR5-6000 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (CPU attached). Operating system: Windows 11 22H2 (OS build 22621.608) with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.3688 (power KPIs), 31.0.101.4575 (performance KPIs).
System 2: Processor: Intel® Core™ i7-1370P, PL1=28W TDP, 14C20T, turbo up to 5.2 GHz. Memory: LPDDR5-7467 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (SoC attached). Operating system: Windows 11 with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.4725.
Measurement: As measured by 3DMark Time Spy (Graphics score), using graphics driver 31.0.101.3688 (power KPIs) and 31.0.101.4575 (performance KPIs), comparing the Intel® Core™ Ultra 7 165H at PL1=28W with the 13th Gen Intel® Core™ i7-1370P. Benchmark: 3DMark* is a benchmark from Futuremark* that measures DX10, DX11, and DX12 gaming performance; the "Time Spy" test is used for DX12 graphics.
Measurement period: As of November 2023.
Series: H
Use case: Performance
Claim: Intel® Core™ Ultra 7 processor 165H (28W) on NPU is expected to deliver up to 2.56 times the AI performance/watt of the previous-generation P-series Intel® Core™ i7-1370P (PL1=28W) on GPU, as measured by UL Procyon AI Inference (OpenVINO), INT8, in AC Best Performance mode.
Processor: Intel® Core™ Ultra 7 processor 165H
System 1: Processor: Intel® Core™ Ultra 7 165H, 16C22T, PL1=28W, turbo up to 5.0 GHz. Memory: LPDDR5-6000 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (CPU attached). Operating system: Windows 11 22H2 (OS build 22621.608) with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.3688 (power KPIs), 31.0.101.4575 (performance KPIs).
System 2: Processor: Intel® Core™ i7-1370P, PL1=28W TDP, 14C20T, turbo up to 5.2 GHz. Memory: LPDDR5-7467 2x 16 GB, dual rank. Storage: 512 GB Samsung PM9A1 NVMe (SoC attached). Operating system: Windows 11 with VBS, Defender, and Tamper Protection enabled for power and performance benchmarks. Graphics driver: 31.0.101.4725.
Measurement: As measured by UL Procyon AI Inference (OpenVINO), INT8, in AC Best Performance mode, comparing the Intel® Core™ Ultra 7 165H at PL1=28W with the 13th Gen Intel® Core™ i7-1370P. AI Inference Benchmark for Windows*: benchmark and compare the AI inference performance of CPUs, GPUs, and AI accelerators in Windows* devices using common APIs and inference engines; use this test to determine which API and hardware configuration provides the best AI performance for your Windows* device in common machine vision tasks. Performance measurements may not reflect all publicly available security updates. No product can be absolutely secure. Refer to the appendix for workload and configuration details. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. For more information go to http://www.intel.com/benchmarks. Benchmark: the UL Procyon® AI Inference Benchmark for Windows gives insights into how AI inference engines perform on your hardware in a Windows environment, helping you decide which engines to support to achieve the best performance. The benchmark features several AI inference engines from different vendors, with benchmark scores reflecting the performance of on-device inferencing operations.
Measurement period: As of November 2023.