Performance Index

ID: 615781   Date: 04/16/2026   Classification: Public

Intel® Core™ Processors (Series 3)

Each entry below lists: Series, Use Case, Claim, Processor, Systems Measured, Measurement, and Measurement Period.
Series: 300
Use Case: Edge Video Analytics Workload
Claim: The Intel® Core™ 7 350 is up to 2.2x faster in end-to-end video analytics than the NVIDIA Jetson Orin™ Nano.
Processor: Intel® Core™ 7 350 processor
Systems Measured:
System 1: Intel® Core™ 7 350, Test Date: April 2026, OEM / System: Intel Corporation Wildcat Lake Client Platform, 0.1, Model: Intel Corporation WCL DDR5 Embedded CRB, CPU: Intel(R) Core(TM) 7 350, Cores/Threads: 6, L3 Cache: 6 MB, TDP: 15W, Intel Turbo Boost: Enabled, Base Frequency: 1.5 GHz, Maximum Frequency: 4.8 GHz, Memory: 32GB (1x32GB DDR5 6400MT/s [6400MT/s]), Storage: 1x 465.8G Sabrent SB-RKT4P-500, OS: Ubuntu 24.04.4 LTS, BIOS: WCLPFWI1.R00.3515.D50.2601121824, Microcode: -, Kernel: 6.18-intel, Power Plan: Balanced Performance (6), Scaling Governor: powersave, Scaling Driver: intel_pstate, C-states: POLL: Enabled, C1_ACPI: Enabled, C2_ACPI: Enabled, C3_ACPI: Enabled, Graphics Driver Version (GPU): 26.01.36711.4, NPU Driver Version: 1.33.0.20260320, OpenVINO Version: E2E Tests: 2026.0.0, Inference: 2025.3.0, DLStreamer Version: E2E Tests: 2026.0.0, Inference: 2025.2.0

System 2: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super (8GB), Test Date: Feb 2026, OEM / System: NVIDIA Jetson Orin Nano, CPU: 6-core Arm Cortex-A78AE v8.2 64-bit @ up to 1.73 GHz, Cores/Threads: 6-core, Module Power: 7-25W, GPU: NVIDIA Ampere architecture with 1024 NVIDIA® CUDA® cores and 32 Tensor cores, Accelerator: -, Memory: 8GB 128-bit LPDDR5 @ 68.3 GB/s (2133 MHz), Storage: 512GB NVMe SSD, OS: Ubuntu 22.04.5 LTS, Kernel: 5.15.148-tegra, Power Plan: -, Jetpack: 6.2.1, Deepstream: 7.1, CUDA: 12.6.68, TensorRT: 10.3.0.30, Jetson Clocks: schedutil (dynamic), NVP Modes: Currently: 0 (15W), Available: 0-15W, 1-25W, 2-MAXN_SUPER, 3-7W

Measurement:
Intel® Core™ 7 350 = 18 streams (1080p30)
NVIDIA Jetson Orin™ Nano 8GB = 8 streams (1080p30)

As measured by video streams at 1080p30, TDP = 15W. Medium AI Pipeline: Media Decode (1080p30 HEVC) + Preprocessing + Yolov5m_​640x640 @ 10fps + Tracking + Resnet-50 @ 10 ips
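A pipeline of this shape can be sketched with DL Streamer's GStreamer elements. The command below is a hypothetical illustration, not the exact command used for this index: model file names, device targets, and inference intervals are placeholder assumptions.

```shell
# Hypothetical DL Streamer pipeline mirroring the Medium AI Pipeline above:
# HEVC decode -> preprocessing -> detection (YOLOv5m) -> tracking -> classification (ResNet-50).
# Model paths and parameter values are placeholders, not the measured configuration.
gst-launch-1.0 \
  filesrc location=input_1080p30.mp4 ! decodebin3 ! \
  gvadetect model=yolov5m_640x640.xml device=GPU inference-interval=3 ! \
  gvatrack tracking-type=short-term-imageless ! \
  gvaclassify model=resnet-50.xml device=GPU inference-interval=3 ! \
  gvafpscounter ! fakesink sync=false
```

Multi-stream density runs of this kind typically replicate such a pipeline once per stream and count how many streams sustain real-time (30 fps) playback.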

Reference workload available on GitHub: https://github.com/open-edge-platform/edge-workloads-and-benchmarks

Measurement Period: April 2026
Series: 300
Use Case: AI Inference
Claim: The Intel® Core™ 7 350 is up to 1.9x faster in image classification than the NVIDIA Jetson Orin™ Nano.
Processor: Intel® Core™ 7 350 processor
Systems Measured: same as above
Measurement:
Intel® Core™ 7 350 GPU, mobilenet-v2, INT8, BS8 = 3849 inferences per second

NVIDIA Jetson Orin™ Nano 8GB GPU, mobilenet-v2, INT8, BS8 = 2027 inferences per second. TDP = 15W.
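Batch-8 throughput figures like these are commonly produced with OpenVINO's benchmark_app tool. The invocation below is a hypothetical sketch: the model file path and run duration are assumptions, shown only to illustrate a batch-8 GPU throughput run.

```shell
# Hypothetical benchmark_app run: batch-8 INT8 throughput on the integrated GPU.
# The model path is a placeholder; -b sets batch size, -d the target device,
# -t the run duration in seconds, -hint the performance mode.
benchmark_app -m mobilenet-v2-int8.xml -d GPU -b 8 -hint throughput -t 60
```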

Performance varies by use, configuration and other factors.

Measurement Period: April 2026
Series: 300
Use Case: AI Inference
Claim: The Intel® Core™ 7 350 is up to 1.5x faster in object detection than the NVIDIA Jetson Orin™ Nano.
Processor: Intel® Core™ 7 350 processor
Systems Measured: same as above
Measurement:
Intel® Core™ 7 350 GPU, yolo_v5m, INT8, BS8 = 154 inferences per second

NVIDIA Jetson Orin™ Nano 8GB GPU, yolo_v5m, INT8, BS8 = 103 inferences per second. TDP = 15W.

Performance varies by use, configuration and other factors.

Measurement Period: April 2026
Series: 300
Use Case: Edge Video Analytics Workload
Claim: The Intel® Core™ 7 350 is up to 4.3x faster in end-to-end video analytics than the Intel® Core™ 7 150U.
Processor: Intel® Core™ 7 350 processor
Systems Measured:
System 1: Intel® Core™ 7 350, Test Date: April 2026, OEM / System: Intel Corporation Wildcat Lake Client Platform, 0.1, Model: Intel Corporation WCL DDR5 Embedded CRB, CPU: Intel(R) Core(TM) 7 350, Cores/Threads: 6, L3 Cache: 6 MB, TDP: 15W, Intel Turbo Boost: Enabled, Base Frequency: 1.5 GHz, Maximum Frequency: 4.8 GHz, Memory: 32GB (1x32GB DDR5 6400MT/s [6400MT/s]), Storage: 1x 465.8G Sabrent SB-RKT4P-500, OS: Ubuntu 24.04.4 LTS, BIOS: WCLPFWI1.R00.3515.D50.2601121824, Microcode: -, Kernel: 6.18-intel, Power Plan: Balanced Performance (6), Scaling Governor: powersave, Scaling Driver: intel_pstate, C-states: POLL: Enabled, C1_ACPI: Enabled, C2_ACPI: Enabled, C3_ACPI: Enabled, Graphics Driver Version (GPU): 26.01.36711.4, NPU Driver Version: 1.33.0.20260320, OpenVINO Version: E2E Tests: 2026.0.0, Inference: 2025.3.0, DLStreamer Version: E2E Tests: 2026.0.0, Inference: 2025.2.0

System 2: Intel® Core™ 7 150U/250U, Test Date: December 2025, OEM / System: Intel Corporation Raptor Lake Client Platform, Model: -, CPU: Intel(R) Core(TM) 7 250U, Cores/Threads: 10, L3 Cache: 12 MiB, TDP: 15W, Intel Turbo Boost: Enabled, Base Frequency: 1.8GHz, Maximum Frequency: 1.8GHz, Memory: 32GB (2x16GB DDR5 4800MT/s [4800MT/s]), Storage: 1x 465.8G WDS500G3X0C-00SJG0, OS: Ubuntu 24.04.3 LTS, BIOS: RPLIPFI1.R00.5402.A02.2501230626, Microcode: 0x4128, Kernel: 6.16.0-061600-generic, Power Plan: Balanced Performance (6), Scaling Governor: powersave, Scaling Driver: intel_pstate, C-states: POLL: Enabled, C1_ACPI: Enabled, C2_ACPI: Enabled, C3_ACPI: Enabled, Graphics Driver Version (GPU): 25.35.35096.9, NPU Driver Version: 1.24.0, OpenVINO Version: 2025.3.0.0, DLStreamer Version: -

Measurement:
Intel® Core™ 7 350 = 18 streams (1080p30)

Intel® Core™ 7 150U/250U = 4 streams (1080p30)

As measured by video streams at 1080p30, TDP = 15W. Medium AI Pipeline: Media Decode (1080p30 HEVC) + Preprocessing + Yolov5m_​640x640 @ 10fps + Tracking + Resnet-50 @ 10 ips

Reference workload available on GitHub: https://github.com/open-edge-platform/edge-workloads-and-benchmarks

Measurement Period: April 2026
Series: 300
Use Case: AI Inference
Claim: The Intel® Core™ 7 350 is up to 2.5x faster in image classification than the Intel® Core™ 7 150U.
Processor: Intel® Core™ 7 350 processor
Systems Measured: same as above
Measurement:
Intel® Core™ 7 350 GPU, resnet-50, INT8, BS8 = 1016 inferences per second

Intel® Core™ 7 150U/250U GPU, resnet-50, INT8, BS8 = 412 inferences per second. TDP = 15W.

As measured by inferences per second with resnet-50, TDP = 15W, using the built-in GPU.
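As a sanity check, the headline speedups follow directly from the published inferences-per-second numbers; a minimal sketch recomputing the ratios:

```python
# Speedup ratios recomputed from throughput numbers published in this index.
results = {
    "mobilenet-v2, Core 7 350 vs Jetson Orin Nano": (3849, 2027),  # ~1.9x
    "yolo_v5m, Core 7 350 vs Jetson Orin Nano": (154, 103),        # ~1.5x
    "resnet-50, Core 7 350 vs Core 7 150U/250U": (1016, 412),      # ~2.5x
}
for name, (intel_ips, other_ips) in results.items():
    print(f"{name}: {intel_ips / other_ips:.1f}x")
```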

Performance varies by use, configuration and other factors.

Measurement Period: April 2026
Series: 300
Use Case: AI Inference
Claim: The Intel® Core™ 7 350 is up to 2.8x faster in object detection than the Intel® Core™ 7 150U.
Processor: Intel® Core™ 7 350 processor
Systems Measured: same as above
Measurement:
Intel® Core™ 7 350 GPU, yolo_v5m, INT8, BS8 = 154 inferences per second

Intel® Core™ 7 150U/250U GPU, yolo_v5m, INT8, BS8 = 54 inferences per second. TDP = 15W.

Performance varies by use, configuration and other factors.

Measurement Period: April 2026