News Posts matching #Geekbench AI

AMD Ryzen AI Max 390 "Strix Halo" Surfaces in Geekbench AI Benchmark

In case you missed it, AMD's new madcap enthusiast silicon engineering effort, the "Strix Halo," is real, and comes with Ryzen AI Max 300 series branding. These are chiplet-based mobile processors with one or two "Zen 5" CCDs (the same ones found in "Granite Ridge" desktop processors) paired with a large SoC die that packs an oversized iGPU. This arrangement lets AMD give the processor up to 16 full-sized "Zen 5" CPU cores, an iGPU with as many as 40 RDNA 3.5 compute units (2,560 stream processors), and a 256-bit LPDDR5X memory interface for UMA.

"Strix Halo" is designed for ultraportable gaming notebooks or mobile workstations where low PCB footprint is of the essence, and discrete GPU is not an option. For enthusiast gaming notebooks with discrete GPUs, AMD is designing the "Fire Range" processor, which is essentially a mobile BGA version of "Granite Ridge," and a successor to the Ryzen 7045 series "Dragon Range." The Ryzen AI Max series has three models based on CPU and iGPU CU counts—the Ryzen AI Max 395+ (16-core/32-thread with 40 CU), the Ryzen AI Max 390 (12-core/24-thread with 40 CU), and the Ryzen AI Max 385 (8-core/16-thread, 32 CU). An alleged Ryzen AI Max 390 engineering sample surfaced on the Geekbench AI benchmark online database.

Geekbench AI Hits 1.0 Release: CPUs, GPUs, and NPUs Finally Get AI Benchmarking Solution

Primate Labs, the developer behind the popular Geekbench benchmarking suite, has launched Geekbench AI, a comprehensive benchmark tool designed to measure the artificial intelligence capabilities of a wide range of devices. Geekbench AI, known as Geekbench ML during its preview phase, has now reached version 1.0. The benchmark is available on Windows, Linux, macOS, Android, and iOS, making it accessible to a broad base of users and developers. One of Geekbench AI's key features is its multifaceted approach to scoring: each run produces results at three distinct precision levels, single precision, half precision, and quantized data, which aims to give a more representative picture of AI performance across different hardware designs.
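To make the three-precision idea concrete, here is a minimal, hypothetical sketch (not Geekbench AI's actual workloads or scoring) that times the same matrix-multiply "inference" kernel at single precision, half precision, and a crude INT8 quantization using NumPy:

```python
# Illustrative only: time one workload at three precision levels.
# Geekbench AI's real workloads, frameworks, and scoring are far more involved.
import time
import numpy as np

rng = np.random.default_rng(0)
x32 = rng.standard_normal((1024, 1024), dtype=np.float32)
w32 = rng.standard_normal((1024, 1024), dtype=np.float32)

def timed_matmul(a, b, runs=20):
    # Average wall-clock time per matrix multiply.
    start = time.perf_counter()
    for _ in range(runs):
        _ = a @ b
    return (time.perf_counter() - start) / runs

# Single precision (FP32)
t_fp32 = timed_matmul(x32, w32)

# Half precision (FP16). On a plain CPU this may not be faster;
# dedicated NPUs and GPUs are where half precision usually pays off.
t_fp16 = timed_matmul(x32.astype(np.float16), w32.astype(np.float16))

# "Quantized": scale to int8, then accumulate in int32 to avoid overflow.
x8 = np.clip(x32 * 127 / np.abs(x32).max(), -127, 127).astype(np.int8)
w8 = np.clip(w32 * 127 / np.abs(w32).max(), -127, 127).astype(np.int8)
t_int8 = timed_matmul(x8.astype(np.int32), w8.astype(np.int32))

for name, t in [("FP32", t_fp32), ("FP16", t_fp16), ("INT8", t_int8)]:
    print(f"{name}: {t * 1e3:.2f} ms per run")
```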

In addition to speed, Geekbench AI places a strong emphasis on accuracy. The benchmark assesses how closely each test's output matches the expected results, offering insight into the trade-offs between performance and precision. The 1.0 release adds support for new frameworks, including OpenVINO, ONNX, and Qualcomm QNN, expanding compatibility across platforms. Primate Labs has also implemented measures to ensure fair comparisons, such as enforcing minimum runtime durations for each workload. The company notes that Samsung and NVIDIA already use the software in-house to measure their chips' performance, an early sign of industry adoption. While the benchmark provides valuable insights, real-world AI applications are still limited, and reliance on a handful of benchmarks can paint only a partial picture. Nevertheless, Geekbench AI represents a significant step toward standardizing AI performance measurement, and it could influence future consumer choices in the AI-driven tech market. Results from benchmark runs can be browsed in the online results database.
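As a rough illustration of the accuracy side, the sketch below compares a hypothetical quantized run's classification logits against a full-precision reference using top-1 agreement and mean absolute error; these particular metrics and the synthetic data are assumptions for illustration, not Geekbench AI's published scoring formula:

```python
# Illustrative accuracy check: how closely does a lower-precision run's
# output match the full-precision reference? (Synthetic data, assumed metrics.)
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are classification logits from the same 256 inputs.
reference = rng.standard_normal((256, 1000)).astype(np.float32)  # FP32 reference run
candidate = reference + rng.normal(0, 0.05, size=reference.shape)  # noisy "quantized" run

top1_match = np.mean(reference.argmax(axis=1) == candidate.argmax(axis=1))
mae = np.mean(np.abs(reference - candidate))

print(f"Top-1 agreement with FP32 reference: {top1_match:.1%}")
print(f"Mean absolute error of logits:       {mae:.4f}")
```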