Won't things like GPUs need ARM-specific drivers? There's only one ARM-based desktop with standard PCIe expansion slots that I know of, the Mac Pro. Unlike the x86 Mac Pro it replaced, it doesn't support standard GPUs, and many other kinds of PCIe cards that worked in the x86 Mac Pro aren't compatible with the ARM one. I don't know the ins and outs of hardware-level drivers, but wouldn't WoA desktops have a similar problem?
And yeah, I don't know that NVIDIA needs to go full-custom. They could pull the architecture off the shelf and probably get more out of it by using advanced nodes like Apple does. It sure seems like they could easily answer Snapdragon if they wanted to, and now there's a window of opportunity for such devices. It makes me wonder if MS hasn't already asked NVIDIA, and NVIDIA wasn't interested. Or maybe MS didn't want to deal with NVIDIA, I dunno.
Exactly: NVIDIA will need to develop/test/ship ARM64 WoA drivers for their GPUs, which they have never done. Presumably, if they're making "AI PCs" as Jensen alludes to, they'll need to port their drivers to WoA.
Apps can be emulated, but drivers really do need to be native.
NVIDIA has GPU drivers for Linux on Arm, but not Windows on Arm.
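To make the app-vs-driver distinction concrete, here's a minimal sketch (Windows 10 1709+, using the documented IsWow64Process2() API) of how an x86/x64 *app* can run emulated on an ARM64 host and even detect that at runtime. A kernel-mode driver has no such fallback; it has to be built natively for ARM64.

```c
/* Minimal sketch, assuming Windows 10 1709+ so IsWow64Process2 is available.
 * Apps get an emulation layer; drivers do not. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    USHORT processMachine = 0, nativeMachine = 0;
    /* Reports the architecture the process targets and the architecture
     * the OS actually runs on. */
    if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        fprintf(stderr, "IsWow64Process2 failed: %lu\n", GetLastError());
        return 1;
    }
#if defined(_M_IX86) || defined(_M_X64)
    if (nativeMachine == IMAGE_FILE_MACHINE_ARM64)
        puts("x86/x64 binary running emulated on an ARM64 host");
    else
        puts("x86/x64 binary running on a native x86/x64 host");
#else
    printf("Built for ARM64; host machine: 0x%04x\n", (unsigned)nativeMachine);
#endif
    return 0;
}
```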
Many Arm-based systems have PCIe (e.g., in the datacenter), so it's not a hardware limitation; PCIe is far more abstracted from the CPU ISA than the driver software is. The Ampere Altra desktop is also Arm-based with PCIe expansion. Interestingly, this may be the system Linus Torvalds now uses.
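As a small illustration of how abstracted PCIe enumeration is from the CPU ISA, here's a minimal sketch (Linux-only; assumes the standard /sys/bus/pci/devices sysfs layout) that lists PCI vendor/device IDs and runs identically on x86 and Arm:

```c
/* Minimal sketch: walk sysfs PCI devices; the interface is the same on
 * x86 and Arm Linux, which is why PCIe itself isn't the porting problem. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void) {
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    if (!dir) { perror("opendir"); return 1; }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.') continue;
        char path[512], vendor[16] = "", device[16] = "";
        FILE *f;

        snprintf(path, sizeof path, "%s/%s/vendor", base, ent->d_name);
        if ((f = fopen(path, "r"))) { fscanf(f, "%15s", vendor); fclose(f); }

        snprintf(path, sizeof path, "%s/%s/device", base, ent->d_name);
        if ((f = fopen(path, "r"))) { fscanf(f, "%15s", device); fclose(f); }

        /* 0x10de is NVIDIA's PCI vendor ID. */
        printf("%s  vendor=%s device=%s%s\n", ent->d_name, vendor, device,
               strcmp(vendor, "0x10de") == 0 ? "  <-- NVIDIA" : "");
    }
    closedir(dir);
    return 0;
}
```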
//
First of all, that's just an estimate, and it's also missing FP numbers, so it's barely half the story. Meanwhile, in the real world we have:
[attached benchmark chart]
Sources: www.phoronix.com, openbenchmarking.org
It's easy to forget how bandwidth-starved regular Zen4 chips are; I think I saw that analysis on Chips & Cheese. With more memory channels and/or higher-speed memory, they easily pull way past Grace Hopper and Emerald/Sapphire Rapids as well. This is why Strix Point and Strix Halo will be interesting to watch, and whether AMD can at least feed Zen5 better on desktop/mobile platforms! (A back-of-envelope sketch of the bandwidth math is below.)
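For anyone wanting the napkin math, here's a minimal sketch of theoretical peak bandwidth (channels × MT/s × bytes per channel); the configurations are illustrative assumptions, not measured figures:

```c
/* Back-of-envelope peak-bandwidth math: channels * MT/s * bytes per channel.
 * Configurations below are illustrative assumptions, not measured figures. */
#include <stdio.h>

static double peak_gbs(int channels, double mega_transfers, int bytes_per_ch) {
    return channels * mega_transfers * bytes_per_ch / 1000.0; /* MB/s -> GB/s */
}

int main(void) {
    /* AM5 desktop Zen4: 2x 64-bit channels of DDR5-5200. */
    printf("Desktop Zen4, 2ch DDR5-5200   : %6.1f GB/s\n", peak_gbs(2, 5200, 8));
    /* Threadripper Pro 7995WX: 8x 64-bit channels of DDR5-5200. */
    printf("7995WX, 8ch DDR5-5200         : %6.1f GB/s\n", peak_gbs(8, 5200, 8));
    /* Hypothetical wide mobile config: 256-bit LPDDR5X-8533, as rumored for
     * Strix Halo-class parts, treated here as 4x 64-bit channels. */
    printf("256-bit LPDDR5X-8533 (rumored): %6.1f GB/s\n", peak_gbs(4, 8533, 8));
    return 0;
}
```

That works out to roughly 83 GB/s for the dual-channel desktop versus ~333 GB/s for the 8-channel Threadripper, which is the gap the "bandwidth-starved" point is about.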
It's easy to forget that most SPEC testing is an "estimate". We shouldn't worry: plenty of non-SPEC benchmarks are far less reliable than a well-done SPEC estimate. Very few people submit their benchmark and methodology for independent validation to get a validated SPEC score.
You seem not to understand the actual parameters of "the real world". First, Grace uses Neoverse V2 (Cortex-X3-based) cores, so the comparison is moot: NVIDIA is rumored to use the Cortex-X5. Second, much of Phoronix's testing is heavily nT, so the significantly-higher-core-count 7995WX (96 cores) is also rather irrelevant, especially given the next point. Third, the 7980X and 7995WX have 350W TDPs (and consume about that); without actual, comparable power data for the GH200, this isn't an interesting comparison when power draw is a key limiting factor in consumer SoCs. Fourth, Phoronix notes several times that some of the Linux benchmarks in these runs weren't optimized for AArch64 yet, so there's not much to stand on.
In the end, it's a nonsense comparison: this rumor isn't about NVIDIA trying to replace Zen4 workstations with the GH200. NVIDIA is claimed to be making consumer APUs for Windows on Arm. Linux performance, enterprise workloads, developer workloads, scientific workloads, 300W+ TDP performance, nT performance beyond 8-12 cores: all irrelevant here. SPEC was a much better estimate, even with only int, IMO.
But if we want to measure current Arm uArches vs Zen4 on 100% native code, FP and int, phones vs desktops, etc., Geekbench is the last man standing. The Cortex-X4 does fine, and it's more than enough for Windows on Arm and consumer workloads, even if it's a generation behind what NVIDIA will ship: it's only available in phones, so you won't get much reliable cross-platform data.
1T Cortex-X4 smartphone: 2,287 pts (100%)
1T 7995WX workstation: 2,720 pts (or 2,702 pts) (119%)
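(For reference, the percentages are just each score relative to the X4 baseline: 2,720 / 2,287 ≈ 1.19, i.e. ~119%, so the 350W workstation part's 1T lead is roughly 19%.)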
It's a good thing AMD uses Geekbench for CPU perf estimates on consumer workloads, so I can happily avoid all the usual disclaimers. We'll have to see how the Cortex-X5 lands, but I don't think NVIDIA's value proposition depends on having the "fastest 1T CPU ever for WoA": it just needs to be good enough versus Intel and AMD in 2025.
//
TL;DR: We were discussing uArches for a future consumer NVIDIA SoC running Windows, which Phoronix's testing is miles away from capturing.