Wednesday, May 22nd 2024

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps working with TensorRT. But that requires a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of around 50 AI TOPS, enough to meet Copilot+ requirements? NVIDIA has held an Arm license for decades and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.
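For scale, "AI TOPS" figures like these are usually quoted as peak throughput, with each multiply-accumulate (MAC) counted as two operations. A back-of-envelope sketch of that arithmetic (the unit count and clock below are illustrative, not any vendor's real figures):

```python
def npu_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in tera-operations per second (TOPS).
    Each multiply-accumulate is conventionally counted as two ops."""
    return mac_units * ops_per_mac * clock_hz / 1e12

# Hypothetical NPU: 16,384 INT8 MAC units clocked at 1.5 GHz
print(npu_tops(16_384, 1.5e9))  # 49.152 -- roughly the TOPS class these chips advertise
```

The takeaway is that the headline number is a theoretical peak; sustained throughput depends on keeping all those MAC units fed.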

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for a 2025 launch, and Jensen's nod all but confirms it.
NVIDIA has both the talent and the IP to whip up a PC processor: its teams behind Grace and Drive can create the Arm CPU cores, NVIDIA is already the big daddy of consumer graphics and should have little problem with the iGPU, and the NPU shouldn't be hard to create, either. It wouldn't surprise us if the NPU on NVIDIA's chip isn't a physical component, but a virtual device that uses the iGPU's tensor cores as its AI acceleration hardware backend.
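A "virtual NPU" of this sort is essentially a device-abstraction layer: the OS sees an NPU-shaped interface, while the actual work is dispatched to whatever silicon is available. A minimal sketch of that pattern (all class and method names here are hypothetical, not any real Windows or NVIDIA API; a real driver would launch GPU kernels instead of the placeholder math):

```python
from abc import ABC, abstractmethod

class NPUBackend(ABC):
    """The swappable compute backend behind the 'NPU' the OS sees."""
    @abstractmethod
    def run_inference(self, model: str, inputs: list[float]) -> list[float]: ...

class TensorCoreBackend(NPUBackend):
    """Hypothetical backend that would forward work to GPU tensor cores."""
    def run_inference(self, model: str, inputs: list[float]) -> list[float]:
        # Placeholder compute; a real driver would enqueue GPU work here.
        return [x * 2.0 for x in inputs]

class VirtualNPU:
    """Virtual device: exposes one NPU interface, delegates to any backend."""
    def __init__(self, backend: NPUBackend):
        self.backend = backend
    def infer(self, model: str, inputs: list[float]) -> list[float]:
        return self.backend.run_inference(model, inputs)

npu = VirtualNPU(TensorCoreBackend())
print(npu.infer("copilot-model", [1.0, 2.0]))  # [2.0, 4.0]
```

The design benefit is that apps written against the NPU interface never need to know whether a dedicated NPU block or the iGPU's tensor cores did the work.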

NVIDIA's journey to the AI PC has one little hurdle, and that is the exclusivity Qualcomm enjoys with Microsoft for the current crop of Windows-on-Arm notebooks, with its Snapdragon X series chips. NVIDIA would have to work with Microsoft to have the same market access as Qualcomm.

If all goes well, the NVIDIA PC processor powering AI PCs will launch in 2025.
Sources: Bloomberg (YouTube), Videocardz

34 Comments on Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

#2
Carillon
If a driver update is all it takes, wouldn't the Orin qualify as an AI PC?
Also, gg Qualcomm for this huge accomplishment
Posted on Reply
#3
TristanX
If NV joins the PC CPU pack, then Intel and AMD are in serious trouble
Posted on Reply
#4
londiste
Carillon: If a driver update is all it takes, wouldn't the Orin qualify as an AI PC?
Also, gg Qualcomm for this huge accomplishment
If someone stuck this to something that resembles a pc? :D
But on a more serious note, I bet this would require firmware, driver, and OS support that would be a rather extensive undertaking if we are talking about making a Windows laptop out of this.
Posted on Reply
#5
Bwaze
TristanX: If NV joins the PC CPU pack, then Intel and AMD are in serious trouble
But do they really want to? The market is:

"Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of 50 AI TOPS"

But Nvidia clearly outgrew catering to lowly penny-pinching peasants - they practically don't offer low-end GPUs, and with every generation they delay their lower-end offerings more and more. And we can understand why - their server and AI mainframes are what's driving the stellar growth, not home users. Why would they suddenly want to deal with a market that requires low margins and vast volume?
Posted on Reply
#6
kondamin
It's a mature and shrinking market, and they'd be fighting for a share of it when they stopped bothering even with phones while that market was still growing and dealing with Arm licenses was still profitable... I don't think so.

If they were to do something with RISC-V, where they can keep more of the pie... that might be something different.
Posted on Reply
#7
windwhirl
Bwaze: they practically don't offer low-end GPUs,
To be fair, "low-end" has mostly moved upwards because Intel and AMD offer IGPs with a number of their CPUs, and at least AMD's IGPs are moderately capable considering their constraints (I don't know the state of things on Intel's side). And with the newer Ryzen CPUs all coming with at least a functional IGP that can handle display output and basic graphics tasks, there's even less need for low-end cards.
Bwaze: their server and AI mainframes are what's driving the stellar growth, not home users.
This is also true, however.
Bwaze: Why would they suddenly want to deal with a market that requires low margins and vast volume?
More CUDA dominance maybe? That's about the only thing that comes to mind.
Posted on Reply
#8
b1k3rdude
All I see is more unproductive members of society with more money than sense, contributing exactly zero back to the world at large beyond lining their pockets.
Posted on Reply
#9
Vayra86
TristanX: If NV joins the PC CPU pack, then Intel and AMD are in serious trouble
Nah, they can't make a good CPU to save their lives, at least not in the consumer space. Tegra failed spectacularly, and they are salvaging it in any way they can. I don't believe Drive is really getting picked up en masse either. The thing that really makes Nvidia excel is its GPU technology. Perhaps they can claw back some space in the consumer Arm market, but they'll be fighting companies with more experience.
Posted on Reply
#10
hsew
Vayra86: Nah, they can't make a good CPU to save their lives, at least not in the consumer space. Tegra failed spectacularly, and they are salvaging it in any way they can. I don't believe Drive is really getting picked up en masse either. The thing that really makes Nvidia excel is its GPU technology. Perhaps they can claw back some space in the consumer Arm market, but they'll be fighting companies with more experience.
Umm no. Tegra powers the Nintendo Switch (and likely Switch 2). Selling 100M+ units is not a failure…
Posted on Reply
#11
Vayra86
hsew: Umm no. Tegra powers the Nintendo Switch (and likely Switch 2). Selling 100M+ units is not a failure…
It's also no proof of it being a good CPU. The Switch isn't exactly a powerhouse.
Posted on Reply
#12
hsew
Vayra86: It's also no proof of it being a good CPU. The Switch isn't exactly a powerhouse.
CPU performance isn't everything, you know? The SoC also has to be affordable and practical. Just because it doesn't have 16 Zen 4 P-cores and a 500 W GPU doesn't automatically make it crap…
Posted on Reply
#13
Darmok N Jalad
Vayra86: It's also no proof of it being a good CPU. The Switch isn't exactly a powerhouse.
I dunno, a company with the resources of NVIDIA can probably come up with a good SoC. Up until now, there hasn't been much reason. Now that MS is shouting WOA from the rooftops, there's an actual opportunity. NVIDIA was the first "partner" to get burned by MS's half-baked WOA ambitions, being the SoC of choice in the Windows RT devices behind MS's biggest write-off that I can remember. NVIDIA might actually be the best positioned here, since they have the GPU chops and driver experience that Qualcomm doesn't have. I'm still curious to see how well the graphics perform in real life on these X SoCs.
Posted on Reply
#14
ikjadoon
Vayra86: Nah, they can't make a good CPU to save their lives, at least not in the consumer space. Tegra failed spectacularly, and they are salvaging it in any way they can. I don't believe Drive is really getting picked up en masse either. The thing that really makes Nvidia excel is its GPU technology. Perhaps they can claw back some space in the consumer Arm market, but they'll be fighting companies with more experience.
NVIDIA likely isn't making a custom uArch again. I expect it'll be like NVIDIA Grace: NVIDIA will license Arm's stock cores. Arm's stock cores are pretty speedy these days and more than comparable with AMD's and Intel's latest uArches (Zen 4 / Redwood Cove).

Some background on the current Cortex-X4:
www.hwcooling.net/en/arm-unveils-record-breaking-cortex-x4-core-with-eight-alus/

Geekbench 6 puts its 1T perf at 5950X / 7840U or i9-11900 / i3-12100.
SPECint2017 puts the 1T perf at 6900HS / Core Ultra 155H.

And that's all at low 3.2 GHz clocks. NVIDIA will have a lot of strong options even in the X4, tbh. But it's more likely NVIDIA will use the 2025 core, the Cortex-X5, which Arm claims has much higher IPC.

At the moment, any SoC manufacturer can get ultra-high-performance, high-efficiency uArches without developing their own by using Arm's Cortex-X3, X4, and upcoming X5. It's honestly never been a better time to make a high-end SoC.

Apple has its custom uArches.
Arm has its custom uArches.
Intel has its custom uArches.
AMD has its custom uArches.
Qualcomm has its custom uArch.

I personally can't wait to see Arm filter to DIY desktops in the next decade.
Posted on Reply
#15
Carillon
Phoronix tested Grace Hopper here, and apart from the price it's a very good APU.
I don't see why a scaled-down version of it couldn't compete with the Snapdragon.
Posted on Reply
#16
Darmok N Jalad
ikjadoon: NVIDIA likely isn't making a custom uArch again. I expect it'll be like NVIDIA Grace: NVIDIA will license Arm's stock cores. Arm's stock cores are pretty speedy these days and more than comparable with AMD's and Intel's latest uArches (Zen 4 / Redwood Cove).

Some background on the current Cortex-X4:
www.hwcooling.net/en/arm-unveils-record-breaking-cortex-x4-core-with-eight-alus/

Geekbench 6 puts its 1T perf at 5950X / 7840U or i9-11900 / i3-12100.
SPECint2017 puts the 1T perf at 6900HS / Core Ultra 155H.

And that's all at low 3.2 GHz clocks. NVIDIA will have a lot of strong options even in the X4, tbh. But it's more likely NVIDIA will use the 2025 core, the Cortex-X5, which Arm claims has much higher IPC.

At the moment, any SoC manufacturer can get ultra-high-performance, high-efficiency uArches without developing their own by using Arm's Cortex-X3, X4, and upcoming X5. It's honestly never been a better time to make a high-end SoC.

Apple has its custom uArches.
Arm has its custom uArches.
Intel has its custom uArches.
AMD has its custom uArches.
Qualcomm has its custom uArch.

I personally can't wait to see Arm filter to DIY desktops in the next decade.
Won't things like GPUs need Arm-specific drivers? We have one Arm-based desktop with standard PCIe expansion slots that I know of, the Mac Pro. Unlike the x86 Mac Pro that it replaced, it doesn't support standard GPUs, and many other kinds of PCIe cards that work on the x86 Mac are not compatible with the Arm Mac. I don't know the ins-and-outs of hardware-level drivers, but wouldn't WOA desktops have a similar problem?

And yeah, I don't know that NVIDIA needs to go full-custom. They could pull the architecture off the shelf and probably get more out of it by using advanced nodes like Apple does. It sure seems like they could easily answer Snapdragon if they wanted to, and now there's a window of opportunity for such devices. It makes me wonder if MS hasn't already asked NVIDIA, and NVIDIA wasn't interested. Or maybe MS didn't want to deal with NVIDIA, I dunno.
Posted on Reply
#17
Eternit
There is a big assumption here that Qualcomm's Windows AI PCs will succeed. For now it is just hype; no one knows how many units will be sold, and even if this generation is a success, whether that will continue with future generations.
Posted on Reply
#18
Minus Infinity
The headline is about Nvidia maybe joining the Arm alliance, but we actually know AMD is entering the Arm race with its Soundwave CPU/APU coming out in 2026. AMD is taking the segment seriously and is prioritising resources for its development. It's about time we got a challenger to Qualcomm, the Intel of the Arm world.
Posted on Reply
#20
R0H1T
ikjadoon: Geekbench 6 puts its 1T perf at 5950X / 7840U or i9-11900 / i3-12100.
SPECint2017 puts the 1T perf at 6900HS / Core Ultra 155H.

And that's all at low 3.2 GHz clocks. NVIDIA will have a lot of strong options even in the X4, tbh. But it's more likely NVIDIA will use the 2025 core, the Cortex-X5, which Arm claims has much higher IPC.
First of all, that's just an estimate; it's also missing FP numbers, so it's barely half the story.

Meanwhile in the real world we have ~


www.phoronix.com/review/nvidia-gh200-amd-threadripper
openbenchmarking.org/result/2402191-NE-GH200THRE98&sgm=1&ppd_U3lzdGVtNzYgVGhlbGlvIE1ham9yIHI1IC0gVGhyZWFkcmlwcGVyIDc5ODBY=15076&ppd_SFAgWjYgRzUgQSAtIFRocmVhZHJpcHBlciBQUk8gNzk5NVdY=30041&ppd_R1BUc2hvcC5haSAtIE5WSURJQSBHSDIwMA=42500&ppt=D&sor

It's easy to forget how bandwidth-starved regular Zen 4 chips are; I think I saw that analysis on Chips & Cheese. With more memory channels and/or higher-speed memory they easily pull way past Grace Hopper and Emerald (Sapphire?) Rapids as well. This is why Strix Point and Halo will be interesting to watch, and whether AMD can at least feed Zen 5 better on desktop/mobile platforms!
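The bandwidth ceiling the poster is describing can be seen with simple roofline arithmetic: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. A quick sketch (all numbers below are illustrative round figures, not measured specs of any chip):

```python
def roofline_gflops(peak_gflops: float, bw_gbs: float, flops_per_byte: float) -> float:
    """Attainable throughput under the roofline model: limited by either
    peak compute or memory bandwidth x arithmetic intensity."""
    return min(peak_gflops, bw_gbs * flops_per_byte)

# Hypothetical desktop part: ~80 GB/s dual-channel memory vs ~1000 GFLOPS peak.
# At 2 FLOPs/byte the memory system can only feed 160 GFLOPS -- bandwidth-bound.
print(roofline_gflops(1000, 80, 2))   # 160.0
# Quadruple the bandwidth and the same workload reaches 640 GFLOPS.
print(roofline_gflops(1000, 320, 2))  # 640.0
```

This is why adding memory channels can lift real-world throughput even when the cores themselves are unchanged.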
Posted on Reply
#21
persondb
hsew: Umm no. Tegra powers the Nintendo Switch (and likely Switch 2). Selling 100M+ units is not a failure…
That doesn't say much about the SoC in general. Consoles have different needs from other devices like smartphones and PCs, so what works best for them does not necessarily mean it's also good for the others.

The Tegra X1 was used in a couple of other devices (the Pixel C) but ended up not seeing wide adoption. It had a cluster of A53 cores, but I believe a silicon bug ended up with those not being used (and later removed).

Competing SoCs from the same period had a quad-core A72 cluster and another quad-core A53 cluster, at about the same clocks.

The big advantage Nvidia had at the time was the GPU, but that ended up not being as important.
Posted on Reply
#22
Neo_Morpheus
Darmok N Jalad: It makes me wonder if MS hasn't already asked NVIDIA, and NVIDIA wasn't interested. Or maybe MS didn't want to deal with NVIDIA, I dunno.
Well, assuming that MS has some dignity, they might be like Apple and refuse to place themselves at Ngreedia's mercy again.

Remember how Apple, EVGA, MS, Sony and others tasted a nice fat knife in their backs courtesy of Ngreedia.
Posted on Reply
#23
londiste
Vayra86: It's also no proof of it being a good CPU. The Switch isn't exactly a powerhouse.
Switch is also from 2017. With a SoC from 2015 :D
Vayra86: Nah, they can't make a good CPU to save their lives, at least not in the consumer space. Tegra failed spectacularly, and they are salvaging it in any way they can. I don't believe Drive is really getting picked up en masse either. The thing that really makes Nvidia excel is its GPU technology. Perhaps they can claw back some space in the consumer Arm market, but they'll be fighting companies with more experience.
They don't need to. They are relying on Arm for CPU cores for now.
Also, Denver was pretty good back when Nvidia was trying to cook their own. Pretty sure they have the know-how.
Tegra did not fail spectacularly; it kind of slid out of our view. They pivoted from consumer stuff to automotive and industrial, most likely due to profit margins.
Posted on Reply
#24
AusWolf
For the vote: if Jensen wants a monopoly in the AI business, and it takes making CPUs with Tensor cores, then he'll do it. They've got the money, so why not?
Posted on Reply
#25
Vayra86
Eternit: There is a big assumption here that Qualcomm's Windows AI PCs will succeed. For now it is just hype; no one knows how many units will be sold, and even if this generation is a success, whether that will continue with future generations.
Well, if they start peddling 450-dollar keyboards, it won't go places fast lol
hsew: CPU performance isn't everything, you know? The SoC also has to be affordable and practical. Just because it doesn't have 16 Zen 4 P-cores and a 500 W GPU doesn't automatically make it crap…
For?

We already have a whole landscape of affordable and practical CPUs in a wide variety of ways. If Nvidia does off-the-shelf cores, how will they differentiate? Probably with a dash of Nvidia sauce, so they can present some USP around AI capability. It's going to be a massive bunch of bullshit. Much like RT, there's going to be a black-box 'NPU performance' to compare between constantly moving models and locally updated tools? It's going to fail spectacularly.
Posted on Reply