Monday, November 4th 2024

New Arm CPUs from NVIDIA Coming in 2025

According to DigiTimes, NVIDIA is targeting the high-end segment for its first consumer CPU. Slated to arrive in 2025, NVIDIA is partnering with MediaTek to break into the AI PC market currently being popularized by Qualcomm, Intel, and AMD. With Microsoft and Qualcomm laying the foundation for Windows-on-Arm (WoA) development, NVIDIA plans to join in and leverage its massive ecosystem of partners to deliver regular applications and games for its Arm-based processors. At the same time, NVIDIA is also scheduled to launch "Blackwell" GPUs for consumers, which could end up in these AI PCs with an Arm CPU at their core.

NVIDIA's partner MediaTek recently launched a big-core mobile SoC, the Dimensity 9400. NVIDIA could use something like that as a base for its own SoC and add its Blackwell IP to the mix. This would be similar to what Apple is doing with Apple Silicon and the recent M4 Max chip, which is apparently the fastest CPU in single-threaded and multithreaded workloads, as per recent Geekbench results. NVIDIA already has a team of CPU designers that delivered its Grace CPU to enterprise/server customers. Built on off-the-shelf Arm Neoverse IP, Grace-based systems are being acquired by customers as fast as they are produced. This raises hopes for NVIDIA's upcoming AI PC, which could offer a selling point no other WoA device currently provides: a tried-and-tested gaming-grade GPU with AI accelerators.
Sources: DigiTimes, via NotebookCheck

54 Comments on New Arm CPUs from NVIDIA Coming in 2025

#27
Kapone33
It is obvious that Nvidia wants a piece of the handheld market. If this is to be successful, they seriously need to work on their software. Intel had the money to almost catch them in RT, and AMD has a commanding lead in the handheld space, cemented by the Claw. In this ultimate-driver narrative you cannot recommend the Claw for gaming even if it is only 10-12% behind, because the narrative does the same thing with the 4090 vs 7900 XTX argument, even though one card costs three times the other, while the Claw is the same price as the Ally. They will probably use TSMC 3 nm too, but just because you have TSMC hardware does not mean you don't need to make sure the code is up to snuff to pull the performance out. Just yesterday I had to turn Hyper-RX on for Cities: Skylines 2 as my population went above 700,000 and the CPU was pegged at 85+% usage the whole time. Temps even went past 70 °C on the CPU, so PC games are really starting to take advantage of all of those cores. And that is not even a console-based game.
#28
nageme
Chomiq: Gotta milk that AI cow.
What, you mean you don't want an AI CPU in your AI PC, featuring an AI GPU, AI RAM, AI SSD, AI HDD, AI PSU, AI case with AI RGB AI LEDs, AI keyboard, AI mouse and AI headphones?
How will you run AI FreeCell on your AI OS? Don't tell me you're satisfied with just Blockchain FreeCell.
#29
wheresmycar
Is this what influenced Intel and AMD to further collaborate on x86?
#30
watzupken
tpuuser256: I had a gut feeling this was going to happen.
When Apple succeeded in proving that ARM chips are capable of competing with x86 chips, it pretty much opened the door for more chip designers to do the same for Windows devices. Enter Qualcomm this year, which again proved it is competitive, especially in the laptop/mobile segment, which is far more lucrative than desktops due to higher margins and volumes. So with Nvidia entering this space, Intel and AMD's worst nightmare is happening. Qualcomm's SD Elite is powerful, but hampered by not-so-good graphics and compatibility issues. By the time Nvidia joins the party, I would expect some of these early-adoption compatibility issues to go away. And they are no newbie in the GPU space. So if AMD is still slowly spinning RDNA 3.5, 3.6, 3.7 or so, they are going to be in trouble in the next 2 years or so.
#31
Nhonho
I don't understand why Nvidia doesn't start its journey in the CPU market with RISC-V CPUs from the beginning.

By starting right away with RISC-V, software developers would build up a vast amount of apps that run natively on its RISC-V CPUs.

If Nvidia launches ARM CPUs now and then decides in about 5 years to launch RISC-V CPUs, the same old mess that we already know about will happen again, in hardware or software (or both), to get the old ARM apps to run on their future RISC-V CPUs.

The guy in this video, who knows a lot about CPU development, said that the RISC-V architecture is the best:
(watch from 28:50)
#32
Visible Noise
kapone32: It is obvious that Nvidia wants a piece of the handheld market.
Nvidia owns the handheld market - it’s called the Nintendo Switch.
#33
HHHAOR
It would be nice if the Switch 2 could use this chip.
#34
The Shield
For God's sake, the last thing we need is Nvidia entering the CPU business and actively trying to increase market prices.
Dirt Chip: I used to have an NV motherboard with an AMD processor in it, an Athlon 3200+.
Good old times...
They were good indeed.
#35
TheinsanegamerN
The Shield: For God's sake, the last thing we need is Nvidia entering the CPU business and actively trying to increase market prices.
Competition INCREASES prices? LMAO OK.
Nhonho: I don't understand why Nvidia doesn't start its journey in the CPU market with RISC-V CPUs from the beginning.

By starting right away with RISC-V, software developers would build up a vast amount of apps that run natively on its RISC-V CPUs.

If Nvidia launches ARM CPUs now and then decides in about 5 years to launch RISC-V CPUs, the same old mess that we already know about will happen again, in hardware or software (or both), to get the old ARM apps to run on their future RISC-V CPUs.

The guy in this video, who knows a lot about CPU development, said that the RISC-V architecture is the best:
(watch from 28:50)
Because RISC-V isn't a competitive ISA. It still needs a lot of work, and as long as ARM already exists, it doesn't make financial sense to invest tens of billions to make RISC-V work yet.
watzupken: When Apple succeeded in proving that ARM chips are capable of competing with x86 chips, it pretty much opened the door for more chip designers to do the same for Windows devices. Enter Qualcomm this year, which again proved it is competitive, especially in the laptop/mobile segment, which is far more lucrative than desktops due to higher margins and volumes. So with Nvidia entering this space, Intel and AMD's worst nightmare is happening. Qualcomm's SD Elite is powerful, but hampered by not-so-good graphics and compatibility issues. By the time Nvidia joins the party, I would expect some of these early-adoption compatibility issues to go away. And they are no newbie in the GPU space. So if AMD is still slowly spinning RDNA 3.5, 3.6, 3.7 or so, they are going to be in trouble in the next 2 years or so.
Those "compatibility issues" have existed for a decade by this point. If they have not been ironed out now, spoiler alert, they wont be ironed out next year either.

Apple proved that, in a vertically controlled stack, ARM can perform really well. Which isn't a surprise; they got PowerPC to perform well too. Strangely, that never caught on in the Windows world either. We also have to consider size: the M-series chips are MASSIVE. The M4 Max has 28 billion transistors, over double a 7950X (13 billion). So, yeah, it had better be faster.

What Qualcomm has proved is that with similarly sized cores it's rather difficult to get x86 performance out of ARM. They're closer, but still not there, a tale that is 11 years old now.
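For scale, the transistor-count comparison above works out roughly like this (figures as quoted in the comment, not independently verified):

```python
# Transistor counts as quoted in the comment above (in billions).
m4_max = 28
ryzen_7950x = 13

ratio = m4_max / ryzen_7950x
print(f"M4 Max has {ratio:.2f}x the transistors of a 7950X")  # about 2.15x
```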
#36
Assimilator
Nhonho: I don't understand why Nvidia doesn't start its journey in the CPU market with RISC-V CPUs from the beginning.
Because RISC-V is politically compromised since so many Chinese vendors have contributed to its specification; no Western company that's serious about selling lots of semiconductors is going to touch it with a barge pole because no Western government is going to allow it in anything that government buys.
TheinsanegamerN: Apple proved that, in a vertically controlled stack, ARM can perform really well. Which isn't a surprise; they got PowerPC to perform well too. Strangely, that never caught on in the Windows world either. We also have to consider size: the M-series chips are MASSIVE. The M4 Max has 28 billion transistors, over double a 7950X (13 billion). So, yeah, it had better be faster.
Apple CPUs also have at least double the number of memory channels that consumer x86 parts do. That's one of the reasons their showing is so good in synthetic benchmarks, and also why synthetic benchmarks are garbage.
#37
TheinsanegamerN
Assimilator: Apple CPUs also have at least double the number of memory channels that consumer x86 parts do. That's one of the reasons their showing is so good in synthetic benchmarks, and also why synthetic benchmarks are garbage.
They also have sub-processors, just like PowerPC G-series chips did, that do allow for incredible performance when optimized for, but they are very situational and don't pan out across a variety of software.
#38
swaaye
It's not like they ever left CPUs.

I wish the Shield Tablet X1 had been released. I still occasionally plug the Shield Tablet K1 into the TV and play some games or use it as a media player.
#40
Onasi
@Initialised
Lolno, I have been hearing about the supposedly inevitable demise of x86 for as long as I have been a techie (so quite a while), about how it's bloated and inefficient, yada yada. To absolutely no surprise, it's still here and isn't going anywhere. By the time I am absolutely cooked and shuffle off this mortal coil, I am willing to bet quite a substantial sum of money that x86 will still be here and will still be one of the dominant ISAs.
#41
Craptacular
Initialised: RIP x86, it's been a good run.

Joke's on death, Cell was a PowerPC CPU.
#42
The Shield
TheinsanegamerN: Competition INCREASES prices? LMAO OK.
Nvidia's notorious marketing, famed for bringing competition and decreasing prices... oh... wait...
#43
R0H1T
Initialised: RIP x86, it's been a good run.

Meanwhile, in the real world ~
#44
R-T-B
TheinsanegamerN: They also have sub-processors, just like PowerPC G-series chips did, that do allow for incredible performance when optimized for, but they are very situational and don't pan out across a variety of software.
FYI, PowerPC never had "sub-processors." The only thing it had going on was AltiVec, an instruction set extension that at the time bested SSE. It was more like SSE3-level in capability, but it wasn't a literal "sub-processor."
#45
Vya Domus
dj-electric: Lots of heavy lifting is done on silicon to enable proper acceleration.
That's not the CPU doing all that.
tpuuser256: I had a gut feeling this was going to happen.
You guys know they already tried this before, like a decade ago, right?
#46
tpuuser256
Vya Domus: That's not the CPU doing all that.

You guys know they already tried this before, like a decade ago, right?
Not until now for me, but it's a completely different context. Nowadays Windows on Arm is being pushed for more widespread adoption, mainly by improving software compatibility, and Nvidia is now targeting high-end consumer desktops/laptops.
#47
Nhonho
I want Nvidia to come in and put the sole of its shoe on Intel's and AMD's chests, so that these two companies launch new CPUs with higher IPC and lower power consumption.
#48
R-T-B
Vya Domus: That's not the CPU doing all that.
It absolutely is. This is like saying SSE or similar is "not the CPU."
Nhonho: I want Nvidia to come in and put the sole of its shoe on Intel's and AMD's chests, so that these two companies launch new CPUs with higher IPC and lower power consumption.
It'd need to be a lot higher to overcome the emulation penalty on legacy x86 games, which is like... everything.
#49
Vya Domus
R-T-B: It absolutely is. This is like saying SSE or similar is "not the CPU."
What do you mean? All those benchmarks you see with crazy fast rendering times are using hardware encoding, not the CPUs. High-end x86 CPUs still absolutely destroy Apple chips in software video encoding.
#50
Nhonho
R-T-B: It'd need to be a lot higher to overcome the emulation penalty on legacy x86 games, which is like... everything.
Can you write an article about the duration of technology patents for CPUs, GPUs, and chips in general? Is it true that they only last 20 years and then become public?

And does a patent's age start counting from the date it is filed or from the date the chip is manufactured?

If this is true, other companies could already create clones of the Athlon 64 that natively run x86 code, although they might still lack access to the POPCNT instruction, necessary to run Windows 11 24H2, which may have been patented less than 20 years ago.
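For context, all POPCNT does is count the set bits in a word; a minimal software equivalent, as an illustrative sketch in plain Python:

```python
def popcount(x: int) -> int:
    """Software equivalent of the x86 POPCNT instruction:
    count the number of set bits in a non-negative integer."""
    count = 0
    while x:
        x &= x - 1  # Kernighan's trick: clear the lowest set bit
        count += 1
    return count

print(popcount(0b1011))  # 3 bits set
```

On hardware without the instruction, compilers and the OS must fall back to loops like this (or table lookups), which is why its absence matters for Windows 11 24H2.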

Nvidia could make the iGPU of its APUs and SoCs usable as a co-processor (and not just an image generator), in the same way I wrote in this post about Intel and AMD:
Nhonho: A good way for Intel and AMD to increase the performance of their x86 processors, in the face of the growth of their ARM and RISC-V competitors, would be for both to make the iGPU of their APUs and SoCs usable as a co-processor for general-purpose processing by the OS, apps, and even games. The iGPU should be used as a co-processor even by games running on a dedicated GPU (AIC/VGA).

The iGPU, used as a co-processor, is capable of being dozens of times faster than x86 cores.

And, of course, there should be a standard between Intel and AMD processors so that the same software can run on the iGPUs of both companies.

If Nvidia starts to push strongly into the ARM processor market, it can implement the above easily and quickly, as it already has all the GPU hardware technology and software support ready, and it also has an extremely good relationship with software developers.
https://www.techpowerup.com/forums/threads/what-the-intel-amd-x86-ecosystem-advisory-group-is-and-what-its-not.327755/post-5356211