Monday, November 4th 2024

New Arm CPUs from NVIDIA Coming in 2025

According to DigiTimes, NVIDIA is targeting the high-end segment for its first consumer CPU attempt. Slated to arrive in 2025, NVIDIA is partnering with MediaTek to break into the AI PC market currently being popularized by Qualcomm, Intel, and AMD. With Microsoft and Qualcomm laying the foundation for Windows-on-Arm (WoA) development, NVIDIA plans to join in and leverage its massive ecosystem of partners to design and deliver regular applications and games for its Arm-based processors. At the same time, NVIDIA is also scheduled to launch "Blackwell" GPUs for consumers, which could end up in these AI PCs with an Arm CPU at their core.

NVIDIA's partner MediaTek has recently launched a big-core mobile SoC, the Dimensity 9400. NVIDIA could use something like that as a base for its own SoC and add its Blackwell GPU IP to the mix. This would be similar to what Apple is doing with Apple Silicon and the recent M4 Max chip, which is apparently the fastest CPU in single-threaded and multithreaded workloads, per recent Geekbench results. NVIDIA already has a team of CPU designers that delivered its Grace CPU to enterprise and server customers; built on off-the-shelf Arm Neoverse IP, Grace systems are selling as fast as they can be produced. This raises hopes for NVIDIA's upcoming AI PC, which could offer a selling point no other WoA device currently provides: a tried-and-tested gaming-grade GPU with AI accelerators.
Sources: DigiTimes, via NotebookCheck

54 Comments on New Arm CPUs from NVIDIA Coming in 2025

#51
lilhasselhoffer
Nhonho: Can you write an article about the duration of technology patents for the manufacture of CPUs, GPUs, and chips in general? Is it true that they only last 20 years and then become public?

And does the age of a patent start counting on the date it is registered or from the date the chip is manufactured?

If this is true, other companies can already create clones of the Athlon 64 that natively run x86 code, although they may still not have access to the POPCNT instruction required by Windows 11 24H2, since that instruction may have been patented less than 20 years ago.

Nvidia could let the iGPU of its APUs and SoCs be used as a co-processor (and not just an image generator), in the same way I wrote about in this post regarding Intel and AMD:


https://www.techpowerup.com/forums/threads/what-the-intel-amd-x86-ecosystem-advisory-group-is-and-what-its-not.327755/post-5356211
Let's break this one down, because you are equating two different things.

1) The type of patent matters. In the US, a utility or plant patent runs 20 years (kept in force via maintenance fees).

2) Intel owns a boatload of patents, many of which stem from earlier ones, so no sane lithography company would build "an old Intel chip" for you.

3) Even if Intel only got design patents, that's 14 or 15 years. Again, something quite harrowing when Sandy Bridge would theoretically be the newest thing you could copy...and it needs a TPM module to run Windows 11.

4) A lot of what Intel does is covered under trade secrets, which never expire. This is why we still don't know the herbs and spices that make KFC finger lickin' good.

5) Running an Arm CPU and using the GPU as a co-processor sounds great, if only all code were written to take advantage of the pipeline GPUs require. Since most programs are written for far fewer than hundreds of cores, stream processors and the like are a bad way to arrange the computer's power.

6) 2008: Nvidia Tegra. They managed to go from Android to Ubuntu to Windows CE. I don't think they'd have to invest a whole lot more, given that.


Regarding that initial bit: no sane person would write about US patent law. There's a reason there are so many lawsuits, why executives jumping between companies usually brings some lawsuit over trade secrets, and why patent law is a morass of "ineffective until tested in a court of law." There's a reason patent lawyers are insanely rich, and why it takes decades to hammer out any settlement. When you've got patents on circuitry, it takes a small army of lawyers swarming over all the other filed patents to determine whether something can actually clear the bar to be patented...and if that doesn't give you an aneurysm, the costs of that litigation should.
Posted on Reply
#52
R-T-B
Vya Domus: What do you mean? All those benchmarks you see with crazy-fast rendering times are using hardware encoding, not the CPUs; high-end x86 CPUs still absolutely destroy Apple chips in software video encoding.
Oh sorry, I got mixed up. Yeah, video encoding is separate.

I do think the gap is far closer as of the M4, if not closed entirely, software-encoding-wise. But yeah.
Posted on Reply
#53
Nhonho
lilhasselhoffer: As most programs are written to take advantage of far fewer than hundreds of cores...stream processors and the like are a bad way to arrange the computer's power.
I didn't say that all apps and all app operations can be performed by GPUs.

I said that some operations of some apps can be performed very well on GPUs, and in these cases, the performance gain is tens of times greater than if the same operation were performed by CPU cores.
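The offload idea above can be sketched in a few lines. This is purely an illustrative example of "same code, GPU if present, CPU otherwise"; the CuPy library and the `blend` helper are my own choices for the sketch, not anything proposed in the thread or by NVIDIA:

```python
# Minimal sketch: a data-parallel operation that runs on the GPU when a
# GPU-backed array library (here CuPy, assumed for illustration) is
# installed, and falls back to plain NumPy on the CPU otherwise.
import numpy as np

try:
    import cupy as xp  # GPU-backed, NumPy-compatible API
    on_gpu = True
except ImportError:
    xp = np            # CPU fallback keeps the code runnable everywhere
    on_gpu = False

def blend(a, b, alpha=0.25):
    """Elementwise mix of two arrays: the kind of embarrassingly
    parallel work that maps well onto thousands of GPU lanes."""
    return alpha * a + (1.0 - alpha) * b

a = xp.ones(1_000_000, dtype=xp.float32)
b = xp.zeros(1_000_000, dtype=xp.float32)
out = blend(a, b)  # dispatched to GPU or CPU transparently
```

The point of the pattern is that only data-parallel hot spots are routed to the GPU, while the rest of the program stays on the CPU, which matches the "some operations of some apps" framing above.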

And the first of the big 3 (Intel, AMD, Nvidia) to implement this will sell a lot of processors.

AMD engineers had this idea of certain operations being executed by GPUs more than 20 years ago. This was even one of the reasons AMD bought ATI.

Since Nvidia will enter the APU and SoC market in force, it can implement this idea quickly, because it already has all the necessary technology ready. And Nvidia has always had an excellent relationship with software developers, who quickly optimize their software to take advantage of everything Nvidia hardware offers.

------------------------

Do you know why Apple hasn't yet put hardware encoding of AV1 videos on its Mx SoCs?
Posted on Reply
#54
firejohn
Dirt Chip: I used to have an NV motherboard with an AMD processor in it - an Athlon 3200+.
Good old times...
:) Ah right, I forgot: I also had an nForce or nForce 2 mainboard. It introduced dual-channel RAM for consumers. I had to read up on Wikipedia again; I forget things I used to know.
Posted on Reply