
New Arm CPUs from NVIDIA Coming in 2025

Joined
Oct 24, 2022
Messages
190 (0.25/day)
It'd need to be a lot higher to overcome the emulation penalty on legacy x86 games, which is like... everything.

Can you write an article about the duration of technology patents for the manufacture of CPUs, GPUs and chips in general? Is it true that they only last 20 years and then become public?

And does a patent's term start counting from the date it is registered, or from the date the chip is manufactured?

If this is true, other companies could already create clones of the Athlon 64 that natively run x86 code, although they may still lack access to the POPCNT instruction, which is required to run Windows 11 24H2 and may have been registered less than 20 years ago.
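
For reference, POPCNT can be probed and exercised from C; below is a minimal sketch using GCC/Clang builtins (the compiler lowers the builtin to the actual POPCNT instruction only when the target supports it):
Code:
#include <stdio.h>
#include <stdint.h>

/* Counts set bits; with -mpopcnt (or a supporting -march) GCC/Clang
   compile this builtin down to the single POPCNT instruction. */
static uint64_t bits_set(uint64_t x)
{
    return (uint64_t)__builtin_popcountll(x);
}

int main(void)
{
    /* Runtime CPUID probe, akin to the check Windows 11 24H2 performs. */
    if (__builtin_cpu_supports("popcnt"))
        printf("POPCNT supported in hardware\n");
    else
        printf("No POPCNT (e.g. Athlon 64-era x86)\n");

    printf("bits_set(0xF0F0) = %llu\n", (unsigned long long)bits_set(0xF0F0));
    return 0;
}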

Nvidia could let the iGPU of its APUs and SoCs serve as a co-processor (and not just an image generator), in the same way I described in this post about Intel and AMD:

A good way for Intel and AMD to increase the performance of their x86 processors, in the face of growing ARM and RISC-V competition, would be to let the iGPU of their APUs and SoCs serve as a general-purpose co-processor, usable by the OS, apps and even games. The iGPU should act as a co-processor even in games rendered by a dedicated GPU (AIC/VGA).

Used as a co-processor, the iGPU can be dozens of times faster than the x86 cores on suitable workloads.

And, of course, Intel and AMD would need a common standard so that the same software can run on both companies' iGPUs; a minimal sketch of such cross-vendor offload appears after the link below.

If Nvidia starts to act strongly in the ARM processor market, it could implement the above easily and quickly, as it already has all the GPU hardware technology and software support ready, and it has an extremely good relationship with software developers.
Code:
https://www.techpowerup.com/forums/threads/what-the-intel-amd-x86-ecosystem-advisory-group-is-and-what-its-not.327755/post-5356211
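
To make that concrete, here is a minimal, hypothetical C sketch that offloads one data-parallel operation through OpenCL, which is already the kind of cross-vendor standard the post calls for; error checking is omitted for brevity, and the kernel and variable names are illustrative:
Code:
/* Offload one operation to whatever GPU (iGPU or dGPU) the OpenCL
   runtime exposes. Error checks omitted to keep the sketch short. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    size_t i = get_global_id(0);"
    "    v[i] = v[i] * k;"
    "}";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;                /* any GPU: an iGPU qualifies */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof factor, &factor);

    size_t n = 1024;                 /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[3] = %f (expected 6.0)\n", data[3]);
    return 0;
}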
 
Joined
Apr 2, 2011
Messages
2,810 (0.56/day)
Can you write an article about the duration of technology patents for the manufacture of CPUs, GPUs and chips in general? Is it true that they only last 20 years and then become public?

And does a patent's term start counting from the date it is registered, or from the date the chip is manufactured?

If this is true, other companies could already create clones of the Athlon 64 that natively run x86 code, although they may still lack access to the POPCNT instruction, which is required to run Windows 11 24H2 and may have been registered less than 20 years ago.

Nvidia could let the iGPU of its APUs and SoCs serve as a co-processor (and not just an image generator), in the same way I described in this post about Intel and AMD:


Code:
https://www.techpowerup.com/forums/threads/what-the-intel-amd-x86-ecosystem-advisory-group-is-and-what-its-not.327755/post-5356211

Let's break this one down... because you are equating two different things.

1) The type of patent matters. In the US, a utility or plant patent runs 20 years from filing, and a utility patent must be kept in force with periodic maintenance fees.
2) Intel owns a boatload of patents, many of which stem from earlier ones... so no sane lithography company would create "an old Intel chip" for you.

3) Even if Intel only held design patents, that's 14 or 15 years of protection. Again, rather sobering when Sandy Bridge would theoretically be the latest design you could copy... and it would still need a TPM module to run Windows 11.
4) A lot of what Intel does is covered under trade secrets...which never expire. This is why we still don't know the herbs and spices that make KFC finger lickin' good.

5) Running an ARM CPU and using the GPU as a co-processor sounds great, if only all code were written to fit the pipeline that GPUs require. Since most programs are written to exploit far fewer than hundreds of cores, stream processors and the like are a bad way to arrange the computer's power (see the sketch after this list).
6) 2008: Nvidia Tegra. They managed to go from Android, to Ubuntu, to Windows CE. I don't think they'd have to invest a whole lot more, given that.
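
To illustrate point 5, a small C sketch: the first loop has the independent, element-wise shape a GPU's stream processors can swallow, while the second has the chained dependency typical of most application code (function names are illustrative):
Code:
#include <stdio.h>
#include <stddef.h>

/* Data-parallel: iterations are independent, so a GPU could hand each
   element to a different stream processor. */
static void brighten(float *px, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        px[i] *= k;               /* no iteration depends on another */
}

/* Serially dependent: iteration i needs the result of iteration i-1,
   so thousands of idle stream processors cannot help. */
static float smooth(const float *v, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc = 0.9f * acc + v[i];  /* chained dependency */
    return acc;
}

int main(void)
{
    float px[4] = {1, 2, 3, 4};
    brighten(px, 4, 2.0f);
    printf("brightened: %.0f %.0f %.0f %.0f\n", px[0], px[1], px[2], px[3]);
    printf("smoothed:   %f\n", smooth(px, 4));
    return 0;
}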


Regarding that initial bit... no sane person would write about US patent law. There's a reason there are so many lawsuits, why executives jumping between companies usually comes with some trade-secret litigation, and why patent law is a morass of "ineffective until tested in a court of law." There's a reason those patent lawyers are insanely rich, and why it takes decades to hammer out any settlements. With circuitry patents, it takes a small army of lawyers swarming over all of the other filed patents to determine whether something actually clears the bar to be patentable... and if that doesn't give you an aneurysm, the costs of that litigation should.
 
Joined
Aug 20, 2007
Messages
21,453 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
What do you mean? All those benchmarks you see with crazy fast rendering times are using hardware encoding, not the CPUs; high-end x86 CPUs still absolutely destroy Apple chips in software video encoding.
Oh sorry, I got mixed up. Yeah, video encoding is separate.

I do think the gap is far narrower as of the M4, if not closed entirely, software-encoding-wise. But yeah.
 
Joined
Oct 24, 2022
Messages
190 (0.25/day)
Since most programs are written to exploit far fewer than hundreds of cores, stream processors and the like are a bad way to arrange the computer's power.

I didn't say that all apps and all app operations can be performed by GPUs.

I said that some operations of some apps can be performed very well on GPUs, and in those cases the performance is tens of times higher than if the same operation were performed by CPU cores.

And the first of the big 3 (Intel, AMD, Nvidia) to implement this will sell a lot of processors.

AMD engineers had this idea of certain operations being executed by GPUs more than 20 years ago. This was even one of the reasons AMD bought ATI.

Since Nvidia will enter the APU and SoC market strongly, it can implement this idea quickly because it already has all the necessary technology ready. And Nvidia has always had an excellent relationship with software developers, who quickly optimize their software to take advantage of everything Nvidia hardware offers. A sketch of how an app would dispatch such an operation follows below.
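
As a hypothetical sketch of what that looks like from the application side, here is a C fallback-dispatch pattern; the probe and GPU path are stubs (names invented for illustration) so the example stays self-contained:
Code:
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

/* Stubbed probe -- a real app would query OpenCL, Level Zero, ROCm, etc. */
static bool igpu_available(void) { return false; }

static void scale_on_cpu(float *v, size_t n, float k)
{
    for (size_t i = 0; i < n; i++) v[i] *= k;
}

/* GPU path stubbed to the CPU loop; a real build would enqueue a kernel
   like the OpenCL sketch earlier in the thread. */
static void scale_on_igpu(float *v, size_t n, float k)
{
    scale_on_cpu(v, n, k);
}

/* App-facing entry point: offload only the operations that benefit. */
static void scale(float *v, size_t n, float k)
{
    if (igpu_available())
        scale_on_igpu(v, n, k);
    else
        scale_on_cpu(v, n, k);
}

int main(void)
{
    float v[4] = {1, 2, 3, 4};
    scale(v, 4, 10.0f);
    printf("%.0f %.0f %.0f %.0f\n", v[0], v[1], v[2], v[3]);
    return 0;
}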

------------------------

Do you know why Apple still hasn't put hardware AV1 encoding on its Mx SoCs?
 
Joined
Jan 18, 2020
Messages
30 (0.02/day)
I used to have an NV motherboard with an AMD processor in it - an Athlon 3200+.
Good old times...
:) Ah right, I forgot, I also had an nForce or nForce 2 mainboard. It introduced dual-channel RAM to consumers. I had to read up on Wikipedia again; I forget things I used to know.
 