Friday, January 29th 2021

AMD Files Patent for Chiplet Machine Learning Accelerator to be Paired With GPU, Cache Chiplets

AMD has filed a patent describing an MLA (Machine Learning Accelerator) chiplet design that can be paired with a GPU chiplet (such as RDNA 3) and a cache chiplet (likely a GPU-excised version of AMD's Infinity Cache design, debuted with RDNA 2) to create what AMD calls an "APD" (Accelerated Processing Device). The design would thus enable AMD to create a chiplet-based machine learning accelerator whose sole function would be to accelerate machine learning - specifically, matrix multiplication. This would enable capabilities not unlike those available through NVIDIA's Tensor cores.

This could give AMD a modular way to add machine-learning capabilities to several of their designs through the inclusion of such a chiplet, and might be AMD's way of achieving hardware acceleration of a DLSS-like feature. It would avoid the shortcomings of implementing these capabilities in the GPU die itself - increased overall die area, and thus higher cost and reduced yields - while enabling AMD to deploy the accelerator in products other than GPU packages. The patent describes the possibility of different manufacturing technologies being employed in the chiplet-based design - harkening back to the I/O modules in Ryzen CPUs, manufactured on a 12 nm process rather than the 7 nm one used for the core chiplets. The patent also describes acceleration of cache requests from the GPU die to the cache chiplet, which can be used on the fly either as actual cache or as directly-addressable memory.
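For illustration, the primitive such an accelerator would speed up is the matrix-multiply-accumulate D = A·B + C - the same per-tile operation NVIDIA's Tensor cores expose. A minimal sketch in plain Python (the function name and shapes are illustrative, not from the patent):

```python
def matmul_accumulate(A, B, C):
    """D = A @ B + C for row-major lists of lists (plain Python for clarity)."""
    n, k, m = len(A), len(B), len(B[0])
    D = [row[:] for row in C]          # start from the accumulator C
    for i in range(n):
        for j in range(m):
            for p in range(k):
                D[i][j] += A[i][p] * B[p][j]   # one multiply-add per step
    return D

# 2x2 example
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
print(matmul_accumulate(A, B, C))  # [[19, 22], [43, 50]]
```

Dedicated matrix units perform many of these multiply-adds per cycle on small fixed-size tiles, which is what makes them so much denser than general-purpose shader ALUs for this one workload.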
Sources: Free Patents Online, via Reddit

28 Comments on AMD Files Patent for Chiplet Machine Learning Accelerator to be Paired With GPU, Cache Chiplets

#1
FreedomEclipse
~Technological Technocrat~
Cache me outside, howboudah.
#2
TumbleGeorge
LoL. Artificial intellect in a GPU? Will it be possible to talk in street language with next-gen graphics cards?
#3
Vya Domus
Interesting. Clearly GPUs are being left in the dust by dedicated accelerators for certain computations. But dedicated accelerators are inflexible, while GPUs are fully programmable and can run just about anything - and they both have a major problem: memory bandwidth. This is a nice way of solving all of those problems.
#4
OGoc
With the purchase of Xilinx, I suspect AMD will start reducing the number of hardware-accelerated functions like encode/decode and machine learning, replacing them with an FPGA section. Perfect for a chiplet implementation.
#5
crimsontape
I'm loving this idea; I've been thinking about this for a while now. Add a little machine-learning hardware to an APU SoC or a GPU, and things could get really interesting for the general-use, gaming, and GPU-computing markets. Like OGoc said, match this with FPGA tech, and the potential becomes pretty obvious.
#7
ZoneDymo
dicktracy: Just give it up AMD
it?

as in....

#8
zlobby
FreedomEclipse: Cache me outside, howboudah.
Top kek! :D
TumbleGeorge: LoL. Artificial intellect in a GPU? Will it be possible to talk in street language with next-gen graphics cards?
Like 'Oi, bruv! Imma shagg some b*itches in GTA5, innit?'

Let's hope this AI doesn't pick the street lingo of the LoL community!
#10
dragontamer5788
OGoc: With the purchase of Xilinx, I suspect AMD will start reducing the number of hardware-accelerated functions like encode/decode and machine learning, replacing them with an FPGA section. Perfect for a chiplet implementation.
Those Xilinx FPGAs have VLIW SIMD cores - probably more similarities to a GPU than you might think.

Yeah, there are some LUTs on those FPGAs, but the actual computational girth comes from these babies: www.xilinx.com/support/documentation/white_papers/wp506-ai-engine.pdf
#11
DonKnotts
I'm sorry, I just don't want them to add yet another reason to raise the prices of these already far too expensive graphics cards. I have 0 excitement for this.
#12
TheoneandonlyMrK
I would love to see AMD's and Intel's lists of future secret-sauce chiplets. This isn't unexpected: what Arm, Apple and many more do with specific hardware, x86 will leverage with more heavy-hitting but adaptable circuitry.
Would be nice if AMD got a single API for it too.
#13
Wirko
Paganstomp: Chicklets Machine...
Oh, so that's what a poor man's ASML machine looks like. Sweet.
#14
Aquinus
Resident Wat-man
You get a chiplet, and you get a chiplet, and you get a chiplet! Chiplets for everyone! WOO!

In all seriousness, this just sounds like AMD doing more of the same thing they've been working towards for years. You have an I/O chiplet, and a CPU chiplet, and soon we'll have GPU chiplets and AI accelerator chiplets. We've already seen that this can scale well, so this should be an exciting prospect for future products. An APU with one of all of the above would be one hell of a chip.
#15
1d10t
Apparently the first attempt at an RT implementation didn't go well, and AMD is trying to solve it with another "glue". With another bump in cache size and a lean towards agnostic function, I can see wider adoption, not just RT in gaming.
#16
dragontamer5788
1d10t: Apparently the first attempt at an RT implementation didn't go well, and AMD is trying to solve it with another "glue". With another bump in cache size and a lean towards agnostic function, I can see wider adoption, not just RT in gaming.
AMD has had so many patents over the years that I've basically stopped paying attention to patents in general.

Remember "Super ALUs"? Yeah, they're not around. AMD decided against them for whatever reason. Maybe it wasn't as good as other techniques they had, or maybe they ran some simulations and it could have made things worse. Just wait for the whitepapers to come out.
#17
Vayra86
The next AMD meme

MOAR CHIPLUTZ
#18
Vya Domus
1d10t: Apparently the first attempt at an RT implementation didn't go well, and AMD is trying to solve it with another "glue".
This has zero to do with RT.
#19
Nkd
DonKnotts: I'm sorry, I just don't want them to add yet another reason to raise the prices of these already far too expensive graphics cards. I have 0 excitement for this.
What are you talking about? This actually makes things cheaper, because you are not making one big fat die like NVIDIA; eventually you are going to need chiplets, because you are not going to keep shrinking forever. AMD is just ahead of everyone and has been working towards this for years. You have nothing to worry about, lol.
dragontamer5788: AMD has had so many patents over the years that I've basically stopped paying attention to patents in general.

Remember "Super ALUs"? Yeah, they're not around. AMD decided against them for whatever reason. Maybe it wasn't as good as other techniques they had, or maybe they ran some simulations and it could have made things worse. Just wait for the whitepapers to come out.
Clearly this is totally different and fits exactly into their future game plan. Not all patents are the same; some do have big implications, lol.
#20
TheoneandonlyMrK
Nkd: What are you talking about? This actually makes things cheaper, because you are not making one big fat die like NVIDIA; eventually you are going to need chiplets, because you are not going to keep shrinking forever. AMD is just ahead of everyone and has been working towards this for years. You have nothing to worry about, lol.

Clearly this is totally different and fits exactly into their future game plan. Not all patents are the same; some do have big implications, lol.
I agree, but I think the point of chiplets is that they are one of the few ways to make cutting-edge nodes financially viable, and as time goes by this is only going to escalate: by 2 nm, EUV processing and the increase in mask costs are putting the cost of a complete wafer up considerably, and advanced packaging technology isn't cheaper packaging technology. AMD were ahead of the game, but EMIB does rule some of that gain out. Interesting times, what with others still stuck on monolithic designs - Apple, for example.
#21
daehxxiD
Ohhh... This sounds quite promising! Shame most machine learning frameworks are written for CUDA. I hope that if this comes out, bigger frameworks like TensorFlow or PyTorch make use of it.
#22
Patriot
daehxxiD: Ohhh... This sounds quite promising! Shame most machine learning frameworks are written for CUDA. I hope that if this comes out, bigger frameworks like TensorFlow or PyTorch make use of it.
TensorFlow was made by Google for their own TPU hardware; it works on anyone's hardware.
github.com/ROCmSoftwarePlatform/tensorflow-upstream - supported via ROCm for a couple of years now.
PyTorch has been supported since ROCm 3.7; 4.01 is current. github.com/aieater/rocm_pytorch_informations

NVIDIA's stuff is definitely a bit more plug-and-play, and AMD's engineering support is only now ramping up; they have a long way to go to catch up.

There are a lot of interesting accelerators on the market now; it's a fun time.
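For anyone wanting to try this: on ROCm builds of PyTorch, AMD GPUs are exposed through the same torch.cuda API (HIP maps onto it), so the usual CUDA-style device-selection idiom works unchanged. A small illustrative sketch (the pick_device helper is mine, not part of PyTorch):

```python
def pick_device(accelerator_available: bool) -> str:
    """Return the device string a script would use."""
    return "cuda" if accelerator_available else "cpu"

try:
    # Works for both CUDA and ROCm wheels of PyTorch; on ROCm,
    # torch.cuda.is_available() reports AMD GPUs.
    import torch
    device = pick_device(torch.cuda.is_available())
except ImportError:
    device = pick_device(False)  # torch not installed; fall back to CPU

print(device)
```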
#23
1d10t
Vya Domus: This has zero to do with RT.
Bummer, I thought matrix multiplication sounded like a complex version of Fused Multiply-Add :D
"The design would thus enable AMD to create a chiplet-based machine learning accelerator whose sole function would be to accelerate machine learning - specifically, matrix multiplication"
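That intuition isn't far off, actually: each element of a matrix product is a chain of fused multiply-adds, which is why matmul units are typically built as arrays of FMA hardware. A toy sketch in Python:

```python
def fma(a, b, c):
    """Fused multiply-add: a*b + c (one operation, one rounding, in hardware)."""
    return a * b + c

def dot(x, y):
    """Each output element of a matrix product is just a chain of FMAs."""
    acc = 0
    for a, b in zip(x, y):
        acc = fma(a, b, acc)
    return acc

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```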
#24
voltage
Years ago Intel created the first chiplets; why didn't they patent the idea then? Maybe they didn't do so because of that previous do-nothing CEO they had? (I am referring to the CEO who was getting his noodle wet with an employee.)
#25
pantherx12
voltage: Years ago Intel created the first chiplets; why didn't they patent the idea then? Maybe they didn't do so because of that previous do-nothing CEO they had? (I am referring to the CEO who was getting his noodle wet with an employee.)
What chiplets are you referring to?

If it's the old dual-core designs with two chips, those were two full-blown single-core chips on one package. Not quite the same as a chiplet.