Tuesday, October 29th 2024

Social Media Imagines AMD "Navi 48" RDNA 4 to be a Dual-Chiplet GPU

A user on the Chinese tech forum ChipHell who goes by zcjzcj11111 came up with a fascinating take on what the next-generation AMD "Navi 48" GPU could be, and put their imagination into a render. Apparently, the "Navi 48," which will power AMD's series-topping performance-segment graphics cards, is a dual-chiplet design, similar to the company's latest Instinct MI300 series AI GPUs. This won't be a disaggregated GPU such as the "Navi 31" and "Navi 32," but rather a scale-out multi-chip module of two GPU dies that could otherwise run on their own in single-die packages. You want to call this a multi-GPU-on-a-stick? Go ahead, but there are a couple of important differences.

On AMD's Instinct AI GPUs, the chiplets have full cache coherence with each other, and can address memory controlled by one another. This cache coherence makes the chiplets work like one giant chip. In a multi-GPU-on-a-stick, there would be no cache coherence; the two dies would be mapped by the host machine as two separate devices, and you'd be at the mercy of implicit or explicit multi-GPU technologies for performance to scale. This isn't what happens on the AI GPUs: despite multiple chiplets, the GPU is seen by the host as a single PCI device, with all its cache and memory visible to software as one contiguously addressable block.
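To put that difference in software terms, here's a minimal HIP enumeration sketch of our own (assuming a HIP-capable system; none of this comes from the ChipHell post). A cache-coherent MCM in the MI300 mold shows up as a single entry with the combined memory pool, whereas a classic dual-GPU card shows up as two entries that the application has to manage itself.

// Minimal sketch, assuming a HIP-capable system: the host only "sees" whatever
// the runtime enumerates, regardless of how many dies sit on the package.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        std::printf("No HIP devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, i);
        // A coherent multi-chiplet package reports one device with the combined
        // memory pool; a "multi-GPU-on-a-stick" would appear as two separate entries here.
        std::printf("Device %d: %s, %.1f GiB\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}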
We imagine the "Navi 48" is modeled along the same lines as the company's AI GPUs. The graphics driver sees this package as a single GPU. For this to work, the two chiplets are probably connected by Infinity Fabric Fanout links, an interconnect with far more bandwidth than a serial bus such as PCIe, which is likely needed for the cache coherence to be effective. The "Navi 44" is probably just one of these chiplets sitting in its own package.
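For a rough sense of that bandwidth gap, here's a back-of-the-envelope comparison of our own. The PCIe numbers follow from lane rate and 128b/130b encoding; the fanout figure is merely assumed to be in the ballpark of the roughly 5.3 TB/s AMD quoted for the GCD-to-MCD links on "Navi 31," since AMD has disclosed nothing about "Navi 48."

// Back-of-the-envelope comparison; all figures are assumptions or public spec math,
// not anything disclosed about "Navi 48."
#include <cstdio>

int main() {
    // PCIe bandwidth per direction: lane rate x lanes x 128b/130b encoding, in bytes/s.
    const double pcie4_x16 = 16e9 / 8.0 * (128.0 / 130.0) * 16;   // ~31.5 GB/s
    const double pcie5_x16 = 32e9 / 8.0 * (128.0 / 130.0) * 16;   // ~63 GB/s

    // Assumption: die-to-die fanout bandwidth in the same ballpark as the ~5.3 TB/s
    // AMD quoted for the Infinity Fanout links on "Navi 31" (RDNA 3).
    const double fanout_assumed = 5.3e12;

    std::printf("PCIe 4.0 x16: %.1f GB/s per direction\n", pcie4_x16 / 1e9);
    std::printf("PCIe 5.0 x16: %.1f GB/s per direction\n", pcie5_x16 / 1e9);
    std::printf("Assumed fanout: ~%.0fx PCIe 4.0 x16\n", fanout_assumed / pcie4_x16);
    return 0;
}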

In the render, the substrate and package are made to resemble those of the "Navi 32," which agrees with the theory that "Navi 48" will be a performance-segment GPU and a successor to the "Navi 32," "Navi 22," and "Navi 10," rather than a successor to enthusiast-segment GPUs such as the "Navi 21" and "Navi 31." This much was made clear by AMD in its recent interviews with the media.

Do we think the ChipHell rumor is plausible? Absolutely, considering that nobody took the very first renders of the AM5 package, with its oddly-shaped IHS, seriously either. A chiplet-based "Navi 48" would be entirely in character for a company like AMD, which loves chiplets, MCMs, and disaggregated devices.
Sources: ChipHell Forums, HXL (Twitter)

59 Comments on Social Media Imagines AMD "Navi 48" RDNA 4 to be a Dual-Chiplet GPU

#1
hsew
With such an approach, their GPU compute dies would be much larger than they currently are. That would explain them exiting the high-end segment though… if that picture is anywhere near accurate, 2 dies would be a sensible maximum for a consumer product.
Posted on Reply
#2
Lycanwolfen
So basically Crossfire on a single card. Or now a single chip.

Remember the good ole days of SLI.
Posted on Reply
#3
Bet0n
LycanwolfenSo basically Crossfire on a single card. Or now a single chip.

Remember the good ole days of SLI.
Exactly not. As the "news" suggests, the driver would see the two chips as one.
Posted on Reply
#4
GhostRyder
I know it's a rumor/leak, but it's not surprising. AMD has been pushing/working on this idea for a while. It would allow them to scale up performance with smaller chips, allowing for (hopefully) better yields. The issue is going to be making sure the chips can communicate fast enough so that latency doesn't become a problem.
Posted on Reply
#5
AsRock
TPU addict
Bet0nExactly not. As the "news" suggests, the driver would see the two chips as one.
AMD "Navi 48" GPU could be
That all remains to be seen.
Posted on Reply
#6
FreedomEclipse
~Technological Technocrat~
I've been saying this since the dual-core days, when I was running my AMD64 X2 3800+ (939, Manchester, clocked at 2.66 GHz...). If we could have dual-core CPUs, then why couldn't we have dual-core GPUs? Then we went down the road of having two GPUs on one PCB... those all died. Crossfire died, SLI died, and now we are finally going somewhere.

It's confused me for the longest time why we couldn't have a dual- or quad-chiplet GPU design. I thought that all the knowledge and experience that AMD gained working on desktop and server chips would carry over to the GPU side of things, but it never did till now.
Posted on Reply
#7
Chaitanya
LycanwolfenSo basically Crossfire on a single card. Or now a single chip.

Remember the good ole days of SLI.
This is similar to how Apple has been building its large M-series chips for desktops; not surprising to see Crossfire/SLI move onto the GPU die instead of going through PCIe slots.
Posted on Reply
#8
Space Lynx
Astronaut
I already thought 7900 XTX was a dual chiplet gpu? what am I missing... why is this big news?
Posted on Reply
#9
Lycanwolfen
Again, the driver sees one chip, but it's actually dual-core, dual-GPU.

Which is Crossfire on a single chip with dual GPUs.

And again, it was done before, 25 years ago.
www.techpowerup.com/gpu-specs/voodoo5-5500-agp.c3531

I'm sure Nvidia will do the same with something soon: a dual GPU on a single card.

History repeating itself again. Two was always better than one.

Or if you were one of the lucky ones to get the monster before they went out of business:
www.techpowerup.com/gpu-specs/voodoo5-6000.c3536

Even today, with some tweaking, this card can still play games. It was way, way ahead of its time.
Posted on Reply
#10
Daven
Space LynxI already thought 7900 XTX was a dual chiplet gpu? what am I missing... why is this big news?
The 7900XTX is one GPU die with six memory controller/cache dies. This reimagining is two GPUs 'glued' together, with each GPU containing everything: cores, cache, memory controller, etc.

If this is the way AMD goes, then I'm guessing each die has 40 CUs. That might also make sense with Strix Halo having one of these dies.
Posted on Reply
#11
Lycanwolfen
DavenThe 7900XTX is one GPU die with six memory controller/cache dies. This reimagining is two GPUs 'glued' together, with each GPU containing everything: cores, cache, memory controller, etc.

If this is the way AMD goes, then I'm guessing each die has 40 CUs. That might also make sense with Strix Halo having one of these dies.
Ya, well, the motherboard makers and video card makers decided to drop Crossfire and SLI because most of the hardcore gamers became soft gamers and no one wanted high end anymore. I guess we were a dying breed of gamers that demanded the best from the video card companies. Now it's all about the driver, the DLSS, and the software rendering.
Posted on Reply
#13
JohH
Nope
It's not happening.
Posted on Reply
#14
londiste
What exactly makes this rumor even remotely plausible?
Posted on Reply
#15
Bet0n
FreedomEclipseI've been saying this since the dual-core days, when I was running my AMD64 X2 3800+ (939, Manchester, clocked at 2.66 GHz...). If we could have dual-core CPUs, then why couldn't we have dual-core GPUs? Then we went down the road of having two GPUs on one PCB... those all died. Crossfire died, SLI died, and now we are finally going somewhere.

It's confused me for the longest time why we couldn't have a dual- or quad-chiplet GPU design. I thought that all the knowledge and experience that AMD gained working on desktop and server chips would carry over to the GPU side of things, but it never did till now.
Watch this, especially from 10:50
Posted on Reply
#16
windwhirl
FreedomEclipsethen why couldn't we have dual-core GPUs?
We had something close to that with Crossfire and SLI.

I imagine that, since GPUs have to present things on screen at a very fast rate, and take extreme care not to end up with, say, half the screen a couple of frames behind the other half, it's probably very complicated to do that when the two GPU "cores" are separate. Which is why SLI/Crossfire had their bridges at first and data synchronization over PCIe later, and the performance gain still wasn't that great.

Plus, if this is exposed to the software, developers are gonna whine about having to code for basically explicit multi-GPU lite™.
Posted on Reply
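For anyone wondering what the "explicit multi-GPU" burden from the comment above looks like in practice, here is a rough compute-flavored sketch of our own in HIP (the kernel, buffer names, and even split are invented for illustration): when the host sees two separate devices, the application has to enumerate them, divide the work, and synchronize the halves itself, whereas a coherent MCM that enumerates as one device hides all of this.

// Rough sketch of the per-device bookkeeping explicit multi-GPU requires; all names
// and the workload split are hypothetical, for illustration only.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void shade(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i * 0.5f;   // stand-in for real per-pixel work
}

int main() {
    int devices = 0;
    if (hipGetDeviceCount(&devices) != hipSuccess || devices == 0) return 1;
    const int n = 1 << 20;
    const int chunk = n / devices;              // split the "frame" across GPUs (remainder ignored for brevity)
    std::vector<float*> buffers(devices);

    for (int d = 0; d < devices; ++d) {
        hipSetDevice(d);                        // every step has to be repeated per device
        hipMalloc(reinterpret_cast<void**>(&buffers[d]), chunk * sizeof(float));
        hipLaunchKernelGGL(shade, dim3((chunk + 255) / 256), dim3(256), 0, 0,
                           buffers[d], chunk);
    }
    for (int d = 0; d < devices; ++d) {         // and the halves must then be synchronized/gathered
        hipSetDevice(d);
        hipDeviceSynchronize();
        hipFree(buffers[d]);
    }
    return 0;
}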
#17
JohH
londisteWhat exactly makes this rumor even remotely plausible?
It's not even a rumor, just someone on a forum asking "what if?" that somehow became a news story.
Posted on Reply
#18
AnotherReader
windwhirlWe had something close to that with Crossfire and SLI.

I imagine that, since GPUs have to present things on screen at a very fast rate, and take extreme care not to end up with, say, half the screen a couple of frames behind the other half, it's probably very complicated to do that when the two GPU "cores" are separate. Which is why SLI/Crossfire had their bridges at first and data synchronization over PCIe later, and the performance gain still wasn't that great.

Plus, if this is exposed to the software, developers are gonna whine about having to code for basically explicit multi-GPU lite™.
If the die-to-die bandwidth is in the same ballpark as the MI300X's, then it won't be a problem. However, I'm very skeptical of this rumour. It makes sense for the putative high-end RDNA 4, but doesn't seem likely for lower-end products.

Posted on Reply
#19
londiste
Social media also seems to be claiming that Nvidia is working on a 3-GPU monster.
Posted on Reply
#20
Franzen4Real
londisteWhat exactly makes this rumor even remotely plausible?
This is what the nVidia B200 is doing: using two chiplets that behave as one and are seen as a single die by the software. Or did you mean remotely plausible for AMD to pull off? Granted, the B200 is NOT a consumer-level GPU...

"For their first multi-die chip, NVIDIA is intent on skipping the awkward “two accelerators on one chip” phase, and moving directly on to having the entire accelerator behave as a single chip. According to NVIDIA, the two dies operate as “one unified CUDA GPU”, offering full performance with no compromises."

www.anandtech.com/show/21310/nvidia-blackwell-architecture-and-b200b100-accelerators-announced-going-bigger-with-smaller-data
Space LynxI already thought 7900 XTX was a dual chiplet gpu? what am I missing... why is this big news?
It is different in that instead of making the entire GPU die on the 5 nm node, they took the cache and memory controllers and fabbed them as chiplets on the older 6 nm node, because those parts do not benefit as much from a node shrink. All of the chiplets were then arranged on the package to make up the full GPU. This was an ingenious way to target the parts of the GPU that get the largest performance benefits from the 5 nm node shrink, while saving cost by not using a cutting-edge node on the parts that do not. Fantastic engineering, in my opinion.

www.techpowerup.com/review/amd-radeon-rx-7900-xtx/
Posted on Reply
#21
evernessince
AnotherReaderIf the die-to-die bandwidth is in the same ballpark as the MI300X's, then it won't be a problem. However, I'm very skeptical of this rumour. It makes sense for the putative high-end RDNA 4, but doesn't seem likely for lower-end products.

My thoughts exactly. The MI300X is using a more expensive interconnect. The whole reason AMD went with an organic substrate on consumer cards is to save cost.

Doesn't make a lot of sense with the rumor that AMD is targeting mid-range and low end this gen unless they solved the cost issue.
Posted on Reply
#22
mate123
What's next? "AI Imagines AMD "Navi 58" RDNA 5 to be a Quad-Chiplet GPU"?
Posted on Reply
#23
Lycanwolfen
It's strange Nvidia did not go dual or quad GPUs on a single die. I mean, it would have made sense; power efficient, for sure. Use four lower-power GPUs in one die and link them with NVLink, which they are already using in their high-end systems. You could have a single-card GPU drawing only about 200 watts with more GPU power than a single big huge die sucking back 600 watts. I personally am still running an SLI rig. I still have twin GTX 1080 Tis, and even games out today run fine and dandy on them, even though people say SLI is not supported. Ya, it might not be supported in the game as an option to turn on, but in the driver it is fully supported for all games. Most of the new games only see the one card even though the two are connected, and yes, both are working in the game, because both of their fans speed up and they heat up. I also tested this theory on some newer games with a watt meter. I ran one game that said no support, so I ran just one GTX 1080 Ti and measured the wattage; the system was drawing about 460 watts over the course of the game. Then I switched back to SLI, changed the driver config, loaded up the game, and this time the wattage was around 670 watts running the game at full tilt. So in turn it was using both cards to render. I run 4K in all games, so at 2160p one card is rendering the bottom and the other the top.

I remember back in the days of 1999/2000, when people had two Diamond Monster Voodoo2s in SLI. I had a rig exactly the same. I met many people that bought the same setup and complained they did not see much of a difference; well, guess what, 90% of them did not know how to configure the drivers to make the game use the cards. I showed them what settings to change, and where in the game and driver to change them, and they were like, OMG, this is freaking awesome. The original Unreal ran like butter on those Voodoo2s; I remember playing a lot of Unreal Tournament.

I'm an old-school gamer. Heck yes, my first home system was an Atari 2600, so when the Voodoos were around I was 25 years old. I'm an old timer now, so I have lots of knowledge of systems from 1985 onwards. It's like MS telling me my Core i7-3770K with 64 GB of RAM, a 1 TB NVMe drive, a 500 GB SSD, and a Quadro card did not meet the minimum requirements to run Windows 11 Pro. Um, OK, and you're telling me a Core i5 that runs 2 GHz slower than my i7 is going to be faster? I bypassed that BS, and Windows 11 on that computer at home runs faster than my work computers at the office, and those are i5-10500Ts.

It's like comparing a Harley motorcycle to a Ninja. Sure, the Ninja has lots of CCs of power, but that Harley has raw power and it will win.
Posted on Reply
#25
Vya Domus
This was long speculated and of course sooner or later it will happen.
Posted on Reply