
AMD Navi 31 RDNA3 GPU Pictured

btarunr

Editor & Senior Moderator
Here's the first picture of the "Navi 31" GPU at the heart of AMD's fastest next-generation graphics cards. Based on the RDNA3 graphics architecture, it marks an ambitious attempt by AMD to build the first multi-chip module (MCM) client GPU featuring more than one logic die. MCM GPUs aren't new in the enterprise space (see Intel's "Ponte Vecchio"), but this would be the first such GPU meant for hardcore gaming graphics products. AMD has made MCM GPUs in the past, but those were packages with just one logic die, surrounded by memory stacks. "Navi 31" is an MCM of as many as eight logic dies and no memory stacks (no, those aren't HBM stacks in the picture below).

It's rumored that "Navi 31" features one or two SIMD chiplets dubbed GCDs, which contain the GPU's main number-crunching machinery, the RDNA3 compute units. These chiplets are likely built on the most advanced silicon fabrication node available, likely TSMC 5 nm EUV, but we'll see. The GDDR6 memory controllers handling the chip's 384-bit wide memory interface will be located on separate chiplets built on a slightly older node, such as TSMC 6 nm. This is not multi-GPU-on-a-stick: both SIMD chiplets have uniform access to the entire 384-bit wide memory bus (which is 1x 384-bit, not 2x 192-bit), as well as the other ancillaries. The "Navi 31" MCM is expected to be surrounded by JEDEC-standard 20 Gbps GDDR6 memory chips.
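For reference, the peak memory bandwidth implied by those figures can be worked out directly. This is a back-of-the-envelope sketch; the 384-bit width and 20 Gbps per-pin speed are the rumored numbers from the post above, not confirmed specs:

```python
# Peak bandwidth of a GDDR6 interface: bus width (bits) times
# per-pin data rate (Gbps), divided by 8 to convert bits to bytes.
bus_width_bits = 384      # rumored total memory interface width
speed_gbps_per_pin = 20   # rumored GDDR6 data rate per pin

bandwidth_gb_s = bus_width_bits * speed_gbps_per_pin / 8
print(f"{bandwidth_gb_s} GB/s")  # -> 960.0 GB/s
```

That 960 GB/s figure is why uniform access to the full bus matters: splitting it into 2x 192-bit halves tied to each chiplet would leave each GCD with only half that bandwidth for its local traffic.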



 
*licks*
 
Reviews tomorrow?
 
Let AMD have a "ZEN moment" with Radeon (except for the early bios things).
 
Let AMD have a "ZEN moment" with Radeon (except for the early bios things).

Isn't RDNA1 that, RDNA2 a Zen 2 moment, and RDNA3 maybe Zen 3?
 
Let AMD have a "ZEN moment" with Radeon (except for the early bios things).
Why is everyone expecting issues with BIOS and drivers? It seems to me that AMD moved only "dumb" components like memory off the main chip, so there shouldn't be that many issues. It's not like these GPUs come with two graphics chiplets.
 
I can't wait for them to explain how these work. That is a huge bus width. Even though you say it, I can't help but see those chips as HBM chips lol.
 
Any, picture based, size estimates?
 
Any, picture based, size estimates?

The folks over at Videocardz have some thoughts about that:


For what it's worth...
 
You answered your own question. AMD couldn't get the BIOS right for their OWN MOTHERBOARDS, multiple times. It's not far-fetched to expect instability at launch.
But motherboard manufacturers add lots of stuff to the motherboards, it's not like it's all AMD designed. For graphics cards it's almost all reference design, except maybe power delivery, but that's not something software touches.
 
But motherboard manufacturers add lots of stuff to the motherboards, it's not like it's all AMD designed. For graphics cards it's almost all reference design, except maybe power delivery, but that's not something software touches.
This is a common problem amongst value-added products. Android, for instance, is a great OS except for all the third-party crap spewed all over it.
 
Repasting this will be a nightmare.
 
The Radeon W6800X Duo in the 2019 Mac Pro connects two GPUs through IF at 84GB/s, and those GPUs are separated on the PCB and each GPU has its own memory. I’d have to imagine the bandwidth connection of GPUs on the same substrate would be much faster, and memory management won’t be divided anymore and cause potential work imbalances. Won’t be long before we get to see how effective this solution is. If nothing else, I’m excited to see something relatively new on the engineering front.
 
Good thing it's not something that needs to be done in the meaningful lifetime of the product.

It depends on the usage case.

Some people remove the stock cooler and use full length waterblocks for a custom cooling loop. Often people who do so will periodically clean the waterblock which requires removal and disassembly.

For typical users who stick with the stock cooler, there is rarely a need to reapply thermal compound unless cooling performance drops precipitously after time.

I certainly wouldn't buy a graphics card based on whether or not the GPU package looked like it was conducive to thermal paste application. But apparently it's a big deal for some people online. That's fine: it's their money.
 
What was the point of this? Just a "die shot"?
 
But motherboard manufacturers add lots of stuff to the motherboards, it's not like it's all AMD designed. For graphics cards it's almost all reference design, except maybe power delivery, but that's not something software touches.
Much like video card drivers, AGESA code does not come from board makers.

But of course, it's never AMD's fault.
 