Monday, August 14th 2023

AMD "Navi 4C" GPU Detailed: Shader Engines are their own Chiplets

"Navi 4C" is a future high-end GPU from AMD that will likely not see the light of day, as the company is pivoting away from the high-end GPU segment with its next RDNA4 generation. For AMD to have continued investing in this GPU's development, the gaming graphics card segment would have needed to post better sales, especially at the high end, which it didn't. Moore's Law is Dead scored details of what could have been a fascinating technological endeavor for AMD: building a highly disaggregated GPU.

AMD's current "Navi 31" GPU disaggregates the main logic components that benefit from the latest 5 nm foundry node into a central Graphics Compute Die (GCD), surrounded by up to six small chiplets built on the older 6 nm foundry node, each containing a segment of the GPU's Infinity Cache memory and its memory interface, hence the name Memory Cache Die (MCD). With "Navi 4C," AMD had intended to disaggregate the GPU further, identifying even more components on the GCD that could be spun out into chiplets, as well as breaking up the shader engines themselves into smaller self-contained chiplets (smaller dies mean greater yields and lower foundry costs).
AMD would build "Navi 4C" using a vast array of packaging innovations that ensure the numerous kinds of chiplets talk to each other with as little latency as possible, as if they were parts of a single monolithic die.

Assuming AMD had continued to use GDDR6 rather than the newer GDDR7 memory standard, the company would likely have retained the 6 nm MCDs from the current generation to provide the video memory interface and last-level cache, minimizing R&D costs and gaining from the further reduced foundry costs of the 6 nm node.

AMD identified the GPU's media acceleration engine and Radiance Display Engine as ripe for the next round of disaggregation. Although media acceleration engines are logic components, they are fixed-function hardware, as are the display engines, and can likely make do with older foundry nodes. The media acceleration and display engines would be spun off into a separate chiplet called the MID (media and I/O die). At this point we don't know whether AMD would've gone with 6 nm or a newer node for the MID, but given that the company is able to pack the latest media and display I/O features onto the 6 nm "Navi 33" monolithic silicon, it's possible that it would stick with the older node.

Much of the semiconductor engineering muscle is centered on what happens to the most critical number-crunching machinery of the GPU, the Shader Engines. Apparently, AMD figured out that each Shader Engine, consisting of a fixed number of workgroup processors (WGPs), could be spun out into its own chiplet, called an SED (shader engine die). These would be built on an advanced foundry node. Given that NVIDIA is building its next-gen "Blackwell" GPUs on 3 nm, it's quite possible that AMD would have used the same node for the SEDs.

The SEDs are seated on active interposer dies (AIDs). An interposer, in general, is a silicon die whose sole purpose is to facilitate high-density microscopic wiring between chiplets stacked on top of it, with wiring densities that otherwise wouldn't be possible through a fiberglass substrate. The "active" part of the AID refers to the interposer's ability not just to facilitate wiring among the dies stacked on top and to the substrate below, but also to neighboring AIDs. For this purpose, TSMC innovated COW-L (chip-on-wafer-L) bridges.

These are tiny silicon dies designed for inter-AID high-density wiring, and are how a mesh of AIDs talks to each other and to the MID. How they communicate with the MCDs remains to be seen. The current generation of MCDs is connected to the GCD using Infinity Fan-out Links, a high-density wiring method that makes do with the fiberglass substrate as the medium instead of silicon. Should AMD use a different method to connect the MCDs, it would mean the company is using a newer generation of them. Besides COW-L bridges, AMD is also leveraging TSMC's COW-V TSV (through-silicon via) innovations to connect the SEDs to the package substrate (for power and other I/O).
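To keep the alphabet soup straight, the rumored chiplet hierarchy can be sketched as a simple data structure. This is purely illustrative: the chiplet types (SED, AID, MID) and link kinds come from the rumor, but all counts, foundry-node assignments, and names here are hypothetical assumptions; MCDs are omitted because their connection method to "Navi 4C" is unconfirmed.

```python
# Illustrative sketch of the rumored "Navi 4C" chiplet topology.
# Counts and node assignments are hypothetical; only the chiplet
# types and link kinds are taken from the rumor described above.
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str   # e.g. "SED0_1", "AID2", "MID"
    kind: str   # "SED", "AID", or "MID"
    node: str   # foundry node (assumed)

@dataclass
class Link:
    a: str
    b: str
    kind: str   # "3D-stack", "COW-L bridge", or "COW-V TSV"

def build_navi4c(num_aids: int = 3, seds_per_aid: int = 3):
    """Return (chiplets, links) for a hypothetical configuration."""
    chiplets, links = [], []
    for i in range(num_aids):
        chiplets.append(Chiplet(f"AID{i}", "AID", "6 nm"))  # node assumed
        for j in range(seds_per_aid):
            sed = Chiplet(f"SED{i}_{j}", "SED", "3 nm")  # node speculated in the article
            chiplets.append(sed)
            # SEDs are stacked on their AID, and reach the package
            # substrate through COW-V TSVs for power and other I/O.
            links.append(Link(sed.name, f"AID{i}", "3D-stack"))
            links.append(Link(sed.name, "substrate", "COW-V TSV"))
        if i > 0:
            # Neighboring AIDs form a mesh via COW-L bridges.
            links.append(Link(f"AID{i - 1}", f"AID{i}", "COW-L bridge"))
    # The media and I/O die hangs off the AID mesh via a COW-L bridge.
    chiplets.append(Chiplet("MID", "MID", "6 nm"))  # node assumed
    links.append(Link("AID0", "MID", "COW-L bridge"))
    return chiplets, links
```

With the default (assumed) configuration of three AIDs carrying three SEDs each, the sketch yields 13 chiplets; the real part, if it ever existed, could have used entirely different counts.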

Alas, it's highly unlikely that "Navi 4C" will ever get off the drawing board. AMD has already implemented many of these packaging innovations in its latest MI300 compute processor based on the CDNA3 architecture, and it would have been incredible to see them in the gaming graphics segment. However, basic economics prevent AMD from investing in further development of "Navi 4C." The gaming graphics card market is in its biggest slump since the late 2010s, and enthusiast-class GPUs cater to a niche market.

The current market conditions are a far cry from 2021, when the crypto-currency mining gold-rush had incentivized GPU manufacturers to make bigger GPUs. AMD is rumored to have mistimed the launch of its RX 6950 XT GPU toward the tail-end of the crypto boom. By that point, "Navi 31" had reached an advanced level of development and was ready to enter mass-production. The company now probably finds itself unable to justify the cost of development for "Navi 4C" beyond the concepts in this article, unless the market undergoes another dramatic upsurge in demand at the high-end.
Sources: Moore's Law is Dead (YouTube), VideoCardz

33 Comments on AMD "Navi 4C" GPU Detailed: Shader Engines are their own Chiplets

#1
wolf
Performance Enthusiast
A MLID "prediction" that might not see the light of day? colour me shocked, shocked!

An interesting approach if true, fascinating even, but after being let down oh so many times, I just cannot take any information underpinned by this person seriously.
Posted on Reply
#2
HOkay
I don't really get this "the market is slow today so let's not build anything for tomorrow" logic. Given the timescales this would be targeting, it's very possible everyone will want new high end graphics cards by then. If however they don't think they can compete with Nvidia at the top end so are withdrawing for a generation for that reason, fine, that makes sense.
Posted on Reply
#3
Jism
In a way we're going back to one card holding various chips, each responsible for its own task. No more monolithic.
Posted on Reply
#4
N/A
Great, Chiclets as big as rice grain. Just give me the whole wafer.
Posted on Reply
#5
AusWolf
This is way too many abbreviations in one news piece. Has AMD got a dedicated team to make these all up?
HOkay: I don't really get this "the market is slow today so let's not build anything for tomorrow" logic. Given the timescales this would be targeting, it's very possible everyone will want new high end graphics cards by then. If however they don't think they can compete with Nvidia at the top end so are withdrawing for a generation for that reason, fine, that makes sense.
It's not "the market is slow today so let's not build anything for tomorrow", but rather "the market is slow today, so instead of the incremental upticks, let's spend more time developing something that's actually worth buying". I quite welcome this change of pace, to be honest.
Posted on Reply
#6
TechLurker
I'm guessing besides the slump, it's also due to how new and experimental the tech is. Supposedly AMD also cancelled other GPU chip plans because they didn't meet AMD's minimum standards (assuming it wasn't this one). Letting it sit for a generation while testing it out in the enterprise sector is one way of helping to refine the design and also bring costs down.

Or it can be a red herring and AMD will surprise us by pushing the limits even on a slow year, in the hopes of catching Nvidia off-guard even temporarily.

We'll just have to wait and see.
Posted on Reply
#7
Dr. Dro
I wish the tech press would stop spreading MLID's fan-level made-up BS for clicks... especially TPU, which I hold to a higher standard
Posted on Reply
#8
Unregistered
There was a shift in GPU segmentation since Turing: the 2080 Ti should've been the 80 card, a hypothetical 3090 Ti Super the 80 card for Ampere, and the 4090 the 80 for Ada. What used to be high-end is now ultra-enthusiast.

On the other hand, the focus is now AI (or was for nVidia since Turing) gamers are being sold what's left with a slight discount.
#9
srekal34
They did not cancel it because the sales did not meet the expectation. They cancelled it because they are not able to keep up. Probably the estimated performance of 4C was just bad compared to the competition. If the 7900 XTX had been competitive, it would have sold similarly to the 4080. Don't start the perf/$ discussion - in this segment it's not that important - features and overall QoL are much more important, where the 7900 XTX just lacks.
Posted on Reply
#10
Broken Processor
I don't understand AMD these days; they released the 7000 series knowing it was borked and were working on a fixed refresh that got cancelled, instead of cancelling 7000 and releasing the fixed version, which would have given them time to sort out high-end 8000.
Posted on Reply
#11
Dimitriman
I think anything MLID related should have a giant "RUMOR" tag in front of it...

If AMD cancelled big Navi4 then I hope it's because they want to move forward with big Navi5 and actually compete with Nvidia again...
Intel, please do more before these companies kill PC gaming for their AI greed.
Posted on Reply
#12
ixi
Xex360: On the other hand, the focus is now AI (or was for nVidia since Turing) gamers are being sold what's left with a slight discount.
AI is just made up word to make people pay more. And kuku fellas are falling for it sadly.

AI will have real meaning when "computer" will do stuff on its own without human interfering.

Same as nvidia does with dlss. We use our AI for dlss. Yeah, right. There is no AI in this.
Posted on Reply
#13
Dr. Dro
srekal34: They did not cancel it because the sales did not meet the expectation. They cancelled it because they are not able to keep up. Probably the estimated performance of 4C was just bad compared to the competition. If the 7900 XTX had been competitive, it would have sold similarly to the 4080. Don't start the perf/$ discussion - in this segment it's not that important - features and overall QoL are much more important, where the 7900 XTX just lacks.
Precisely my reasoning
Posted on Reply
#14
ZoneDymo
ok for real, could I get some sort of block functionality on Techpowerup so that as soon as the source is MLID, I simply won't get to see it?
Posted on Reply
#15
Assimilator
Leery of this particular rumour - it doesn't make much sense to cancel what appears to very much be the logical progression of Navi 3x.
Posted on Reply
#16
ViperXZ
Good article, yes everything makes somewhat more sense now. The market slump will continue, as GPUs get stronger, the demand for "Ultra-Enthusiast" (as AMD calls it) isn't going up, it's stagnating. Many people just don't see the need for 4K Ultra or 4K, they make do nicely with 1080p or 1440p or some kind of ultra-wide resolutions that are still lighter than 4K. Enthusiasts alone are not enough to produce a GPU that then also must compete with Nvidia who always takes the bulk of those customers.
Posted on Reply
#17
mrnagant
ViperXZ: Good article, yes everything makes somewhat more sense now. The market slump will continue, as GPUs get stronger, the demand for "Ultra-Enthusiast" (as AMD calls it) isn't going up, it's stagnating. Many people just don't see the need for 4K Ultra or 4K, they make do nicely with 1080p or 1440p or some kind of ultra-wide resolutions that are still lighter than 4K. Enthusiasts alone are not enough to produce a GPU that then also must compete with Nvidia who always takes the bulk of those customers.
It isn't just the 4k crowd but also the high refresh rate crowd.

Plus for not releasing something high-end, I would wonder if AMD is just going to do something highend that is kinda out of band with their primary product line again. Fury 2015 60CU, Vega 2017 64CU, VII 2019 60CU.
Posted on Reply
#18
ViperXZ
mrnagant: Plus for not releasing something high-end, I would wonder if AMD is just going to do something highend that is kinda out of band with their primary product line again. Fury 2015 60CU, Vega 2017 64CU, VII 2019 60CU.
I don't think so, those were all unsuccessful low-margin products.
Posted on Reply
#19
Firedrops
AusWolf: This is way too many abbreviations in one news piece. Has AMD got a dedicated team to make these all up?
I lost it when they called something AIDS, and the AIDS started talking to each other(?!)
Posted on Reply
#20
Denver
HOkay: I don't really get this "the market is slow today so let's not build anything for tomorrow" logic. Given the timescales this would be targeting, it's very possible everyone will want new high end graphics cards by then. If however they don't think they can compete with Nvidia at the top end so are withdrawing for a generation for that reason, fine, that makes sense.
The logic is more like this "New processes, designs and the whole development process is getting more and more expensive, while the demand is cooling down, so every move has become extremely risky (and expensive) and may not pay off, there is no room for error. So let's make safe moves instead of aggressive(buggy) innovations. "
Posted on Reply
#21
Minus Infinity
"Navi 4C" is a future high-end GPU from AMD that will likely not see the light of day, as the company is pivoting away from the high-end GPU segment with its next RDNA4 generation.

This does not mean in any way AMD is abandoning the high end. RDNA4 is a much more complex design than RDNA3 and there were going to be over 2x the number of chiplets. Sources inside AMD have said they were struggling to get the design to work and performance would be only minimally improved over RDNA3. Rather than eat tons of resources trying to get it to work, which would mean delays not just on RDNA4 but also RDNA5, they are just sticking to the lower end monolithic N33/34 designs for RDNA4, which are progressing well and will see huge uplifts in RTing. The high end will just shift to RDNA5. Given Blackwell is not out until 2025, AMD is not going to be that disadvantaged by being without high-end RDNA4. RDNA5 might be out late 2025, say 6 months or so after Blackwell, and in the long run that will mean they have far stronger competitors to high end Blackwell. IMO as long as they can get N43's 8600 level card to be a much stronger offering and more like a 7700XT in raster but with much stronger RT and hardware accelerated FSR3, they'll sell a ton. Being on 3nm they could pack in a lot more CUs, give it a 192bit bus, GDDR7, 12GB for say $299, and that would slay the upcoming 7700XT.
Posted on Reply
#22
Bwaze
This could all be just a reflection of change of marketing focus.

I'm pretty sure all the next GPU generations from all makers, Nvidia, AMD, Intel will be marketed as "designed for AI acceleration".

Of course, there will be products that indeed are focussed on that, but I think GPU makers are betting on an AI craze similar to cryptomining craze, where buyers will try to outbid one another in attempt to buy as much "gaming" cards as possible, for smaller, home AI generators...
Posted on Reply
#23
R0H1T
Jism: In a way we're going back to one card holding various chips for each and responsible of their own task. No more monolithic.
Bwaze: I'm pretty sure all the next GPU generations from all makers, Nvidia, AMD, Intel will be marketed as "designed for AI acceleration".
I doubt that, for most consumers AI is just an extra addon at best.
Posted on Reply
#24
Bwaze
R0H1T: I doubt that, for most consumers AI is just an extra addon at best.
Sure, for gamers. Just as most gamers didn't care you could mine cryptocoins with gaming cards. But the people that spent 2x MSRP for gaming cards weren't gamers. And I bet Nvidia is hoping for an AI craze to trickle down to individual, home users. You have free Midjourney-like art generator tools like Stable Diffusion that work on your home PC, and you need powerful GPU for it. And you can build powerful AI servers with gaming GPUs.
Posted on Reply
#25
mama
Shame if true. Innovation should always be encouraged. Taking risks is where success lives.
Posted on Reply