Friday, June 23rd 2023

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?
AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000-series, which has a massive performance gap between the enthusiast-class RX 7900 series and the mainstream RX 7600. A report by "Moore's Law is Dead" makes the sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU would be AMD's answer to the "AD103." Apparently, the GPU features the exact same 350 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." The large GCD is surrounded by four MCDs (memory cache dies), which add up to a 256-bit wide GDDR6 memory interface and 64 MB of 2nd Gen Infinity Cache memory.
The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than that of the "Navi 32," while keeping it lower than that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
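For a rough sense of the arithmetic behind the rumor, the sketch below (Python; the helper names and the 20 Gbps GDDR6 speed are our own assumptions, not reported specifications) works out what the rumored CU window and a four-MCD package would translate to in stream processors, bus width, Infinity Cache size, and memory bandwidth.

# Back-of-the-envelope check of the rumored RX 7800 XT configuration.
# The CU range, MCD count, and 20 Gbps GDDR6 speed are assumptions taken
# from the rumor above, not confirmed specifications.

SP_PER_CU = 64        # stream processors per RDNA3 compute unit
MCD_BUS_BITS = 64     # each MCD carries a 64-bit GDDR6 controller
MCD_CACHE_MB = 16     # and 16 MB of 2nd Gen Infinity Cache

def stream_processors(cus: int) -> int:
    """Total stream processors for a given compute unit count."""
    return cus * SP_PER_CU

def memory_config(mcds: int, gddr6_gbps: float = 20.0):
    """Bus width (bits), Infinity Cache (MB), and bandwidth (GB/s) for a given MCD count."""
    bus_bits = mcds * MCD_BUS_BITS
    cache_mb = mcds * MCD_CACHE_MB
    bandwidth_gbs = bus_bits / 8 * gddr6_gbps
    return bus_bits, cache_mb, bandwidth_gbs

# Rumored window: above the Navi 32 ceiling (60 CU) and below the RX 7900 XT (84 CU).
for cus in (60, 70, 84):
    print(f"{cus} CU -> {stream_processors(cus)} stream processors")

print("4 MCDs ->", memory_config(4))  # (256, 64, 640.0) assuming 20 Gbps GDDR6

Under these assumptions, 60 CU lands at the 3,840 stream processors mentioned above, and four MCDs yield the 256-bit bus and 64 MB of Infinity Cache the report describes.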
Source:
Moore's Law is Dead (YouTube)
169 Comments on Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?
Take another guess maybe, barely 60FPS in 2023 games :roll:
Without relying on Upscaling, 7900XT/X won't have the grunt going forward @ 4K60FPS (much less 4K144FPS), and that's where DLSS vs FSR will come into the equation
Edit: forgot Jedi
Then look at the 3090 and 3090 Ti. They're near the bottom, yet their successors are in the middle of the pack - almost halving power consumption. It's like one company is concerned with making sure their product is consistently improving in all areas generation-to-generation, while the other is sitting in the corner with their thumb up their a**.
The fact that AMD has managed to bring down low-load power consumption since the 7000-series launch IS NOT something to praise them for, because if they hadn't completely BROKEN power consumption with the launch of that series (AND THEIR TWO PREVIOUS GPU SERIES), they wouldn't have had to FIX it.
Now all of the chumps are going to whine "bUt It DoESn'T mattER Y u MaKIng a fUSS?" IT DOES MATTER because it shows that one of these companies cares about delivering a product where every aspect has been considered and worked on, and the other just throws their product over the fence when they think they're done with it and "oh well it's the users' problem now". If I'm laying down hundreds or thousands of dollars on something, I expect it to be POLISHED - and right now only one of these companies does that.
I was being kind when I said the 7600 is 20% more efficient. It's probably closer to 30%.
The 7800 XT is shaping up to be a really meh product: slightly faster than a 4070 in raster but worse at every other metric, likely for around the same price. I guess we already have the real 7800 XT, though: the 7900 XT, which is just priced stupidly but starting to make more sense in the low $700s.
7600 300€ vs 6650 249€
RTX 4060 6GB 350€ vs RTX 3060 12GB 300€
Arc 750 8GB 229€
Arc 380 6GB 134€ - it's still cheaper with its 6 GB than garbage, old cards like: RX 6400 139€, GTX 1630 148€, GTX 1650 172€
AMD Radeon RX 7600 Review - For 1080p Gamers - Power Consumption | TechPowerUp
Before answering, remember this:
The 7600 is on a new arch and a new node, both of which should bring better efficiency. But what do we see? In some cases the 6600 XT has lower power consumption, in some cases the 7600 does.
Unfortunately I have to agree with @Assimilator in his above comment. And I am mentioning Assimilator here because he knows my opinion about him. Simply put, he is on my ignore list!
But his comment here is correct. When AMD was on a tight budget - think the Polaris period - that was almost their way of doing business: never power efficiency and performance improvements in the same generation, only one of the two. Now that they DO have money, they should have been able to do it - work the final product in all aspects and come up with a new product that is better in everything compared to the previous one. But unfortunately they focused so much on CPUs that they missed the boat with GPUs. And NOW, with the AI explosion, they are realizing that GPUs are becoming more important. But are they going to throw money at all GPU architectures, or only at Instinct? If they focus only on Instinct, they will lose the next generation of consoles - at least Microsoft, who I bet has realized by now that to beat PlayStation they need to do something drastic. I hope AMD fears the possibility of MS or Sony or both going with Intel and Nvidia next gen. Those would be more expensive consoles to make, but gaming cards in PCs are also much more expensive, so higher console prices aren't something we should rule out as a possibility.

You have the card; we are looking at numbers from the TPU review. Either your card works as it should, which would be great and would mean something went wrong with the review numbers, or something else is happening.
People were expecting the 7900 series to be killers, in raster at least, and at lower power consumption. Based on TPU numbers, which probably everyone here disputes, the 7900 XT is on average 14% faster than the 6950 XT with more shading units, more ROPs, more RT cores, more VRAM, and more memory bandwidth. It's RDNA3 and still looks like a bigger RDNA2 chip. Do you even understand where my concerns are?
Maybe I should stop posting sincere concerns and start posting pictures like the one below to make you happy.
I don't think AMD is trying to play catch up; I think we have lots of examples of them really doing their own thing to carve out a unique competitive position for the company.
- When Nvidia had RTX in play, AMD's response was 'that's cool, but it won't go places until it gets mainstream market reach - midrange GPUs need to carry it comfortably'.
- RDNA2 had RT support but barely sacrificed hardware / die size for it.
- RDNA3 is much the same, even with what it does have.
Meanwhile, they rolled out a path for RDNA that was focused on chiplets. We're too early in the race here to say whether they made the right bet. But here are the long-term trends we are looking at:
- Chiplets let AMD catch up to Intel where monolithic designs couldn't
- Chiplets now enable them to remain close to Nvidia's top end with, frankly, not a whole lot of trouble
- Raster performance clearly remains a high-performance affair; looking at newer engines, you'll keep a strong hardware requirement for it, with or without RT
- RT performance seems easy to lift through gen-to-gen general performance gains. It just takes time; at the same time, there is only a handful of titles that you can only really play well on Nvidia. Even today, 3.5 gens past RT's release, AMD's laid-back approach to RT implementation isn't really hurting them.
- They still own the consoles and therefore have the final say in widespread market adoption of RT.
I'm really not seeing a badly positioned company or GPU line-up here overall. That idea just applies to a minority that chases the latest and greatest in graphics, the early-adopter crowd, because that's really still where RT is at. AMD isn't playing catch-up. They're doing a minimal-effort thing on the stuff they don't care much about, which seems like the proper rationale to me. You say barely anything happened between RDNA2 and 3, but...
the energy efficiency is virtually on par, both last and current gen. And the only GPU they can't match is the 4090 in raw perf.
AMD obviously does have the engineers to build great CPUs and GPUs. Now it needs to start investing its money where it should be invested. They threw $40 billion at Xilinx. It's time to start throwing more money at GPUs and software.
@Vayra86 I agree with most of your post. I was screaming about RT performance and was called an Nvidia shill back then. With the 7900 XTX offering 3090/Ti RT performance, how could someone be negative about that? How can 3090/Ti performance be bad today when it was a dream yesterday?
The answer is marketing. I believe people buy RTX 3050 cards over the RX 6600 because of all that RT fuss. RT is useless on an RTX 3050, but people still buy it, I believe for that reason. "RTX from Nvidia". Marketing. The same marketing won market share for AMD with the first Ryzen series: much lower IPC compared to Intel CPUs, but "8 CORES". The same marketing has won market share for Intel the last few years: "16-core CPU", and half of them are E-cores. But people love to read higher numbers.
AMD should have doubled RT Cores in RDNA 3 even if that meant more die area, more expensive chips, lower profit margins.
PS
AMD Radeon RX 7600 Review - For 1080p Gamers - Power Consumption | TechPowerUp
Efficiency might be high in those charts, but the charts on the previous page say otherwise, or at least show that the chip's efficiency isn't the same in every task. It seems that in some cases, like video playback, there is a problem.
It will materialize for RDNA3. But that could still make it a miscalculation. 16GB would have been fine on the 7900XT's raw perf as it is now too... I think @londiste made a fine point there. Since the overwhelming majority of performance is still in raster and certainly not always with DLSS3 support, yeah, who cares huh.
Consider also the fact that playing most of the new stuff at launch right now, as it has been for the last year(s), is an absolute horror show. Nothing releases without breaking issues. To each their own. I've always gamed extremely well following loosely behind the early-adopter wave of each gen. In GPUs it's the same thing as with games: we get all sorts of pre-alpha shit pushed on us. The entire RTX brand has been a pre-alpha release since Turing, and we could say they're in open beta now with Ada - until you turn on path tracing. You're paying every cent twice over for Nvidia's live beta here, make no mistake. We didn't see them discount Ampere heavily like AMD did RDNA2, for example ;) In the meantime, I honestly can't always tell if RT is really adding much of anything beneficial to the scene. In Darktide, it's an absolute mess. In Cyberpunk, the whole city becomes a glass house (sunsets are nice, though), but when you turn RT off, not a single drop of atmosphere is really lost while the FPS doubles.
Anyone not blindly sheeping after the mainstream headlines, cares.
Also, do consider the fact that Ampere 'can't get DLSS3' for whatever reason, the main one being an obvious Nvidia-milking-you-hard situation. The power-per-buck advantage isn't just AMD's pricing; it's Nvidia's approach to your upgrade path much more so.
It reminds me of the initial Pascal launch.
Navi 31 is basically Vega without the HBM2, but it has worse power consumption, had to shift down a tier to look competitive with its competition, and likely has a higher cost to make vs. the tier it is now competing against, because it is slower than anticipated. Nvidia's flagship is also far and away faster. If the RTX 4090 were less cut down, we would be looking at a generational difference between the two flagships.
If Nvidia wanted to, they could likely price the RTX 4080 at $699 and the RTX 4070 Ti (which should be an RTX 4070) at $500, destroy AMD's lineup, and still make a decent margin. However, there seems to be some price fixing going on, or NV does not want to depreciate their last gen so much, since there is still so much of it on the market.
What's making this fight appear closer than it is, is Nvidia spending silicon on tensor cores, which has enabled AMD to get a bit closer in performance. However, that silicon is well spent, since it enables Nvidia to tackle the AI market simultaneously with the same lineup. If AMD is losing ground on raster while going all out on raster, the division has a poor future. The market share and revenue from the gaming market simply won't justify the R&D expense, on top of the large dies/low profit compared to the other sectors AMD is competing in.
MI300 has a GCD die with up to 38 CUs.
Navi 41 could have three or even four of those dies.
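Taken at face value, the chiplet math is simple; here's a rough sketch (Python, using the per-GCD CU figure from the comment above, which is speculation and not a confirmed spec):

# Rough Navi 41 CU totals if it reused MI300-style GCD chiplets.
# The 38 CU per GCD figure comes from the rumor, not a confirmed spec.
MI300_GCD_CUS = 38

for gcds in (3, 4):
    print(f"{gcds} GCDs -> up to {gcds * MI300_GCD_CUS} CUs")
# 3 GCDs -> up to 114 CUs; 4 GCDs -> up to 152 CUs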