Friday, June 23rd 2023

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?
The AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000 series, which currently has a massive performance gap between the enthusiast-class RX 7900 series and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the exact same 350 mm² graphics compute die (GCD) as "Navi 31," but on a smaller package resembling that of "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which together make up a 256-bit wide GDDR6 memory interface and 64 MB of 2nd Gen Infinity Cache memory.
The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than that of "Navi 32," while staying below that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC will enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
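The CU-to-shader arithmetic behind these configurations is straightforward, since each RDNA3 compute unit exposes 64 stream processors. A quick sketch of the counts mentioned above (the "Navi 32" full configuration is rumored, not confirmed):

```python
# Each RDNA3 compute unit (CU) exposes 64 stream processors (SPs).
SPS_PER_CU = 64

def stream_processors(cu_count: int) -> int:
    """Return the stream-processor count for a given CU configuration."""
    return cu_count * SPS_PER_CU

# Configurations referenced in the article:
configs = {
    "Navi 32 full (rumored)": 60,   # 3,840 SPs, as stated above
    "RX 7900 XT": 84,               # 5,376 SPs
    "Navi 31 GCD (physical)": 96,   # 6,144 SPs
}

for name, cus in configs.items():
    print(f"{name}: {cus} CUs -> {stream_processors(cus)} SPs")
```

Any RX 7800 XT configuration between the rumored 60-CU Navi 32 ceiling and the 84-CU RX 7900 XT would thus land between 3,840 and 5,376 stream processors.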
Source:
Moore's Law is Dead (YouTube)
We're simply looking at a power state issue here, one that has plagued every generation of cards from either camp historically, and it's always multi-monitor or light desktop usage that's the culprit. "RDNA3 went wrong" is jumping to conclusions. The 3090 Ti didn't go wrong either; it just has a different set of power states and voltages to eke out that 5% gain over its sibling.
Get real. Seriously. Or simply adjust your expectations. This shit has been happening since Turing and it's the new norm. No, you don't need to upgrade every gen; it was never a good idea, and it certainly isn't now.
It's funny you mention the Fury X in this context, btw :D What's happening in that head of yours, I wonder... are you gearing up to get a 4090 after all, or just expressing general disappointment?
Regarding the Fury X... just another chapter of my prolonged love-hate relationship with AMD. It'll take quite some time for me to stop resenting them for EOL'ing it while the high-severity bugs I'd reported went unfixed.
As for insults... don't even worry, none seen or taken; we're discussing a thing, and people shouldn't have to walk a tightrope doing so. The Fury X was arguably the worst GPU ever in terms of support... yeah. Total fail, and we can blame Raja "HBM" Koduri.
If the 7900 XTX hadn't failed, it would be matching the 4090, just as the 6900 XT once matched the 3090.
Having owned Navi31 without issue since launch, I often wonder where y'all get your issues from.
A bug marginally dropping performance was rumoured but never proven to have happened. I still ended up with a card that runs flawlessly smooth and fast in all games.
All GPU designs end up with faults and errata lists, same as CPUs; it's just that only one company gets singled out for it by some here. Realistic? Not.
N5 should have brought a 20% performance (i.e. clocks) increase over N7, instead the 7900XTX usually clocks just as high as the 6900XT. And the N6 RX7600 clocks just as high as the N5 7900XTX.
My guess is AMD really had planned for the N5 RDNA3 chips to clock 20% higher, putting the N31 7900XTX and the N32 7800XT in the >2.7GHz range which would get the latter to be closer to the 6800XT in compute throughput (72CUs on N21 @ 2250MHz ~= 60 CUs on N32 @ 2700MHz).
Instead the new process brought them nearly zero clock advantages, and AMD now has to use the bigger N31 chip to bring some advantage over the 6800XT.
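The throughput parity claimed above checks out as a rough CU × clock product. A minimal sketch, using CU count times clock as a compute proxy and deliberately ignoring per-CU IPC differences between RDNA2 and RDNA3 (a simplification):

```python
# Rough compute-throughput proxy: CU count x clock (MHz).
# Ignores per-CU IPC differences between RDNA2 and RDNA3 (a simplification).
def throughput_proxy(cus: int, clock_mhz: int) -> int:
    return cus * clock_mhz

n21_6800xt = throughput_proxy(72, 2250)  # Navi 21 (6800 XT) at ~2250 MHz
n32_hypo = throughput_proxy(60, 2700)    # hypothetical Navi 32 at ~2700 MHz

# Both products come out to 162,000 CU-MHz, so a 60-CU N32 at 2.7 GHz
# would roughly match a 72-CU N21 at 2.25 GHz on this crude metric.
print(n21_6800xt, n32_hypo)
```

This is exactly why the missed clock target hurts: at ~2.5 GHz instead of ~2.7 GHz, a 60-CU Navi 32 falls short of 6800 XT-class throughput by this metric.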
The big question now is whether RDNA4 will solve these issues, since it's now clear that AMD hasn't been able to solve them on N32.
And the most ironic thing is how we do have RDNA3 GPUs clocking at >2700MHz at a low(ish) power usage inside the Phoenix SoC, but those are hopelessly bandwidth starved because AMD decided against putting L3 cache on it.
"Normal" RDNA video playback of ~40W isn't even remotely a problem. Yes, it's high compared to GeForce, but even the dinky 7900XT MBA cooler can manage to stay fanless at that level, most of the time.
When you see 80W video playback, that's not video playback power, that's AMD's perpetual lack of intermediate VRAM clocks overshadowing everything else and turning everything into 80-90W because VRAM is stuck at 2500MHz.
Time and time again, an Adrenalin release is accompanied by reports that the multi-monitor problem is "fixed". A quick trip to Reddit confirms plenty of run-of-the-mill dual-monitor high-refresh setups remain unchanged at 80-90 W. All the Radeon team seems to be doing is vetting specific display setups over time: if there are no major artifacting issues, allow the VRAM clock to drop. Which would actually eventually solve the problem, if AMD didn't have a habit of repeatedly regressing on multi-monitor and undoing all the progress they'd made every couple of years.
Back to the point that GeForce cards aren't immune to variance: I get about half the idle power figures that w1zz had in reviews. But that's a difference of 10 W vs. 20 W, not 20 W vs. 80 W. Wondrous what VRAM can do when it has access to more than just two modes. A shocking number of these issues come down to AMD's driver handling of multi-monitor setups. If you only have one screen, of course it runs like a top; that's the ideal scenario.
No, I don't count "not hitting 3 GHz" as a "bug"; that just diminishes the stuff that actually matters. The performance is there; it's everything else that isn't. You can't just slap an engine on frame rails and sell it as a car, no matter how good that motor is.
There are serious gaps in RDNA3. The fact that I keep calling RDNA3 a failure, nothing more than RDNA2, and no one has yet come out and thrown charts at me showing how wrong I am, does say something. I hope they manage to take advantage of RDNA3's new architectural features with RDNA 3.5 or 4. They were probably too occupied with getting this first generation of chiplets to work.
As for Nvidia: it has been using VRAM as a kill switch for at least 15-20 years. All their architectures are probably built that way.
Many, many years ago I ran some simple benchmarks, and I've kept a few. For example, a 9800 GT with 512 MB of VRAM. Look what happens in the fourth test, when VRAM usage goes over 512 MB:
It becomes a slideshow. I mean a TRUE slideshow. While performance drops from test 1 to test 2 to test 3 the way you'd expect, it tanks in the last test. I don't seem to have saved tests with an ATi/AMD card (they might be somewhere), but I remember that ATi/AMD cards didn't drop dead when going over VRAM capacity.
15-20 years later, we see the same thing, the only difference being that this time AMD suffers from it too (though at least it usually offers more VRAM at the same price).
I know of one: high power draw. The end.
So I would be interested in knowing about this shocking number of other bugs, with proof.
I agree with you too, and that is also what I mean when I sound so harsh towards these RX 7000-series cards. Any chart, from any reviewer, shows the same. In the best cases, the 7600 does barely 10% over the 6600 XT, and is actually anywhere from dead even to 4% faster than the 6650 XT. Something is wrong big time with RDNA 3: either it was an architectural gamble that AMD hoped would pay off but didn't, the drivers are hilariously and absolutely rotten, or there is severe hardware errata. It's got to be one of those three, and either way, it doesn't look good.
i guess no 7800xtx or it would just be hilarious :roll:
And more importantly, this is made by the same people, but it isn't your card, so I personally think your issues might not apply to this rumour.