Tuesday, November 15th 2022
AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080
AMD in its technical presentation confirmed the reference clock speeds of the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. The company also made its first reference to a GeForce RTX 40-series "Ada" product, the RTX 4080 (16 GB), which launches later today. The RX 7900 XTX maxes out the "Navi 31" silicon, featuring all 96 RDNA3 compute units, or 6,144 stream processors, while the RX 7900 XT is configured with 84 compute units, or 5,376 stream processors. The two cards also differ in memory configuration: the RX 7900 XTX gets 24 GB of 20 Gbps GDDR6 across a 384-bit memory interface (960 GB/s), while the RX 7900 XT gets 20 GB of 20 Gbps GDDR6 across a 320-bit interface (800 GB/s).
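For reference, the quoted bandwidth figures follow directly from the per-pin data rate and the bus width. A minimal sketch of that arithmetic (the function name is purely illustrative):

```python
# Peak GDDR6 bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def gddr6_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(20, 384))  # RX 7900 XTX: 960.0 GB/s
print(gddr6_bandwidth_gb_s(20, 320))  # RX 7900 XT:  800.0 GB/s
```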
The RX 7900 XTX comes with a 2300 MHz Game Clock and a 2500 MHz boost clock, whereas the RX 7900 XT comes with a 2000 MHz Game Clock and a 2400 MHz boost clock. Of the two figures, the Game Clock is the more representative of real-world gaming frequencies. AMD achieves 20 GB of memory on the RX 7900 XT by using ten 16 Gbit GDDR6 memory chips across a 320-bit wide memory bus, created by disabling one of the six 64-bit MCDs. This also subtracts 16 MB from the GPU's 96 MB Infinity Cache, leaving the RX 7900 XT with 80 MB of it. The slide describing the specs of the two cards compares them to the GeForce RTX 4080, the card they will most directly compete against, especially given their pricing: the RX 7900 XTX is 16% cheaper than the RTX 4080, and the RX 7900 XT is 25% cheaper.
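As a rough sanity check of the memory, cache, and pricing figures above, here is a short sketch. It assumes the announced launch prices of $999 (RX 7900 XTX), $899 (RX 7900 XT), and $1,199 (RTX 4080), which are not stated in the article itself:

```python
# RX 7900 XT memory and Infinity Cache arithmetic
chips = 10                        # ten 16 Gbit GDDR6 packages
capacity_gb = chips * 16 / 8      # 20.0 GB total
bus_width_bits = (6 - 1) * 64     # one of six 64-bit MCDs disabled -> 320-bit
infinity_cache_mb = 96 - 16       # each disabled MCD removes 16 MB -> 80 MB left

# Relative pricing versus the RTX 4080 (assumed launch MSRPs, see lead-in)
xtx_discount = 1 - 999 / 1199     # ~0.167 -> "16% cheaper"
xt_discount = 1 - 899 / 1199      # ~0.250 -> "25% cheaper"

print(capacity_gb, bus_width_bits, infinity_cache_mb)
print(round(xtx_discount, 3), round(xt_discount, 3))
```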
166 Comments on AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080
AMD is trolling us, and now I don't know if I should wait for a real 7900 XTX in April with better overclocking stability and performance, or wait for the 7950 XT. I'm brainfucked; I don't want to wait 6 months.
And the leak said the first version of the 7900 XT/XTX has an issue with maximum frequency due to a production mistake... hmm... interesting... I'm disgusted.
I don't know who the target market is for RT, but I find it odd to imagine buying a $2k GPU to play games at 1080p with RT. If you care about eye candy, why would you use a small or low-res display?
Then factor in that you'll be using DLSS to make it playable, and now you don't need a high refresh rate either.
And then consider that most implementations of RT aren't much better looking than bloom lighting (things have been shiny in games for a long time; did you ever think the light source felt unrealistic?), and the whole RT argument starts to feel really pathetic.
Nvidia is resting on its laurels by pricing the 4080 so high.
The performance difference between the RX 6950 XT and the RTX 4080 is a mere 26%.
AMD Radeon RX 6950 XT Specs | TechPowerUp GPU Database
The Radeon 7900 series will have very wide headroom to make the RTX 4080 DOA, as it deserves to be. That's false. You can game at 4K with a Radeon RX 6700 XT 12 GB for 400 bucks just fine.
ASUS Radeon RX 6750 XT STRIX OC Review - Battlefield V | TechPowerUp
So far, the best thing to come out of ray tracing is that its absurd performance cost forced Nvidia to come up with DLSS, which in turn prompted AMD to develop FSR (and Intel to develop XeSS). These features are genuinely useful to people on low-to-mid-range hardware, which is almost everyone in the PC/console gaming space. AMD's counterpart to the poorly named "DLSS 3" may continue that trend of RT-performance-compensators-that-prove-more-generally-attractive-than-RT-itself, albeit to a lesser extent.
(I single out AMD's implementation because Nvidia's version is limited to the 40-series, which is ludicrously expensive now and may not produce an affordable model at all, if Ampere stocks remain as high as I've been led to believe. RDNA3 may not have any affordable offerings either, but AMD's features are generally backwards compatible, if not totally open-source).
For people at the extreme high end, I get why RT is a major talking point. Look, if I were spending four figures on GPUs every two years, I'd want to know that I had the best of every feature too. It's a perfectly reasonable desire. And in the few current examples where RT is fully/properly implemented, RT does look better than raster, but the difference is subtle. You really have to look for it; you most likely wouldn't notice in the course of normal game play. Even if you do have among the best available RT hardware, it still isn't self-evident that enabling RT is worth the enormous trade off in performance.
RT's problem, in short, is that raster is damn pretty too. Developers have had a very long time to hone their methods of "faking" natural lighting in normal rasterized scenes, and they will continue to use and refine these techniques for many years to come because again, most of their audience isn't using RT-capable hardware.
I'm on a PC hardware enthusiast forum, so of course I care about RT, in the same way that I'd care about any shiny new innovation. I think that RT will one day become "the standard," but that day is still far in the future, effectively an eternity in the context of current hardware. If you aren't at the bleeding edge of the GPU-upgrade cycle, you shouldn't consider RT perf to be a deal-breaking factor in your purchasing decisions for quite some time to come, IMO. Decent RT perf simply costs too much, in a market where GPUs are already overpriced, generally.
Till next time :toast:
And here I was thinking that obvious trolling should not be supported; perhaps try harder next time to actually make a point. That post was beyond childish. Reported. Incorrect, or at least not what I was asserting, nor what I've ever seen anyone else assert.
The no-RT crowd often says "nobody cares" or "who cares"; the pro-RT crowd isn't saying everybody cares, they're saying they care, as a direct response to "nobody cares", which is factually incorrect. Naturally, there is no single factor that leads to this practice; it's a combination of yields, volume considerations, deliberate segmentation for professional/golden-binned product lines, and likely more factors unknown to consumers.
My point is I don't think it's a shitty move; it's just business. The same business in which nobody is a saint; they have shareholders to please and so on.
Again, it's that I'd love to be able to strongly consider high-end AMD GPUs, just like when they make compelling CPUs. I have no aversion or political reasoning behind whose products I'll buy (they all suck one way or another, so I just stick to the products themselves), but if those products can't or won't cater to what I want from a video card, it seriously narrows the choices. Hell, even Intel's first stab at it took RT more seriously than a compatibility checkbox.
Some people are so obsessed with product naming that it disrupts their grasp of reality; they start imagining a "real" future product as if the one at hand doesn't exist.
By linking X9xx/Xx9x names to "the best", you are on a collision course with reality, and reality is stronger.
The reality: it is just a name, and no matter how much you try to force it to, it doesn't promise anything about the product's performance or about future products.
Specs backed up by benchmarks promise you performance.
Nothing will promise you anything about a future product, whether the next day or the next year, and hopelessly trying to seize control over it will only result in frustration and/or anger.
Nothing wrong with that (to a point), but at least know what you're getting into.
Put simply: any business has two core functions, to provide useful goods and/or services, and to provide gainful employment to people. Everything else is secondary to that, and any business that prioritizes anything other than that has lost the plot. Obviously I'm not so naive as to think that this is in any way a dominant mode of thinking today, but there are still degrees of difference, and what we're seeing here is where I'm arguing that Nvidia is making shitty moves. They're exploitative, they're profiteering, they're laser-focused on profits and margins above all else, and are making decisions that actively harm consumers to maximise profits.
My problem with your line of reasoning isn't so much that you don't hold to the same ideals as me, but that you're pretending that these actions are value-neutral. They aren't. There are other possible choices, which would still have been profitable for Nvidia, that they don't make because they are less profitable, even if they would serve their customers better. This is, in other words, Nvidia actively choosing to maximize profits at the cost of the consumer.
(Obviously I'm also not saying other corporations are much better. AMD is pretty close to the same patterns these days, though not quite and not with the same history. Intel has historically been far worse. Etc.)