Thursday, November 3rd 2022
AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion
AMD today announced the Radeon RX 7900 XTX and Radeon RX 7900 XT gaming graphics cards, debuting its next-generation RDNA3 graphics architecture. The two new cards come in at $999 and $899, targeting the $1,000 high-end premium price point.

Both cards will be available on December 13th: not only the AMD reference design, which is sold through AMD.com, but also custom-design variants from the many board partners on the same day. AIBs are expected to announce their products in the coming weeks.

The $100 gap between the $999 RX 7900 XTX and the $899 RX 7900 XT is surprisingly small for a performance difference that will certainly be larger, probably in the 20% range. Both the Radeon RX 7900 XTX and RX 7900 XT use the PCI-Express 4.0 interface; Gen 5 is not supported with this generation. The RX 7900 XTX has a typical board power of 355 W, about 95 W less than that of the GeForce RTX 4090. The reference-design RX 7900 XTX uses conventional 8-pin PCIe power connectors, as will custom-design cards when they come out. AMD's board partners will create units with three 8-pin power connectors for higher out-of-the-box performance and better OC potential. The decision not to use the 16-pin power connector that NVIDIA uses was made "well over a year ago," mostly because of cost, complexity, and the fact that these Radeons don't require that much power anyway.
The reference RX 7900-series board design has the same card height as the RX 6950 XT, but is just 1 cm longer, at 28.7 cm. It is also strictly 2.5 slots thick. There are some white illuminated elements, which are controllable using the same software as on the Radeon RX 6000 series. Both cards feature two DisplayPort 2.1 outputs, one HDMI 2.1a and one USB-C.

This is AMD's first attempt at a gaming GPU made of chiplets (multiple logic dies on a multi-chip module). The company has built MCM GPUs in the past, but those were essentially a GPU die surrounded by HBM stacks. The new "Navi 31" GPU at the heart of the RX 7900 XTX and RX 7900 XT features seven chiplets: a central large graphics compute die (GCD), surrounded by six memory control-cache dies (MCDs). The GCD is built on the TSMC 5 nm EUV silicon fabrication process, the same one on which AMD builds its "Zen 4" CCDs, while the MCDs are each fabricated on the TSMC 6 nm process.

The GCD contains the GPU's main graphics rendering machinery, including the front-end, the RDNA3 compute units, the Ray Accelerators, the display controllers, the media engine and the render backends. The GCD physically features 96 RDNA3 Unified Compute Units (CUs), for 6,144 stream processors. All 96 of these are enabled on the RX 7900 XTX; the RX 7900 XT has 84 of the 96 compute units enabled, which works out to 5,376 stream processors. The new RDNA3 compute unit introduces dual-issue stream processors, which essentially double throughput generation-over-generation. This is a VLIW approach; AMD does not double the rated shader count, though, so it's 6,144 for the full GPU (96 CUs x 64 shaders per CU, not 128 shaders per CU).

Each of the six MCDs contains a 64-bit wide GDDR6 memory interface and 16 MB of Infinity Cache memory. Six of these MCDs add up to the GPU's 384-bit wide memory interface and 96 MB of total Infinity Cache. The GCD addresses the 384-bit memory interface as one contiguous addressable block, not as 6x 64-bit segments. Most GPUs of the past decade have had multiple on-die memory controllers making up a larger memory interface; "Navi 31" moves these to separate chiplets, which reduces the size of the main GCD tile and helps with yield rates. The Radeon RX 7900 XTX is configured with 24 GB of GDDR6 memory across the chip's entire 384-bit wide memory bus, while the RX 7900 XT gets 20 GB of GDDR6 memory across a 320-bit wide memory bus (one of the MCDs is disabled). The disabled MCD isn't "missing"; a dummy silicon die sits in its place to provide stability for the cooler mounting.

Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over SIMD, and a second-generation RT accelerator that provides new dedicated instructions and a 50% uplift in ray tracing performance. The AI cores are not exposed through software; developers cannot use them directly (unlike NVIDIA's Tensor Cores), as they are used exclusively by the GPU's internal engines. Later today, AMD will give us a more technical breakdown of the RDNA3 architecture.

For the RX 7900 XTX, AMD is broadly claiming an up to 70% increase in traditional raster 3D graphics performance over the previous-generation flagship RX 6950 XT at 4K Ultra HD native resolution, and an up to 60% increase in ray tracing performance.
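As a quick sanity check on the CU and MCD arithmetic above, here is a minimal sketch (our own Python; the per-CU and per-MCD figures come from the article, while the helper names are hypothetical) that derives each card's shader count, bus width and Infinity Cache size from its enabled chiplet counts:

```python
# Navi 31 building blocks, per the article:
SP_PER_CU = 64      # RDNA3 rates 64 shaders per CU (dual-issue is not double-counted)
MCD_BUS_BITS = 64   # each MCD carries a 64-bit GDDR6 interface...
MCD_CACHE_MB = 16   # ...and 16 MB of Infinity Cache

def navi31_config(name, cus, mcds):
    # Derive the headline specs from the enabled CU and MCD counts.
    print(f"{name}: {cus * SP_PER_CU} stream processors, "
          f"{mcds * MCD_BUS_BITS}-bit bus, {mcds * MCD_CACHE_MB} MB Infinity Cache")

navi31_config("RX 7900 XTX", cus=96, mcds=6)  # 6144 SPs, 384-bit, 96 MB
navi31_config("RX 7900 XT",  cus=84, mcds=5)  # 5376 SPs, 320-bit, 80 MB
```

The 80 MB Infinity Cache figure for the RX 7900 XT falls out of the same per-MCD math; AMD's spec sheet remains the authority on the final number.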
These gains should be good enough to catch the RTX 4080, but AMD was clear that it is not targeting RTX 4090 performance, which comes at a much higher price point, too.

AMD is attributing its big 54% performance-per-Watt generational gain to a revolutionary asynchronous clock-domain technology that runs the various components on the GCD at different frequencies to minimize power draw. This seems similar in concept to the "shader clock" on some older NVIDIA architectures.

AMD also announced FSR 3.0, the latest generation of its performance-enhancement technology, featuring Fluid Motion. This is functionally similar to DLSS 3 Frame Generation, promising a 100% uplift in performance at comparable quality, essentially because the GPU generates every alternate frame without involving its graphics rendering pipeline.

The new dual-independent media-acceleration engines enable simultaneous encode and decode for AVC and HEVC formats, hardware-accelerated encode and decode for AV1, and AI-accelerated enhancements. The new AMD Radiance Display Engine introduces native support for DisplayPort 2.1, with 54 Gbps of display link bandwidth and 12 bpc color. This enables resolutions of up to 8K @ 165 Hz or 4K @ 480 Hz over a single cable.

The "Navi 31" GPU in its full configuration has a raw compute throughput of 61 TFLOPs, compared to 23 TFLOPs for the RDNA2-based Navi 21 (a 165% increase). The shaders and the front-end of the GPU operate at different clock speeds, with the shaders running at up to 2.30 GHz and the front-end at up to 2.50 GHz. This decoupling has a big impact on power savings, with AMD claiming a 25% saving compared to running both domains at the same 2.50 GHz clock. AMD claims the Radeon RX 7900 XTX offers a 70% performance increase over the RX 6950 XT.
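For reference, the TFLOPs figures above line up with the standard peak-FP32 formula (shaders x 2 ops per FMA clock x frequency). A minimal sketch of the math, assuming the roughly 2.5 GHz boost clock that the 61 TFLOPs figure implies, and Navi 21's 80 CUs at its roughly 2.25 GHz boost:

```python
def peak_fp32_tflops(cus, clock_ghz, dual_issue=1, sp_per_cu=64):
    # Peak FP32 = shaders x dual-issue factor x 2 ops per clock (FMA) x clock.
    return cus * sp_per_cu * dual_issue * 2 * clock_ghz / 1000

# Navi 31: 96 CUs, dual-issue, assumed ~2.5 GHz boost -> ~61.4 TFLOPs
print(peak_fp32_tflops(96, 2.5, dual_issue=2))  # 61.44
# Navi 21: 80 CUs, single-issue, assumed ~2.25 GHz boost -> ~23.0 TFLOPs
print(peak_fp32_tflops(80, 2.25))               # 23.04
```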
The complete slide-deck follows.
336 Comments on AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion
Instead of accusations, you could maybe perhaps try to explain what you want? Just saying...
I think I've explained my point pretty well, being: manufacturer benchmarks are always flawed and biased one way or another. Intel, Nvidia, AMD, it doesn't matter. Whether they give you any numbers or not, you should always wait for independent reviews before you draw conclusions. Then you just proved it by pointing out that they used RT and FSR in the launch video. Yes, they did. Yes, it's flawed. It's always been!
- Have AMD GPU announcements always been so light on actual raw data? Almost all the charts don't include raw FPS. AMD has always touted itself for transparency; now, instead of raw FPS, we get FSR FPS.
- FSR 3.0 mirrors DLSS 3.0, yet there's no word on increased latency. Weird. At least NVIDIA adds NVIDIA Reflex to partially mitigate the issue. Also, the vague "available in 2023" sounds like they were not ready for DLSS 3.0 but had to counter it.
- I don't understand how to read the DXR performance numbers. Looks like AMD will again only compete with previous-generation NVIDIA cards.
- Funnily enough, AMD did not actually explain why they used, or needed to use, the chiplet design.
I don't give two shits about high-FPS 4K, 8K and the other perks of being a rich first-world-country citizen. I'm looking forward to something that costs below $350 and rivals at least the RTX 3070. According to the Steam Hardware Survey, cards under $350 are what drives progress, not these expensive tech toys for the rich. I don't understand all this clamor about top-tier cards. Over 95% of gamers cannot afford them, and with them you also need a top-tier CPU and quite an expensive monitor.
P.S. My next GPU will be RDNA 3.0 because I've grown tired of NVIDIA Linux drivers, NVIDIA's pricing and NVIDIA's product segmentation. The company has seemingly stopped caring about budget users.
What are they hiding that they are not willing to show us?
So here is the cost per frame according to Techspot / HUB. I'm using their chart because they used a 5800X3D in the review, and W1zzard showed that at 4K there was an advantage in using that CPU over the vanilla 5800X, so this chart has a bit less 4K bottlenecking for the 4090.
Given that the 54% increase in perf/watt provided by AMD was the 7900XTX @ 300 W vs the 6900XT @ 300 W, we can do some funny math to get an estimate, or we can just take that figure, apply it to the 6950XT (which has slightly worse perf/watt than the 6900XT) and ignore that the 7900XTX has a higher power draw than the 6950XT. That means I am going to apply a 1.54x scaling factor to the 6950XT score to estimate 7900XTX performance in this suite of games. Given that AMD showed 3 games averaging 1.57x more performance in raster, that seems fair enough without being overly pessimistic or optimistic.
So with that out of the way, the 7900XTX would get an estimated 131 fps in the above suite. The 4080 looks to be about 20% ahead of the 3090Ti according to the NV charts (which, while covering only 3 games, seemed to be in the ballpark of where the 4090's raster improvement landed, so not cherry-picked by the looks of it), giving it an estimated 109 fps. This is all raster performance, obviously. Anyway, to get to the point, it gives us the following:
4090 cost / frame = $11.11
4080 cost / frame = $11.01
7900XTX cost / frame = $7.63
Quite an advantage for AMD there, even vs the price reduced current gen stuff.
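For anyone who wants to reproduce the math, here's a minimal sketch (the Python is mine; the MSRPs are the launch prices, and the fps figures are the estimates above, not measurements):

```python
# Cost per frame = price / estimated 4K raster fps. MSRPs are launch
# prices; the fps figures are the post's estimates, not measurements.
cards = {
    "RTX 4090":  (1599, 144),
    "RTX 4080":  (1199, 109),
    "RX 7900XTX": (999, 131),  # 6950XT suite average x 1.54
}
for name, (price, fps) in cards.items():
    print(f"{name}: ${price / fps:.2f} per frame")
# -> roughly $11.10, $11.00 and $7.63
```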
What about RT, though? Going through the Techspot numbers, the 4090 has a 4K native RT scaling factor of 0.46x, the 3090Ti has a scaling factor of 0.42x, and the 6950XT had a scaling factor of 0.31x. I will use 0.46x for the 4080 and 0.31x for the 7900XTX. Actual numbers may be worse, given how cut down the 4080 is and that RT on the 7900XTX looked to scale worse than the raster improvement, but it is the best estimate we have. Anyway, that ends up with the following:
4090 RT cost / frame $24.24
4080 RT cost / frame $24.00
7900XTX RT cost / frame $24.37
3090Ti RT cost / frame $28.95
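Same arithmetic for RT, as a sketch (scaling factors are from the post; the 3090Ti's price and raster fps are my assumptions, back-solved to land near the $28.95 figure above):

```python
# RT cost per frame: native 4K RT fps estimated as raster fps x RT
# scaling factor. The 3090Ti price and raster fps are assumptions.
cards_rt = {
    "RTX 4090":   (1599, 144, 0.46),
    "RTX 4080":   (1199, 109, 0.46),  # assumes the 4090's scaling factor
    "RX 7900XTX":  (999, 131, 0.31),  # assumes the 6950XT's scaling factor
    "RTX 3090Ti": (1099,  91, 0.42),  # assumed price and raster fps
}
for name, (price, raster_fps, scale) in cards_rt.items():
    print(f"{name}: ${price / (raster_fps * scale):.2f} per RT frame")
# -> roughly $24.1, $23.9, $24.6 and $28.8; small differences vs the
#    figures quoted above come from the rounded fps inputs
```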
So the 7900XTX is priced about in line with the RT performance of the 40 series, but offers a large raster perf/$ advantage. The 4090 does, and the 4080 looks like it will, offer better absolute RT performance at no real premium, so it looks to me like we as customers have options based on our wants, which is always nice.
This does leave an opening for the 4070 to actually offer the best RT bang-for-buck if priced right. If you take the raster perf of that card to be about 3080Ti level and apply the 4090 scaling factor, its RT performance looks to be a bit worse than the 7900XTX's, but if priced at $600-700 it would have better RT perf/$ than the AMD cards and worse raster perf/$. At that price point, 7800XT vs 4070 could easily be a case of: go 4070 if you want better RT performance, or go 7800XT if you want better raster performance.
EDIT: The picture does not seem to display after posting; a link has been added as well, in case the issue is not just on my side.
EDIT2: Thanks, TheLostSwede.
And I don't mean the knocked-out rival:
www.techpowerup.com/forums/threads/nvidia-cancels-geforce-rtx-4080-12gb-to-relaunch-it-with-a-different-name.299859/page-14#post-4858912
:roll:
I mean the remaining one, that overpriced piece of something called 4080.
:D
PS
AMD's statements applied to TPU charts at 4K:
I don't need a 4090. I don't even want one. I need a solid, not too power-hungry GPU that runs games properly at a good price/perf. Fuck RT, honestly; and similar things apply to other perceived value where there is none (DLSS 3). And that is all NV wrote this gen. I'm not missing a thing going Red... only overinflated marketing to keep DOA tech afloat.
Also, TPU benchmarks are flawed since W1zzard used the 5800X. There's almost a 7% difference between the 5800X and the 12900K at 4K:
www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-core-i9-12900k/2.html
It's clear as day how many AMD fanboys there are who can't even see how sus the whole presentation was. I've shown you proof that over the last 5 years they've ALWAYS shown cherry-picked benchmarks. But OK.
Is it forbidden to have an opinion on what AMD presented that differs from the YT echo chamber? I smell sheep
Figures. :roll: Oh. And that is relevant in this thread, because? :D
Because AMD fanboys love to hype the shit out of AMD's products.
RDNA3 was supposed to show us amazing leaps in performance, yet it hasn't. Seems like RDNA2 was it.
I am against the big green troll and against feeding him, so I am not giving a coin to NVIDIA.
Now you may ask yourself why they didn't compare against the 3000 series from NVIDIA. Because of perception. If they did that, folks would take it as AMD competing with the 3000 series instead of the 4000 series. Maybe not folks on this forum, but those who are new to the hobby.
It’s all just one big marketing dick measuring contest and nothing new here
Wish I had some popcorn.
I like the more realistic pricing provided by AMD here, but I'm not thrilled about pricing still being so high compared to just a few generations back. I know technology advances and prices go up (inflation, scarcity, demand, wage increases, etc.), but not long ago we were all seeing high-end cards such as the 980Ti at an MSRP of $650 or the AMD R9 Fury X at an MSRP of $649.
How about the GTX 1080 at an MSRP of $599 (and the 1080Ti at $699), or the Vega 64 at $499?
Oh well, I guess I just dwell on the old pricing of better days and keep hoping things will settle down more. At least AMD isn't trying to rake people over the coals with extreme pricing like Nvidia is doing now.
I truly hope the 7900 cards put Nvidia to shame. Even if they can't quite match the 4090, if they can kick the crap out of the 4080 16GB and do it for $200-300 less, that would be awesome.
I understand this is the reality now. Chips will get more expensive as they get more advanced from here on out. Moore's Law is dead (again, but in a different way).
With the same success, I can tell you that the RX 7900 XTX will be faster than the RTX 4090, and that the RTX 4090 needs a 50% price reduction, otherwise no one will buy it except the diehard NVIDIA fanboys/girls.