Thursday, November 3rd 2022

AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

AMD today announced the Radeon RX 7900 XTX and Radeon RX 7900 XT gaming graphics cards, debuting its next-generation RDNA3 graphics architecture. The two new cards come in at $999 and $899—squarely targeting the $1,000 high-end premium price point.
Both cards will be available on December 13th—not only the AMD reference design, which is sold through AMD.com, but also custom-design variants from the many board partners on the same day. AIBs are expected to announce their products in the coming weeks.

The RX 7900 XTX is priced at USD $999, and the RX 7900 XT at $899—a surprisingly small difference of only $100 for a performance gap that will certainly be larger, probably in the 20% range. Both the Radeon RX 7900 XTX and RX 7900 XT use the PCI-Express 4.0 interface; Gen 5 is not supported with this generation. The RX 7900 XTX has a typical board power of 355 W, or about 95 W less than that of the GeForce RTX 4090. The reference-design RX 7900 XTX uses conventional 8-pin PCIe power connectors, as will custom-design cards when they come out. AMD's board partners will create units with three 8-pin power connectors, for higher out-of-the-box performance and better OC potential. The decision not to use the 16-pin power connector that NVIDIA uses was made "well over a year ago," mostly because of cost, complexity, and the fact that these Radeons don't require that much power anyway.
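The connector math checks out on paper. As a rough sketch (the 150 W and 75 W capacity figures are PCIe spec ratings, not something AMD stated here):

```python
# Rough power-budget check: each 8-pin PCIe auxiliary connector is rated
# for 150 W, and the PCIe slot itself supplies up to 75 W (spec values).
def board_power_capacity_w(eight_pin_connectors):
    return eight_pin_connectors * 150 + 75

print(board_power_capacity_w(2))  # 375 -- covers the 355 W TBP reference card
print(board_power_capacity_w(3))  # 525 -- headroom for partner OC designs
```

With two 8-pins the reference card sits about 20 W below its delivery limit, which is why partner boards that chase higher clocks add a third connector.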

The reference RX 7900-series board design has the same card height as the RX 6950 XT, but is just 1 cm longer, at 28.7 cm. It is also strictly 2.5 slots thick. There are some white illuminated elements, which are controllable using the same software as on the Radeon RX 6000 series. Both cards feature two DisplayPort 2.1 outputs, one HDMI 2.1a, and one USB-C.
This is AMD's first attempt at a gaming GPU made of chiplets (multiple logic dies on a multi-chip module). The company has built MCM GPUs in the past, but those have essentially been the GPU die surrounded by HBM stacks. The new "Navi 31" GPU at the heart of the RX 7900 XTX and RX 7900 XT features seven chiplets—a central large graphics compute die (GCD), surrounded by six memory control-cache dies (MCDs). The GCD is built on the TSMC 5 nm EUV silicon fabrication process—the same one on which AMD builds its "Zen 4" CCDs—while the MCDs are each fabricated on the TSMC 6 nm process.

The GCD contains the GPU's main graphics rendering machinery, including the front-end, the RDNA3 compute units, the Ray Accelerators, the display controllers, the media engine, and the render backends. The GCD physically features 96 RDNA3 Unified Compute Units (CUs), for 6,144 stream processors. All 96 of these are enabled on the RX 7900 XTX. The RX 7900 XT has 84 out of 96 compute units enabled, which works out to 5,376 stream processors. The new RDNA3 compute unit introduces dual-issue stream processors, which essentially double throughput generation-over-generation. This is a VLIW-like approach; AMD does not double the rated shader count, though, so the full GPU is rated at 6,144 stream processors (96 CUs × 64 shaders per CU, not 128 shaders per CU).
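The rated shader counts follow directly from the CU counts. A quick sketch of that arithmetic (64 stream processors per CU is AMD's rated figure, with dual-issue deliberately not counted twice):

```python
# Stream-processor count: CUs x 64 rated shaders per CU.
# (RDNA3's dual-issue capability is not double-counted in official specs.)
def stream_processors(compute_units, sp_per_cu=64):
    return compute_units * sp_per_cu

print(stream_processors(96))  # 6144 -- RX 7900 XTX (full Navi 31)
print(stream_processors(84))  # 5376 -- RX 7900 XT (84 of 96 CUs enabled)
```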

Each of the six MCDs contains a 64-bit wide GDDR6 memory interface and 16 MB of Infinity Cache memory. The six MCDs add up to the GPU's 384-bit wide memory interface and 96 MB of total Infinity Cache. The GCD addresses the 384-bit wide memory interface as one contiguous addressable block, not as 6x 64-bit. Most modern GPUs of the past decade have had multiple on-die memory controllers making up a larger memory interface; "Navi 31" moves these onto separate chiplets. This approach reduces the size of the main GCD tile, which will help with yield rates. The Radeon RX 7900 XTX is configured with 24 GB of GDDR6 memory across the chip's entire 384-bit wide memory bus, while the RX 7900 XT gets 20 GB of GDDR6 memory across a 320-bit wide memory bus (one of the MCDs is disabled). The disabled MCD isn't simply missing; dummy silicon dies sit in its place to provide stability for the cooler mounting.
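The memory configuration scales linearly with the number of active MCDs. A small sketch (note the 80 MB Infinity Cache figure for the RX 7900 XT is inferred here from 5 × 16 MB, not stated above):

```python
# Memory subsystem from the MCD count: each MCD carries a 64-bit GDDR6
# controller and 16 MB of Infinity Cache.
def memory_config(active_mcds):
    bus_width_bits = active_mcds * 64
    infinity_cache_mb = active_mcds * 16
    return bus_width_bits, infinity_cache_mb

print(memory_config(6))  # (384, 96) -- RX 7900 XTX, all MCDs active
print(memory_config(5))  # (320, 80) -- RX 7900 XT, one MCD disabled
```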

Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over plain SIMD, and a second-generation RT accelerator that provides new dedicated instructions and a 50% uplift in ray tracing performance. The AI cores are not exposed through software—developers cannot use them directly (unlike NVIDIA's Tensor Cores); they are used exclusively by the GPU's internal engines. Later today, AMD will give us a more technical breakdown of the RDNA3 architecture.
For the RX 7900 XTX, AMD is broadly claiming an up to 70% increase in traditional raster 3D graphics performance over the previous-generation flagship RX 6950 XT at 4K Ultra HD native resolution, and an up to 60% increase in ray tracing performance. These gains should be enough to catch the RTX 4080, but AMD was clear that it is not targeting RTX 4090 performance, which comes at a much higher price point, too.
AMD is attributing its big 54% performance/Watt generational gains to a revolutionary asynchronous clock domain technology that runs the various components on the GCD at different frequencies, to minimize power draw. This seems similar in concept to the "shader clock" on some older NVIDIA architectures.
AMD also announced FSR 3.0, the latest generation of its performance enhancement, featuring Fluid Motion technology. This is functionally similar to DLSS 3 Frame Generation, promising a 100% uplift in performance at comparable quality—essentially because the GPU is generating every alternate frame without involving its graphics rendering pipeline.
The new dual-independent media-acceleration engines enable simultaneous encode and decode for the AVC and HEVC formats, hardware-accelerated encode and decode for AV1, and AI-accelerated enhancements. The new AMD Radiance Display Engine introduces native support for DisplayPort 2.1, with 54 Gbps of display link bandwidth and 12 bpc color. This enables resolutions of up to 8K @ 165 Hz with a single cable, or 4K @ 480 Hz with a single cable.
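To put the 54 Gbps link in perspective: the uncompressed data rate of those headline modes far exceeds the link bandwidth, so they necessarily rely on Display Stream Compression. A back-of-the-envelope sketch (active pixels only, blanking overhead ignored):

```python
# Uncompressed video data rate for a display mode (active pixels only,
# 3 color channels, no blanking overhead).
def raw_gbps(h, v, refresh_hz, bits_per_channel, channels=3):
    return h * v * refresh_hz * bits_per_channel * channels / 1e9

print(round(raw_gbps(7680, 4320, 165, 12)))  # ~197 Gbps for 8K @ 165 Hz, 12 bpc
print(round(raw_gbps(3840, 2160, 480, 12)))  # ~143 Gbps for 4K @ 480 Hz, 12 bpc
```

Both figures are several times the 54 Gbps link rate, which is why these modes depend on compression rather than raw bandwidth alone.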
The "Navi 31" GPU in its full configuration has a raw compute throughput of 61 TFLOPs, compared to 23 TFLOPs for the RDNA2-based Navi 21 (a 165% increase). The shaders and the front-end of the GPU operate at different clock speeds, with the shaders running at up to 2.30 GHz and the front-end at up to 2.50 GHz. This decoupling has a big impact on power savings, with AMD claiming a 25% saving compared to running both domains at the same 2.50 GHz clock.
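The 61 TFLOPs figure is consistent with dual-issue FMA at the 2.5 GHz front-end clock. A sketch of that arithmetic (the Navi 21 shader count and ~2.25 GHz boost clock are the commonly cited specs, assumed here rather than stated above):

```python
# Peak FP32 throughput: shaders x FLOPs per shader per clock x clock (GHz).
# An FMA counts as 2 FLOPs; RDNA3's dual-issue doubles that to 4 per clock.
def peak_tflops(shaders, flops_per_clock, clock_ghz):
    return shaders * flops_per_clock * clock_ghz / 1000

navi31 = peak_tflops(6144, 4, 2.5)   # dual-issue FMA, 2.5 GHz
navi21 = peak_tflops(5120, 2, 2.25)  # single-issue FMA, assumed 2.25 GHz boost
print(round(navi31, 1), round(navi21, 1))  # 61.4 23.0
print(round((navi31 / navi21 - 1) * 100))  # 167 -- roughly the quoted 165%
```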
AMD claims the Radeon RX 7900 XTX to offer a 70% performance increase over the RX 6950 XT.

The complete slide-deck follows.

336 Comments on AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

#51
trsttte
CallandorWoT: you won't be able to buy. Bots and third-party scalpers will get them all. Wait and see.
Just wait for stock to stabilize, prices will drop

edit: yeah, if you want to build right now it's tough, but waiting out the first wave is bound to save money and stress
Crackong: 2 x 8pin
They actually pointed that out as a selling point.. and it made me laugh. They did catch up with the news.
Ahah, they already announced that a couple of weeks ago, right after the first problems with the 12-pin connector appeared
#52
3211
AnarchoPrimitiv: 70% performance increase for the same price sounds good... at $1,699 MSRP, do people think the 4090 is 70% faster than the 7900 XTX to justify the 70% increase in price?
All the games in the slide show a 50% increase except Cyberpunk. That means it's a 50% increase, not 70. Also, the 4090 is $1,600, not $1,700.
#53
freeagent
Well, that 999 dollar card will be about 1537 here.

That's a hard pass. Gonna have to go back to my other hobbies since I am being priced out of this one.
#54
Space Lynx
Astronaut
freeagent: Well, that 999 dollar card will be about 1537 here.

That's a hard pass. Gonna have to go back to my other hobbies since I am being priced out of this one.
well said good sir.

yep. my gtx 1070 laptop doesn't do it for me, so i mostly just read and go for nature walks this year. it's been enlightening. to realize that these are just hobbies at the end of the day. which is one reason i'm eccentric with them sometimes, cause a little fun is what it's all about lmao
#55
VeqIR
I’m officially excited. Most likely going full AMD for my next build (hopefully 7800X3D will be out at that point), after using intel for over a decade and nVidia desktop for that long. My secondary desktop (made for a family member) currently has an older AMD card, and I vastly prefer the Radeon Software postprocessing options and results to nVidia Experience overlay options too.
#56
ModEl4
btk2k2: Where is endnote 816? Seems to be missing for the perf/watt claim.
RX-816 – Based on AMD internal analysis November 2022, on a system configured with a Radeon RX 7900 XTX GPU, driver 31.0.14000.24040, AMD Ryzen 9 5900X CPU, 32 GB DDR4-7200MHz, ROG CROSSHAIR VIII HERO (WI-FI) motherboard, set to 300W TBP, on Win10 Pro, versus a similarly configured test system with a 300W Radeon 6900 XT GPU and driver 31.0.12019.16007. System manufacturers may vary configurations, yielding different results.
#57
Zach_01
freeagent: Well, that 999 dollar card will be about 1537 here.

That's a hard pass. Gonna have to go back to my other hobbies since I am being priced out of this one.
And how much for the nVidia 1600 USD card?
#58
Space Lynx
Astronaut
can my EVGA GD 700 W PSU handle a 13600K (stock, I won't be overclocking; in fact I may do a very light undervolt) and a 7900 XT? I know it probably can't handle the XTX, but if I opt for the XT I should be ok, ya?
#59
HTC
This aged REALLY WELL.

Now we'll need to see if the flagship really trades blows with the 4090, at least in raster.
#60
freeagent
Zach_01: And how much for the nVidia 1600 USD card?
About $2506 :laugh:

That's the price after conversion, plus 14% tax.. brutal, man.
#61
btk2k2
ModEl4: RX-816 – Based on AMD internal analysis November 2022, on a system configured with a Radeon RX 7900 XTX GPU, driver 31.0.14000.24040, AMD Ryzen 9 5900X CPU, 32 GB DDR4-7200MHz, ROG CROSSHAIR VIII HERO (WI-FI) motherboard, set to 300W TBP, on Win10 Pro, versus a similarly configured test system with a 300W Radeon 6900 XT GPU and driver 31.0.12019.16007. System manufacturers may vary configurations, yielding different results.
They went sneaky on that. Disappointing.
#62
Zubasa
btk2k2: They went sneaky on that. Disappointing.
Not really, the reference 6900 XT is a 300 W card.
If they compared it to the 375 W 6950 XT, it would actually have made AMD look better, because the 6950 XT is out of its efficiency curve.
#63
Nkd
pat-roner: Kinda crazy how little I care about raytracing. Sure it looks nice, but in the games where I even notice, I don't really need competitive FPS and a stable 60 is enough
I feel the same way. I have a 4090. My eyes are like blind to it. I notice HDR more than ray tracing.
#64
TheDeeGee
I wouldn't actually mind getting an AMD GPU, but is there a way to have access to stuff like this on the AMD side?

Because I often use that for older games, from 2010 and before.
#65
pavle
TheDeeGee: Does AMD have something similar to Nvidia in terms of Nvidia Inspector? To access hidden Anti-Aliasing options for older games (2010 and older) such as Sparse Grid Supersampling, 4x4 SS etc, cuz if so i may move to AMD instead.
There used to be RadeonMod, but you don't have to worry: ATi has the best AA in the industry (at least for older games); AF is a bit worse than nvidia's, but still good. Just set it to 4xAAA (adaptive AA w/ supersampling) with 8xAF, or better, 8xAAA with 16xAF. Smooooothvision. :)
#66
btk2k2
Zubasa: Not really, the reference 6900 XT is a 300 W card. If they compared it to the 375 W 6950 XT, it would actually have made AMD look better, because the 6950 XT is out of its efficiency curve.
The reference 6950 XT is 335 W. The AIB 6950 XTs are 375 W.

Perf/W for the reference 6950 XT is about the same as the 6900 XT's, maybe a fraction worse.

The sneaky part is that they reduced the TBP of the test 7900 XTX to 300 W rather than having it run at stock settings, which is what AMD has done in prior launches when measuring perf/Watt.
#67
Vayra86
Imagine what's going to appear below this 899 price point. It's looking like I'm about to go for a dye change.

:rockout:
#68
Space Lynx
Astronaut
pavle: There used to be RadeonMod, but you don't have to worry, ATi has the best AA in the industry (at least for older games), AF is a bit worse than nvidia's but still good. Just set it to 4xAAA (adaptiveAA w/ supersampling) with 8xAF or better - 8xAAA with 16xAF. Smooooothvision. :)
I think he's asking whether you can find the settings you just mentioned in the Radeon driver. So where do you find "adaptive AA w/ supersampling"? I'd like to know as well, for older games.
#69
ModEl4
Where is the 4 GHz potential indication?
A 2.3 GHz game clock for the RX 7900 XTX and a 2.0 GHz game clock for the RX 7900 XT on 5 nm, versus a 2.61 GHz game clock for the RX 6500 XT on 6 nm and a 2.5 GHz game clock for the RX 6750 XT on 7 nm.
It seems unlikely that highly overclocked RX 7900 XTX models on air will be capable of hitting more than 3 GHz/2.8 GHz front-end/shader clocks.
Up to 1.7x vs. the 6950 XT at 4K and up to 1.6x vs. the 6950 XT with raytracing doesn't mean 1.7x on average, and probably doesn't even mean 1.6x for the average 4K raster difference.
(It can probably reach 1.6x with a specific game testbed selection and CPU, but not on the current TPU 5800X games testbed.)
The RX 7900 XTX will be slower than the RTX 4090 at 4K, but at $999 it doesn't matter—great value (relatively speaking).
A little bit less value for the 7900 XT, since the difference between them should be around 15%.
They will pressure the higher Ampere lineup price-wise for sure, and all the cards from Nvidia & AMD will drop a little gradually, but the effect will diminish as you go down to the lower-priced models, until it has no effect anymore.
#70
mahoney
What's the reason for them not showing benchmarks vs the 4090, or at least the 3090 Ti? They really enjoyed showcasing the 6800 vs Nvidia GPUs.
#71
Psychoholic
Might be the first time I go RADEON since my Radeon 7970 GHz Edition!
#72
Space Lynx
Astronaut
Psychoholic: Might be the first time i go RADEON since my Radeon 7970 Ghz edition!
those were some good times. i loved my 7950 and 7970.

and my 6950 before that. good times indeed
#73
TheinsanegamerN
Upgrayedd: Why fuck RT?
Because a graphics bell and whistle that tanks your framerate while producing effects so minute that even with still frames people still can't see the difference is totally worthless to everyone but spec-whores. To those of us that PLAY games, RT is functionally useless except for ambient occlusion, which can be done far more easily with shader tricks, without requiring the power of a nuclear sub to operate. The number of newer games coming out with impressive lighting effects that don't need any form of RT should indicate that RT, at least in its current form, will go down the same road as Hairworks and PhysX.
freeagent: Well, that 999 dollar card will be about 1537 here.

That's a hard pass. Gonna have to go back to my other hobbies since I am being priced out of this one.
Why not just... not go for the halo card? I've been building gaming PCs for over 15 years at this point, and never have I owned the big-dog card, with the exception of the Vega 64s I bought for $250 used. The upper mid-range has been the best bang-for-buck for as long as I can remember.
#74
DeathtoGnomes
We all know Nvidia elevated the pricing of its 40xx cards to increase sales of its 30xx surplus stock. AMD coming in at this price, with such a slight performance difference, puts a lot of pressure on Nvidia's sales. As someone already said, "bring on the price wars!".
#75
Psychoholic
Decoupled shader clocks.. That's a throwback, reminding me of back in the day when you could overclock the shader and core clocks separately on GeForce.