Thursday, November 3rd 2022

AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

AMD today announced the Radeon RX 7900 XTX and Radeon RX 7900 XT gaming graphics cards, debuting its next-generation RDNA3 graphics architecture. The two new cards come in at $999 and $899, targeting the high-end premium segment around the $1,000 mark.
Both cards will be available on December 13th: not only the AMD reference design, which is sold through AMD.com, but also custom-design variants from the many board partners on the same day. AIBs are expected to announce their products in the coming weeks.

The RX 7900 XTX is priced at USD $999 and the RX 7900 XT at $899, a surprisingly small gap of only $100 for a performance difference that will certainly be larger, probably in the 20% range. Both the Radeon RX 7900 XTX and RX 7900 XT use the PCI-Express 4.0 interface; Gen 5 is not supported with this generation. The RX 7900 XTX has a typical board power of 355 W, about 95 W less than that of the GeForce RTX 4090. The reference-design RX 7900 XTX uses conventional 8-pin PCIe power connectors, as will custom-design cards when they come out. Some of AMD's board partners will create units with three 8-pin power connectors, for higher out-of-the-box performance and better OC potential. The decision not to use the 16-pin power connector that NVIDIA uses was made "well over a year ago", mostly because of cost, complexity, and the fact that these Radeons don't require that much power anyway.

The reference RX 7900-series board design has the same card height as the RX 6950 XT, but is just 1 cm longer, at 28.7 cm, and strictly 2.5 slots thick. There are white illuminated elements, controllable using the same software as on the Radeon RX 6000 series. Both cards feature two DisplayPort 2.1 outputs, one HDMI 2.1a port and one USB-C port.
This is AMD's first attempt at a gaming GPU made of chiplets (multiple logic dies on a multi-chip module). The company has built MCM GPUs in the past, but those have essentially been the GPU die surrounded by HBM stacks. The new "Navi 31" GPU at the heart of the RX 7900 XTX and RX 7900 XT features seven chiplets—a central large graphics compute die (GCD), surrounded by six memory control-cache dies (MCDs). The GCD is built on the TSMC 5 nm EUV silicon fabrication process—the same one on which AMD builds its "Zen 4" CCDs—while the MCDs are each fabricated on the TSMC 6 nm process.

The GCD contains the GPU's main graphics rendering machinery, including the front-end, the RDNA3 compute units, the Ray Accelerators, the display controllers, the media engine and the render backends. The GCD physically features 96 RDNA3 Unified Compute Units (CUs), for 6,144 stream processors. All 96 of these are enabled on the RX 7900 XTX. The RX 7900 XT has 84 of the 96 compute units enabled, which works out to 5,376 stream processors. The new RDNA3 compute unit introduces dual-issue stream processors, which essentially double throughput generation-over-generation. This is a VLIW-style approach; AMD does not double the rated shader count, though, so the full GPU is rated at 6,144 stream processors (96 CUs x 64 shaders per CU, not 128 shaders per CU).
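The stream-processor counts above follow from a simple product. A quick sketch (the 64-shaders-per-CU figure is as AMD rates it, with dual-issue not double-counted):

```python
# Stream-processor counts for "Navi 31" variants, from the CU counts above.
SHADERS_PER_CU = 64  # AMD rates 64 shaders per CU; dual-issue is not double-counted

def stream_processors(enabled_cus: int) -> int:
    """Rated stream processors for a given number of enabled compute units."""
    return enabled_cus * SHADERS_PER_CU

print(stream_processors(96))  # RX 7900 XTX: 6144
print(stream_processors(84))  # RX 7900 XT: 5376
```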

Each of the six MCDs contains a 64-bit wide GDDR6 memory interface and 16 MB of Infinity Cache memory. The six MCDs add up to the GPU's 384-bit wide memory interface and 96 MB of total Infinity Cache memory. The GCD addresses the 384-bit wide memory interface as one contiguous addressable block, not as 6x 64-bit. Most modern GPUs of the past decade have had multiple on-die memory controllers making up a larger memory interface; "Navi 31" moves these to separate chiplets. This approach reduces the size of the main GCD tile, which helps with yield rates. The Radeon RX 7900 XTX is configured with 24 GB of GDDR6 memory across the chip's entire 384-bit wide memory bus, while the RX 7900 XT gets 20 GB of GDDR6 memory across a 320-bit wide memory bus (one of the MCDs is disabled). The disabled MCD isn't missing; dummy silicon dies take its place to provide stability for the cooler mounting.
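The per-MCD figures scale linearly with the number of active MCDs. A sketch of that arithmetic (the 4 GB of GDDR6 per MCD is inferred from 24 GB across six MCDs; the RX 7900 XT's 80 MB cache figure is likewise derived, not quoted):

```python
# Per-MCD figures from the article: 64-bit GDDR6 interface and 16 MB of
# Infinity Cache per MCD; 4 GB of GDDR6 per MCD inferred from 24 GB / 6.
def memory_config(active_mcds: int, gb_per_mcd: int = 4) -> dict:
    return {
        "bus_width_bits": active_mcds * 64,
        "infinity_cache_mb": active_mcds * 16,
        "vram_gb": active_mcds * gb_per_mcd,
    }

print(memory_config(6))  # RX 7900 XTX: 384-bit, 96 MB, 24 GB
print(memory_config(5))  # RX 7900 XT: 320-bit, 80 MB, 20 GB
```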

Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over SIMD, and a second-generation RT accelerator with new dedicated instructions and a 50% uplift in ray tracing performance. The AI cores are not exposed through software; developers cannot use them directly (unlike NVIDIA's Tensor Cores), as they are used exclusively by the GPU's internal engines. Later today AMD will give us a more technical breakdown of the RDNA3 architecture.
For the RX 7900 XTX, AMD is broadly claiming an up to 70% increase in traditional raster 3D graphics performance over the previous-generation flagship RX 6950 XT at 4K Ultra HD native resolution, and an up to 60% increase in ray tracing performance. These gains should be enough to catch the RTX 4080, but AMD was clear that it is not targeting RTX 4090 performance, which comes at a much higher price point, too.
AMD is attributing its big 54% performance/Watt generational gains to a revolutionary asynchronous clock domain technology that runs the various components on the GCD at different frequencies, to minimize power draw. This seems similar in concept to the "shader clock" on some older NVIDIA architectures.
AMD also announced FSR 3.0, the latest generation of its performance enhancement, featuring Fluid Motion technology. This is functionally similar to DLSS 3 Frame Generation, promising a 100% uplift in performance at comparable quality—essentially because the GPU is generating every alternate frame without involving its graphics rendering pipeline.
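The claimed 100% uplift follows directly from inserting one generated frame after every rendered frame. A trivial sketch of that arithmetic (this is the claim's math only, not AMD's actual frame-generation algorithm):

```python
# One generated frame after each rendered frame doubles the displayed rate.
def displayed_fps(rendered_fps: float, generated_per_rendered: int = 1) -> float:
    """Displayed frame rate when N generated frames follow each rendered frame."""
    return rendered_fps * (1 + generated_per_rendered)

print(displayed_fps(60))  # 60 rendered fps -> 120.0 displayed fps (+100%)
```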
The new dual-independent media-acceleration engines enable simultaneous encode and decode for AVC and HEVC formats, hardware-accelerated encode and decode for AV1, and AI-accelerated enhancements. The new AMD Radiance Display Engine introduces native support for DisplayPort 2.1, with 54 Gbps of display link bandwidth and 12 bpc color. This enables resolutions of up to 8K @ 165 Hz, or 4K @ 480 Hz, over a single cable.
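A rough sanity check of those display modes (a sketch that ignores blanking intervals and link overhead) shows both exceed the 54 Gbps link rate uncompressed, which is where Display Stream Compression comes in:

```python
# Back-of-the-envelope check of the display modes above (ignores blanking
# intervals and protocol overhead).
def raw_bitrate_gbps(h: int, v: int, hz: int, bpc: int) -> float:
    """Uncompressed RGB video bitrate in Gbps at bpc bits per color component."""
    return h * v * hz * bpc * 3 / 1e9

LINK_GBPS = 54  # DisplayPort 2.1 link bandwidth cited above

for name, (h, v, hz) in {"8K @ 165 Hz": (7680, 4320, 165),
                         "4K @ 480 Hz": (3840, 2160, 480)}.items():
    raw = raw_bitrate_gbps(h, v, hz, 12)
    print(f"{name}: {raw:.0f} Gbps raw, ~{raw / LINK_GBPS:.1f}:1 compression needed")
```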
The "Navi 31" GPU in its full configuration has a raw compute throughput of 61 TFLOPs, compared to 23 TFLOPs of the RDNA2-based Navi 21 (a 165% increase). The shader and front-end of the GPU operate at different clock speeds, with the shaders running at up to 2.30 GHz, and the front-end at up to 2.50 GHz. This decoupling has a big impact on power-savings, with AMD claiming a 25% power-saving as opposed to running both domains at the same 2.50 GHz clock.
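The rated compute figures can be reproduced from the shader counts (a sketch; the clocks behind the rated numbers are assumptions here — roughly 2.5 GHz reproduces Navi 31's 61 TFLOPS, and roughly 2.25 GHz with Navi 21's 5,120 shaders, which lack dual-issue, reproduces 23 TFLOPS):

```python
# Reproducing the rated FP32 compute figures above.
def fp32_tflops(shaders: int, clock_ghz: float, dual_issue: bool = True) -> float:
    flops_per_clock = 2  # one fused multiply-add = 2 floating-point ops
    if dual_issue:
        flops_per_clock *= 2  # RDNA3 dual-issue doubles per-shader throughput
    return shaders * flops_per_clock * clock_ghz / 1000

print(f"{fp32_tflops(6144, 2.5):.1f} TFLOPS")                     # Navi 31: 61.4
print(f"{fp32_tflops(5120, 2.25, dual_issue=False):.1f} TFLOPS")  # Navi 21: 23.0
```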
AMD claims the Radeon RX 7900 XTX to offer a 70% performance increase over the RX 6950 XT.

The complete slide-deck follows.

336 Comments on AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

#76
ymdhis
I'll be waiting for the rest of the lineup, and some benchmarks, before I can judge this. It looks good on paper, I love the idea of using chiplets, especially for GPUs which are traditionally extremely huge. Could be a winner, but it's too early to tell.
Toss: Love it. No bells and whistles, just raw performance, without compromises, at a sane price.
These prices are not sane. I remember when I could buy a card of this caliber for less than a third of the price.
edit: didn't factor in the euro tax. That makes it less than a third, not just less than half.
#77
Ravenas
oxrufiioxo: They are so far ahead in mindshare they can afford to price much higher than AMD.

If AMD could price their cards the same as Nvidia, they would. They are not giving us a 7900 XTX at $1,000 out of the kindness of their hearts.

Look at their CPU division: the minute they caught up to Intel, prices went to $h1+.

The same will happen if they ever have overall parity with Nvidia on the GPU side of things.
AMD isn't competing in the "ultra high end".
#78
Calmmo
Napkin math says this $999 GPU kills the 4080 at its current price.
#79
btk2k2
mahoney: What's the reason for them not showing benchmarks vs the 4090, or at least the 3090 Ti? They really enjoyed showcasing the 6800 vs Nvidia GPUs.
If they use the 3090 Ti, people will ask why not the 4090. Given the 4090 has a 60% price premium, they are not really targeting the same market, and there is no 4080 16GB to compare against at a closer price point, so they just compared to their old flagship.
#80
Crackong
trsttte: Ahah, they already announced that a couple weeks ago, right after the first problems with the 12-pin connector appeared
The actual cards must have been designed well before the 12VHPWR problems surfaced.
I mean, the AMD PR team and the 3D animators caught up with the news rather quickly and decided to re-render the 3D cuts and PowerPoint slides to make this a selling point :)
I am impressed with their efficiency in catching up with the news.
#81
Zach_01
CallandorWoT: I think he is asking whether you can find those settings you just mentioned in the Radeon driver. So where do you find "adaptive AA w/ supersampling"? I'd like to know as well, for older games.
#82
EatingDirt
mahoney: What's the reason for them not showing benchmarks vs the 4090, or at least the 3090 Ti? They really enjoyed showcasing the 6800 vs Nvidia GPUs.
It's pretty obvious. The numbers they provided were 50-70% faster than their current gen. That makes the 7900 XTX around 0-20% slower (depending on the game) than the 4090 in pure rasterization. I expect they would be comparing it to the 4080, if it were out.

This was first and foremost a marketing presentation, and at this time there's just nothing from Nvidia in the $900-1,000 price range that makes for a good performance showcase of their GPUs.
#83
freeagent
Well... I might get one. I will just get in trouble... but I can always say I saved a thousand bucks by not going with Nvidia lol...
#84
AlwaysHope
Nice toy(s) to have, pity there aren't any AAA games atm that motivate me enough to upgrade.
Perhaps with Starfield next year.... but still too early to predict performance demands in that title.
#86
Upgrayedd
TheinsanegamerN: Because a graphics bell and whistle that tanks your framerate, while producing effects so minute that even in still frames people can't see the difference, is totally worthless to everyone but specwhores. To those of us that PLAY games, RT is functionally useless except for ambient occlusion, which can be done far more easily with shader tricks, without requiring the power of a nuclear sub to operate. The number of newer games coming out with impressive lighting effects that don't need any form of RT should indicate that RT, at least in its current form, will go down the same road as Hairworks and PhysX.
I mean there's plenty of arguments in there. SSR is far more jarring than RT reflections. I don't really see RT going anywhere. Both major players are heavily invested into RT now.
#87
wolf
Performance Enthusiast
Need @W1zzard's review already, I think we'll see a lot of variance vs the 4090 depending on the game tested.

Also keen to hear more about this FSR 3.0 FluidMotion with frame generation: what will it work on, how good is it, can we piggyback on games that have DLSS 3? So many questions.
#88
GunShot
So, basically, AMD just did a switch-a-roo naming-scheme here, in my opinion, huh?

Actually 7900xt = now a 7900xtx

Actually 7800xt = now a 7900xt

"It would look weird and it will expose us for charging $899 just for our 7800xt but, hey, let's do an NVIDIA but... a tad less... and our fans will defend us... Yeah!... the Board!" :laugh:
#90
Zubasa
thesmokingman: Nvidia price cuts... will they do it?
Not until Ampere stock runs out.
#91
toilet pepper
I just realized this: it has AV1 encoding, which you could use for OBS streaming. NVENC was the only other thing Nvidia had an upper hand on.
#92
THU31
medi01
This average includes games where the 4090 is CPU-bound even in 4K, so the XTX will be affected by that as well.

But it could be very close in AMD-optimized titles. It should definitely win performance per dollar (in rasterization). Performance per watt might be similar, as the 4090 is surprisingly efficient.
#93
Fluffmeister
Well played Nvidia, people now see $899 to $999 as great value.
#94
kapone32
freeagent: Well, that $999 card will be about $1,537 here.

That's a hard pass. Gonna have to go back to my other hobbies since I am being priced out of this one.
Don't say that. Hopefully AMD will be good to us Canadians and allow us to get it from their website for MSRP plus conversion. I don't want to see mining premiums applied to this card. I am going to Memory Express tomorrow to talk to my friend.
#95
Prima.Vera
Toss: Monitors will follow. Imagine buying an RTX 4090 and getting stuck with 4K 144 Hz forever.
My Overwatch 2 is running 600 fps at 4K, and sorry bruh, you have to play 144 Hz on your horsegarbo Nvidia. What is this.
Not sure if you're trolling, or if you're really serious with this total nonsense.
#96
Akkedie
So, AMD is even more hopelessly behind Nvidia in RT than last generation. Baffling! Guess I'll buy Nvidia for the first time.
thesmokingman: Nvidia price cuts... will they do it?
IMO the way they will do it is with a refresh lineup of Supers, just like they did for Turing.
#97
TheinsanegamerN
Fluffmeister: Well played Nvidia, people now see $899 to $999 as great value.
The GTX 690 was $1000 in 2012

The GTX 590 was $1000 in 2010

The 8800 Ultra was $830 in 2007, over $1,100 adjusted for inflation.

Halo cards hitting 4 figures is not a new thing.

In the context of halo cards, the 7900 XTX is a "great value", in that it is substantially cheaper than the 4090 while likely not being that much slower, if not occasionally faster.
#98
Fluffmeister
TheinsanegamerN: The GTX 690 was $1000 in 2012

The GTX 590 was $1000 in 2010

The 8800 Ultra was $830 in 2007, over $1,100 adjusted for inflation.

Halo cards hitting 4 figures is not a new thing.

In the context of halo cards, the 7900 XTX is a "great value", in that it is substantially cheaper than the 4090 while likely not being that much slower, if not occasionally faster.
Well indeed, yet back then something like the $699 GTX 1080 Ti was milking the market, and the $1,000 Titan was an abomination confirming their greed.

How times have changed.
#99
TheinsanegamerN
Fluffmeister: Well indeed, yet back then something like the $699 GTX 1080 Ti was milking the market, and the $1,000 Titan was an abomination confirming their greed.

How times have changed.
Honestly I think people have rose-colored glasses for the cheaper GPUs of the turn of the decade, often forgetting that the cheap hardware was a 1-2 punch of the Great Recession holding down prices of, well, everything, and a stagnating console market resulting in no need for high-end GPUs. Games were so held back by the 360/PS3 by 2010 that even an overclocked 550 Ti could outdo consoles in detail, resolution, and framerate, often by significant margins.
#100
Space Lynx
Astronaut
TheinsanegamerN: Honestly I think people have rose-colored glasses for the cheaper GPUs of the turn of the decade, often forgetting that the cheap hardware was a 1-2 punch of the Great Recession holding down prices of, well, everything, and a stagnating console market resulting in no need for high-end GPUs. Games were so held back by the 360/PS3 by 2010 that even an overclocked 550 Ti could outdo consoles in detail, resolution, and framerate, often by significant margins.
I think $899 is a fair price for the 7800 XT. I mean, it's literally a game changer for someone like me who is just sick of playing high-end AAA games at 1440p at 70 fps ish... when all we want is to play them at 165 fps 1440p. I know that sounds stupid as fuck, but I really do like 165 fps 165 Hz gaming, so eh, it is what it is. :toast:

I'm set for a long time now, assuming I can beat the bots and get one December 13th. Doubt if I can. :(