Thursday, November 3rd 2022

AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

AMD today announced the Radeon RX 7900 XTX and Radeon RX 7900 XT gaming graphics cards, debuting its next-generation RDNA3 graphics architecture. The two new cards come in at $999 and $899, squarely targeting the $1,000 high-end premium price point.
Both cards will be available on December 13th, and not only as the AMD reference design sold through AMD.com: custom-design variants from the many board partners arrive on the same day. AIBs are expected to announce their products in the coming weeks.

The RX 7900 XTX is priced at USD $999 and the RX 7900 XT at $899, a surprisingly small difference of only $100 for a performance gap that will certainly be larger, probably in the 20% range. Both the Radeon RX 7900 XTX and RX 7900 XT use the PCI-Express 4.0 interface; Gen 5 is not supported with this generation. The RX 7900 XTX has a typical board power of 355 W, or about 95 W less than that of the GeForce RTX 4090. The reference-design RX 7900 XTX uses conventional 8-pin PCIe power connectors, as will custom-design cards when they come out. AMD's board partners will create units with three 8-pin power connectors, for higher out-of-the-box performance and better OC potential. The decision not to use the 16-pin power connector that NVIDIA uses was made "well over a year ago", mostly because of cost, complexity, and the fact that these Radeons don't require that much power anyway.
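The connector choice is easy to sanity-check against the PCI-SIG limits of 75 W from the x16 slot and 150 W per 8-pin connector. A minimal back-of-the-envelope sketch in Python (the helper function is ours, purely illustrative):

```python
# Spec power budget vs. typical board power (back-of-the-envelope sketch).
PCIE_SLOT_W = 75    # a PCIe x16 slot delivers up to 75 W
EIGHT_PIN_W = 150   # each 8-pin PCIe connector is rated at 150 W

def power_headroom(num_8pin: int, typical_board_power: float) -> float:
    """Watts of in-spec headroom left for a given connector layout."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W - typical_board_power

print(power_headroom(2, 355))  # reference RX 7900 XTX: 20 W of headroom
print(power_headroom(3, 355))  # partner cards with 3x 8-pin: 170 W for OC
```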

The reference RX 7900-series board design has the same card height as the RX 6950 XT, but is just 1 cm longer, at 28.7 cm. It is also strictly 2.5 slots thick. There are some white illuminated elements, which are controllable through the same software as on the Radeon RX 6000 series. Both cards feature two DisplayPort 2.1 outputs, one HDMI 2.1a, and one USB-C.
This is AMD's first attempt at a gaming GPU made of chiplets (multiple logic dies on a multi-chip module). The company has built MCM GPUs in the past, but those have essentially been the GPU die surrounded by HBM stacks. The new "Navi 31" GPU at the heart of the RX 7900 XTX and RX 7900 XT features seven chiplets—a central large graphics compute die (GCD), surrounded by six memory control-cache dies (MCDs). The GCD is built on the TSMC 5 nm EUV silicon fabrication process—the same one on which AMD builds its "Zen 4" CCDs—while the MCDs are each fabricated on the TSMC 6 nm process.

The GCD contains the GPU's main graphics rendering machinery, including the front-end, the RDNA3 compute units, the Ray Accelerators, the display controllers, the media engine, and the render backends. The GCD physically features 96 RDNA3 Unified Compute Units (CUs), for 6,144 stream processors. All 96 of these are enabled on the RX 7900 XTX. The RX 7900 XT has 84 of the 96 compute units enabled, which works out to 5,376 stream processors. The new RDNA3 compute unit introduces dual-issue stream processors, which essentially double per-shader throughput generation-over-generation. This is a VLIW-like approach; AMD does not double the rated shader count, though, so the full GPU is rated at 6,144 (96 CUs x 64 shaders per CU, not 128 shaders per CU).
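As a quick illustration of that arithmetic (a sketch; the function name is ours):

```python
SHADERS_PER_CU = 64  # AMD's rated figure; dual-issue does not double this

def stream_processors(enabled_cus: int) -> int:
    """Rated stream-processor count from the number of enabled CUs."""
    return enabled_cus * SHADERS_PER_CU

print(stream_processors(96))  # RX 7900 XTX: 6144
print(stream_processors(84))  # RX 7900 XT:  5376
# Counting each dual-issue ALU pair as two shaders would yield 12288,
# but that is not how AMD rates the GPU.
```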

Each of the six MCDs contains a 64-bit wide GDDR6 memory interface and 16 MB of Infinity Cache memory. The six MCDs add up to the GPU's 384-bit wide memory interface and 96 MB of total Infinity Cache memory. The GCD addresses the 384-bit wide memory interface as one contiguous addressable block, not as 6x 64-bit. Most modern GPUs of the past decade have had multiple on-die memory controllers making up a larger memory interface; "Navi 31" moves these to separate chiplets. This approach reduces the size of the main GCD tile, which will help with yield rates. The Radeon RX 7900 XTX is configured with 24 GB of GDDR6 memory across the chip's entire 384-bit wide memory bus, while the RX 7900 XT gets 20 GB of GDDR6 memory across a 320-bit wide memory bus (one of the MCDs is disabled). The disabled MCD isn't "missing" from the package; dummy silicon dies sit in its place to provide stability for the cooler mounting.
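The memory arithmetic falls straight out of the chiplet layout. Here is a minimal sketch, assuming one 2 GB (16 Gbit) GDDR6 chip per 32-bit channel, which the 24 GB and 20 GB figures imply:

```python
BITS_PER_MCD = 64         # each MCD carries a 64-bit GDDR6 interface
CACHE_PER_MCD_MB = 16     # ...and 16 MB of Infinity Cache
GB_PER_32BIT_CHANNEL = 2  # one 16 Gbit (2 GB) GDDR6 chip per 32-bit channel

def memory_config(active_mcds: int) -> tuple[int, int, int]:
    """(bus width in bits, Infinity Cache in MB, VRAM in GB) for n MCDs."""
    bus = active_mcds * BITS_PER_MCD
    return bus, active_mcds * CACHE_PER_MCD_MB, (bus // 32) * GB_PER_32BIT_CHANNEL

print(memory_config(6))  # RX 7900 XTX: (384, 96, 24)
print(memory_config(5))  # RX 7900 XT:  (320, 80, 20) -- one MCD disabled
```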

Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over running the same math on the SIMD units, and a second-generation RT accelerator that provides new dedicated instructions and a 50% uplift in ray tracing performance. The AI cores are not exposed through software; developers cannot use them directly (unlike NVIDIA's Tensor Cores), as they are used exclusively by the GPU's internal engines. Later today, AMD will give us a more technical breakdown of the RDNA3 architecture.
For the RX 7900 XTX, AMD is broadly claiming an up to 70% increase in traditional raster 3D graphics performance over the previous-generation flagship RX 6950 XT at native 4K Ultra HD resolution, and an up to 60% increase in ray tracing performance. These gains should be good enough to catch the RTX 4080, but AMD was clear that it is not targeting RTX 4090 performance, which comes at a much higher price point, too.
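To put those percentages in concrete terms, here is a hedged sketch that applies AMD's quoted "up to" uplifts to a purely hypothetical RX 6950 XT baseline (the 100 FPS figure is illustrative, not from AMD's slides):

```python
def project_fps(baseline_fps: float, claimed_uplift: float) -> float:
    """Apply a claimed fractional uplift to a baseline frame rate."""
    return baseline_fps * (1.0 + claimed_uplift)

baseline = 100.0                    # hypothetical 4K figure on a 6950 XT
print(project_fps(baseline, 0.70))  # raster, best case:      170 FPS
print(project_fps(baseline, 0.60))  # ray tracing, best case: 160 FPS
```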
AMD is attributing its big 54% performance/Watt generational gains to a revolutionary asynchronous clock domain technology that runs the various components on the GCD at different frequencies, to minimize power draw. This seems similar in concept to the "shader clock" on some older NVIDIA architectures.
AMD also announced FSR 3.0, the latest generation of its performance-enhancement technology, featuring Fluid Motion technology. This is functionally similar to DLSS 3 Frame Generation, promising a 100% uplift in performance at comparable quality, essentially because the GPU generates every alternate frame without involving its graphics rendering pipeline.
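In frame-rate terms the promise is simple: one generated frame per rendered frame roughly doubles the presented rate. A minimal sketch, with the caveat that FSR 3's exact behavior is unannounced; the DLSS 3-style one-to-one ratio is an assumption based on the "100% uplift" claim:

```python
def presented_fps(rendered_fps: float, generated_per_rendered: int = 1) -> float:
    """Display-side frame rate when each rendered frame is followed by
    n generated frames. Note: interpolation adds latency, since the GPU
    must hold back a rendered frame to interpolate toward it."""
    return rendered_fps * (1 + generated_per_rendered)

print(presented_fps(60))  # 60 rendered FPS -> ~120 presented FPS
```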
The new dual independent media-acceleration engines enable simultaneous encode and decode for the AVC and HEVC formats, hardware-accelerated encode and decode for AV1, and AI-accelerated enhancements. The new AMD Radiance Display Engine introduces native support for DisplayPort 2.1, with 54 Gbps of display link bandwidth and 12 bpc color. This enables resolutions of up to 8K @ 165 Hz or 4K @ 480 Hz over a single cable.
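Those refresh rates only fit into 54 Gbps with Display Stream Compression. A quick sanity check of the uncompressed requirement (RGB, blanking overhead ignored for simplicity):

```python
LINK_GBPS = 54  # DisplayPort 2.1 UHBR13.5 raw link rate (4 lanes x 13.5 Gbps)

def uncompressed_gbps(h: int, v: int, hz: int, bpc: int) -> float:
    """Raw RGB video bandwidth in Gbps, ignoring blanking overhead."""
    return h * v * hz * bpc * 3 / 1e9

for mode in ((7680, 4320, 165, 12), (3840, 2160, 480, 12)):
    need = uncompressed_gbps(*mode)
    print(mode, f"{need:.0f} Gbps raw", "-> DSC required" if need > LINK_GBPS else "")
# 8K165 at 12 bpc: ~197 Gbps raw; 4K480: ~143 Gbps raw -> both rely on DSC
```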
The "Navi 31" GPU in its full configuration has a raw compute throughput of 61 TFLOPS, compared to 23 TFLOPS for the RDNA2-based "Navi 21" (a 165% increase). The shaders and the front-end of the GPU operate at different clock speeds, with the shaders running at up to 2.30 GHz and the front-end at up to 2.50 GHz. This decoupling has a big impact on power savings, with AMD claiming a 25% power saving as opposed to running both domains at the same 2.50 GHz clock.
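The headline TFLOPS numbers can be reconstructed from shader count, clock, and issue width (an FMA counts as two FLOPs; we assume the ~2.5 GHz boost clock for the rated figure, and the 6900 XT's 5,120 shaders at 2.25 GHz for "Navi 21"):

```python
def tflops(shaders: int, boost_ghz: float, issue_width: int = 1) -> float:
    """Peak FP32 throughput: shaders x clock x 2 (FMA) x issue width."""
    return shaders * boost_ghz * 2 * issue_width / 1000

navi31 = tflops(6144, 2.5, issue_width=2)  # dual-issue RDNA3: ~61.4 TFLOPS
navi21 = tflops(5120, 2.25)                # RDNA2 "Navi 21":  ~23.0 TFLOPS
print(f"{navi31:.1f} vs {navi21:.1f}: +{(navi31 / navi21 - 1):.0%}")
# -> 61.4 vs 23.0: +167%, in line with the ~165% quoted above
```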
AMD claims the Radeon RX 7900 XTX offers up to a 70% performance increase over the RX 6950 XT.

The complete slide-deck follows.

336 Comments on AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

#126
nguyen
Let me take a guess here, 7900XTX will be:

5-10% faster than 4080 in Rasterization
25% slower than 4080 in RayTracing

1000usd seems like a fair price
#127
Garrus
nguyen: Let me take a guess here, 7900XTX will be:

5-10% faster than 4080 in Rasterization
25% slower than 4080 in RayTracing

1000usd seems like a fair price
Nobody can help you; you can believe whatever you want. AMD showed off at least 50 percent faster performance than the 6950 XT today. That is not 10 percent faster than the 4080, the card that is less than 60 percent of a 4090. It could be 30 percent faster than it.
Hxx: There is no Ada Lovelace competitor released yet (i.e. a 4080). AMD can't run benchmarks against it because they don't have one, and against a 4090 it won't make sense to show graphs of the 4090 beating their cards in most games. The most logical move is to compare against their own previous gen.
Besides, these reveals are meant to create FOMO. Their thinking is "how do I make these cards look their best without running into legal issues?", not so much "how do they stack up against all other options?". Even Nvidia, when they unveiled the 40 series, showed some meaningless graphs where the 4090 was 4x faster, lol.
Exactly, there is no point in showing comparisons against a card that costs 60 percent more. They need to wait for the RTX 4080 to release before they can put out marketing materials showing comparisons.
GunShot: For what, and going off what, though? Every function/feature of the 4090 was available at full launch. AMD came half-cocked like always, e.g. FSR 3.0's ETA is sometime in 2023 (WHAT?!), and the AI cores(?), uhm... are not dev/app controlled like Tensor cores (just GPU-engine controlled, so, half AI?). The nerfed titles in favour of AMD's future promoted games showed hardly anything performance-wise (btw, why no vids, TPU?). Too many unknowns for RDNA3, and that shows insecurity; NVIDIA, and Intel, can smell fear.

AMD didn't show anything solid. Just a PowerPoint show, with tiny gaming vids with no real performance metrics, nothing lengthy anyway.
They showed charts claiming 50-70 percent faster performance. What more do you want? "Didn't show anything solid"... um, what?
GunShot: I think the pricing, right with the 4080 16GB, will make this another 3080 Ti vs. 6900, because at the end of the day NVIDIA comes with the full package of features (insane RT numbers, etc.) and is obviously better in neural networking prowess.

The only thing that irks me about my Strix 4090 (birthday gift from the wife and kids) is the lack of DP 2.1.
The 4080 is the most cut-down '80 card released in the last decade. The 3080 was the least cut-down. I think you will be shocked at how bad the 4080 really is compared to your 3080-set expectations.
#128
Sabotaged_Enigma
btarunr: 320-bit wide memory bus (two of the MCDs are disabled). The two disabled MCDs are not "missing"
@btarunr Is this correct? I've calculated it many times; only one MCD is disabled...
#129
wheresmycar
looks promising!!

But nope, I'm not paying a dime over $800 for my next GPU upgrade. I still consider $500 a lot of money for a high-performance GPU, and already giving in to spending $800 for my next upgrade has left me a little displeased. I seriously thought that post-COVID we'd by now be closer to home, at $800 for some of the top-performing newer-gen 2022 cards... or are we still waiting for a double dose + booster vaccine for the GPU pandemic?

So no 7800** release for the close of 2022? If correct, that's a little disappointing. Or should I keep my hopes up for the end-2022 upgrade plan?
#130
nguyen
wheresmycar: looks promising!!

But nope, I'm not paying a dime over $800 for my next GPU upgrade. I still consider $500 a lot of money for a high-performance GPU, and already giving in to spending $800 for my next upgrade has left me a little displeased. I seriously thought that post-COVID we'd by now be closer to home, at $800 for some of the top-performing newer-gen 2022 cards... or are we still waiting for a double dose + booster vaccine for the GPU pandemic?

So no 7800** release for the close of 2022? If correct, that's a little disappointing. Or should I keep my hopes up for the end-2022 upgrade plan?
Might as well grab a 6800 XT for 550 USD now if you don't care about RT, upscaling, video encoding, etc...
#131
Hyderz
Still a bit pricey, but hey, better than Nvidia's pricing... looking forward to your reviews.
#132
MarsM4N
Knew the presentation would be full of sidekicks, lol. :laugh: $999/$899 & 355 W/300 W sounds really good for the top dogs.
Somehow I have a feeling they will shred the RTX 4080 (16GB), esp. at that price. Wouldn't be surprised if it gets "unlaunched" too, LMAO.

TheLostSwede: What's with the three copper-coloured "blades" in the heatsink?
In some pictures the three fins are red, in others more copper-ish. Maybe it's a distinguishing feature between the XT & XTX.
But one thing is for sure, the three coloured fins stand for "RDNA3". ;)
#133
Chry
Following the pricing/performance scheme, it looks like the 7700 will be a very good, affordable choice for my medium-sized needs. Looking forward to it in a few months.
#134
maxfly
Reviews will be exciting for sure! Can't wait to see how these clock. Not that I'll partake, but I like to live vicariously through others, hehe.
#135
Tsukiyomi91
so far, Intel and AMD have placed the new DP 2.1 on their GPUs, while NoVideo puts an old DP 1.4 on its newest GPU and charges a premium for it.
#136
Vayra86
nguyen: Let me take a guess here, 7900XTX will be:

5-10% faster than 4080 in Rasterization
25% slower than 4080 in RayTracing

1000usd seems like a fair price
Let me take a guess here. You've been wearing green for too long, clouding your math skills & vision.
But I take the below with a bag of salt too. Still, salt included, let's consider they end up 20% below what's shown here. At $999, Nvidia is toast... or molten toast, depending on your power delivery.

So they lose 25% in RT; even including that, AND -10~20% overall raster perf, they're massively competitive. It's just a much better deal throughout, and the performance is there regardless. Also, as others mentioned... DP 1.4 on the "most expensive card" is unforgivable.

www.techpowerup.com/forums/threads/amd-radeon-rx-7900-xtx-performance-claims-extrapolated-performs-within-striking-distance-of-rtx-4090.300648/

The only thing you got right in your post here is making the more sensible comparison with the 4080; but that card is so far below the 4090, it just doesn't even compete with the 7900 XTX, and barely touches the $899 XT. Nvidia better keep swinging those Ampere cards, because Ada right now is DOA from top to bottom, in my view. They'll need an aggressive price strategy to keep it afloat, not the current MSRPs.



#137
Chomiq
Now let's see the actual performance numbers.
#138
nguyen
Vayra86: Let me take a guess here. You've been wearing green for too long, clouding your math skills & vision.
But I take the below with a bag of salt too. Still, salt included, let's consider they end up 20% below what's shown here. At $999, Nvidia is toast... or molten toast, depending on your power delivery.

So they lose 25% in RT; even including that, AND -10~20% overall raster perf, they're massively competitive. It's just a much better deal throughout, and the performance is there regardless. Also, as others mentioned... DP 1.4 on the "most expensive card" is unforgivable.

www.techpowerup.com/forums/threads/amd-radeon-rx-7900-xtx-performance-claims-extrapolated-performs-within-striking-distance-of-rtx-4090.300648/
From the TPU review
#139
Luminescent
The clues are all there: the transistor count is much lower compared to Nvidia, they're still playing catch-up in technologies like upscaling and ray tracing, and the price says it all.
In a game like Cyberpunk with ray tracing and upscaling enabled, I expect AMD to lose big time; Nvidia is already making a cut-down RTX 4090 to slightly beat the 7900 XTX.
But does it matter? Most people won't buy these absurdly huge, power-hungry cards; what matters is what they do in the low-end and mainstream segments.
#140
Gungar
GunShot: So, basically, AMD just did a switch-a-roo naming scheme here, in my opinion, huh?

Actually 7900xt = now a 7900xtx

Actually 7800xt = now a 7900xt

"It would look weird and it will expose us for charging $899 just for our 7800xt but, hey, let's do an NVIDIA but... a tad less... and our fans will defend us... Yeah!... the Board!" :laugh:
Not at all. The 6900 XT has a bus width of 320, like the standard 7900 XT. The 7900 XTX has a bigger bus width and A LOT more cores than the previous gen. They could have legitimately used a bigger model number just for that.
#141
Bzuco
What is the correct number of shader units? 6,144 according to the official AMD site, or 12,288?
#142
AusWolf
It looks like AMD is following Nvidia with the top-down release. It makes sense as RDNA2 still has plenty of horsepower in the middle and low segments. It's a shame I don't care about flagship products, although $999 sounds a lot better than Nvidia's $1600. It really makes me wonder where the 7700 and 7600 series will land in specs, performance and price.
#143
mahoney
EatingDirt: It's pretty obvious. The numbers they provided were 50-70% faster than their current gen. That makes the 7900 XTX around 0-20% slower than the 4090 (depending on the game) in pure rasterization. I expect they would be comparing it to the 4080, if it were out.

This was first and foremost a marketing presentation, and at this time there's just nothing from Nvidia in the $900-1000 price range that makes for a good showcase of their GPUs' performance.
But they could have still compared it to the previous-gen Nvidia. Last gen they compared the 2080 Ti vs. the 6800. Seems like they're hiding the performance. I've never seen them be so secretive about GPU performance since the days of Polaris, when they CrossFired them vs. the GTX 1080.
btk2k2: If they use the 3090 Ti, people will ask why not use the 4090. Given the 4090 has a 60% price premium, they are not really targeting the same market, and there is no 4080 16GB to compare against at a closer price point, so they just compared to their old flagship.
Come on. They used a 3090 last gen to compare against their flagship 6900 XT, but they couldn't do it now? Everything about their "up to" fps figures looked shady as fuck.
#144
AusWolf
mahoney: But they could have still compared it to the previous-gen Nvidia. Last gen they compared the 6800 to the 2080 Ti. Seems like they're hiding the performance. I've never seen them be so secretive about GPU performance since the days of Polaris, when they CrossFired them vs. the GTX 1080.


Come on. They used the 2080 Ti for comparison two years ago, but they couldn't do it now? Everything about their "up to" fps figures looked shady as fuck.
To be honest, I prefer straight-up specs and performance numbers instead of comparisons with the competition during a product launch. Comparisons easily end up as dirty shit-talk about the competition, like Intel demonstrated a couple of years ago. It's just disingenuous. Talking about your own product shows more confidence. Product launches are nothing more than teasers anyway. As someone whose job involves holding presentations, I think AMD did a superb job here.

Edit: Link fixed.
#145
mahoney
AusWolf: To be honest, I prefer straight-up specs and performance numbers instead of comparisons with the competition during a product launch. Comparisons easily end up as dirty shit-talk about the competition, like Intel demonstrated a couple of years ago. It's just disingenuous. Talking about your own product shows more confidence.
And the way they did it now isn't? It looks like they're hiding the actual performance against Nvidia's flagship, which probably means the 4090 could be way faster than anyone seems to realize. And the way they started doing those product advertisements for the DisplayPort cable, and later on for the monitor... like, Jesus Christ, did none of your alarm bells go off?
#146
Denver
GunShot: For what, and going off what, though? Every function/feature of the 4090 was available at full launch. AMD came half-cocked like always, e.g. FSR 3.0's ETA is sometime in 2023 (WHAT?!), and the AI cores(?), uhm... are not dev/app controlled like Tensor cores (just GPU-engine controlled, so, half AI?). The nerfed titles in favour of AMD's future promoted games showed hardly anything performance-wise (btw, why no vids, TPU?). Too many unknowns for RDNA3, and that shows insecurity; NVIDIA, and Intel, can smell fear.

AMD didn't show anything solid. Just a PowerPoint show, with tiny gaming vids with no real performance metrics, nothing lengthy anyway.
We don't need that. Nvidia can keep its fake frames and other disposable features, like its self-destruct function.

AMD just delivered what most of us wanted: a card with a lot of performance at a reasonable price. Have a nice day.
#147
medi01
GunShot: better in neural networking prowess*.
And also at opening Chakras*

*Your prowess in Chakra opening needs to be as terrible as your neural networking prowess for you not to chuckle at both of these statements.
uftfa: It won't. AMD's own performance-difference claims relative to the 6950 XT put the 7900 XTX at roughly 10% below the 4090.
The 6950 XT beats last gen's best from NV in a number of titles.
The 7900 XTX is very likely to beat the 4090 in those, despite being a way smaller chip with a way more modest power budget.
#148
mahoney
medi01: And also at opening Chakras*

*Your prowess in Chakra opening needs to be as terrible as your neural networking prowess for you not to chuckle at both of these statements.


The 6950 XT beats last gen's best from NV in a number of titles.
The 7900 XTX is very likely to beat the 4090 in those, despite being a way smaller chip with a way more modest power budget.
If that were the case, they'd have shown the cherry-picked benchmarks with their sponsored games. But they didn't.
The same games where the 6950 XT beat the previous-gen Nvidia flagships have all been AMD-sponsored titles.
#149
medi01
nguyen: RT, upscaling, video encoding, etc...
"etc"
That anti-lag thing too, right? :D
"AMD's answer to NV's (answer to AMD's Radeon Anti-Lag)"

:D

All NV has going for it is the RT.
And enabling RT in Cyberpunk at 4K gives you what?
41 (!!!) fps!
That's a lot, ain't it?

Future proofing and all.

Hordes of "RT is mandatory" games are just around the corner, given how "nicely" a €2300+ GPU runs a last-gen AAA game.
Absolutely!
"Believe"... :roll:
mahoney: they'd have shown the cherry-picked benchmarks
They have not done that before; they list out a balanced set of games.
Why would they start now? Did that leather-jacket "2-4 times faster" guy bite them?
mahoney: AMD's sponsored titles.
Have you even checked the link, cough? GOW is "AMD sponsored"?

As if it mattered whether they are sponsored or not, anyhow.
#150
mahoney
Wow, you are on some copium, huh?


Every time AMD was confident in their GPU performance, they showcased it vs. Nvidia's.
Like, for instance, the 6000 series:
the 6900 XT they compared vs. the 3090, and even used their SAM and Rage Mode.


The 6800 was compared to the 2080 Ti, again with SAM.


Seems like you haven't noticed yet that almost every console port works better on AMD GPUs.