Thursday, November 3rd 2022

AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

AMD today announced the Radeon RX 7900 XTX and Radeon RX 7900 XT gaming graphics cards, debuting its next-generation RDNA3 graphics architecture. The two new cards come in at $999 and $899—basically targeting the $1,000 high-end premium price point.
Both cards will be available on December 13th—not only the AMD reference design, which is sold through AMD.com, but also custom-design variants from the many board partners on the same day. AIBs are expected to announce their products in the coming weeks.

The RX 7900 XTX is priced at USD $999 and the RX 7900 XT at $899—a surprisingly small difference of only $100 for a performance gap that will certainly be larger, probably in the 20% range. Both the Radeon RX 7900 XTX and RX 7900 XT use the PCI-Express 4.0 interface; Gen 5 is not supported with this generation. The RX 7900 XTX has a typical board power of 355 W, or about 95 W less than that of the GeForce RTX 4090. The reference-design RX 7900 XTX uses two conventional 8-pin PCIe power connectors, as will custom-design cards when they come out. AMD's board partners will create units with three 8-pin power connectors, for higher out-of-the-box performance and better OC potential. The decision not to use the 16-pin power connector that NVIDIA uses was made "well over a year ago", mostly because of cost, complexity, and the fact that these Radeons don't require that much power anyway.

The reference RX 7900-series board design has the same card height as the RX 6950 XT, but is just 1 cm longer, at 28.7 cm. It is also strictly 2.5 slots thick. There are white illuminated elements, which are controllable using the same software as on the Radeon RX 6000 series. Both cards feature two DisplayPort 2.1 outputs, one HDMI 2.1a port and one USB-C port.
This is AMD's first attempt at a gaming GPU made of chiplets (multiple logic dies on a multi-chip module). The company has built MCM GPUs in the past, but those were essentially a GPU die surrounded by HBM stacks. The new "Navi 31" GPU at the heart of the RX 7900 XTX and RX 7900 XT features seven chiplets—a central large graphics compute die (GCD) surrounded by six memory control-cache dies (MCDs). The GCD is built on the TSMC 5 nm EUV silicon fabrication process—the same one on which AMD builds its "Zen 4" CCDs—while the MCDs are each fabricated on the TSMC 6 nm process.

The GCD contains the GPU's main graphics rendering machinery, including the front-end, the RDNA3 compute units, the Ray Accelerators, the display controllers, the media engine and the render backends. The GCD physically features 96 RDNA3 Unified Compute Units (CUs), for 6,144 stream processors. All 96 of these are enabled on the RX 7900 XTX. The RX 7900 XT has 84 of the 96 compute units enabled, which works out to 5,376 stream processors. The new RDNA3 compute unit introduces dual-issue stream processors, which essentially double their throughput generation-over-generation. This is a VLIW-like approach; AMD does not double the rated shader count, though, so the full GPU is rated at 6,144 stream processors (96 CU x 64 shaders per CU, not 128 shaders per CU).
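The quoted shader counts, and the 61 TFLOPS figure that appears later in this article, fall out of a little arithmetic (a sketch; the 2.5 GHz clock and the FMA-counts-as-two-ops convention are assumptions chosen to be consistent with the figures quoted here):

```python
def stream_processors(cus: int, sp_per_cu: int = 64) -> int:
    """AMD's rated shader count: 64 SPs per CU; dual-issue is NOT double-counted."""
    return cus * sp_per_cu

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32: shaders x 2 (dual-issue) x 2 (FMA = two ops) x clock."""
    return shaders * 2 * 2 * clock_ghz / 1000

print(stream_processors(96))  # full Navi 31 / RX 7900 XTX: 6144
print(stream_processors(84))  # RX 7900 XT, 84 of 96 CUs:   5376
print(round(fp32_tflops(stream_processors(96), 2.5)))  # ~61 TFLOPS
```

Note that counting the dual-issue capability in the FLOPS figure but not in the shader count is exactly the convention described above.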

Each of the six MCDs contains a 64-bit wide GDDR6 memory interface and 16 MB of Infinity Cache memory. The six MCDs together add up to the GPU's 384-bit wide memory interface and 96 MB of total Infinity Cache. The GCD addresses the 384-bit memory interface as one contiguous block, not as 6x 64-bit segments. Most modern GPUs of the past decade have had multiple on-die memory controllers making up a larger memory interface; "Navi 31" moves these to separate chiplets. This approach reduces the size of the main GCD tile, which will help with yield rates. The Radeon RX 7900 XTX is configured with 24 GB of GDDR6 memory across the chip's entire 384-bit wide memory bus, while the RX 7900 XT gets 20 GB of GDDR6 memory across a 320-bit wide memory bus (one of the MCDs is disabled). The disabled MCD isn't physically missing; dummy silicon dies sit in its place to provide stability for the cooler mounting.
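The chiplet math above can be sketched as follows (the 20 Gbps per-pin GDDR6 data rate is an assumption; the announcement specifies only bus widths and cache sizes):

```python
def memory_config(active_mcds: int, gbps_per_pin: float = 20.0):
    """Each MCD contributes a 64-bit GDDR6 interface and 16 MB Infinity Cache.

    Returns (bus width in bits, Infinity Cache in MB, bandwidth in GB/s).
    """
    bus_bits = active_mcds * 64
    cache_mb = active_mcds * 16
    bandwidth_gbs = bus_bits / 8 * gbps_per_pin
    return bus_bits, cache_mb, bandwidth_gbs

print(memory_config(6))  # RX 7900 XTX, all 6 MCDs: (384, 96, 960.0)
print(memory_config(5))  # RX 7900 XT, 5 active:    (320, 80, 800.0)
```

The 5-MCD configuration also implies 80 MB of Infinity Cache on the RX 7900 XT, since each disabled MCD takes its 16 MB cache slice with it.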

Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over plain SIMD, and a second-generation RT accelerator that provides new dedicated instructions and a 50% uplift in ray tracing performance. The AI cores are not exposed through software; developers cannot use them directly (unlike NVIDIA's Tensor Cores), as they are used exclusively by the GPU's internal engines. Later today AMD will give us a more technical breakdown of the RDNA3 architecture.
For the RX 7900 XTX, AMD broadly claims an up to 70% increase in traditional raster 3D graphics performance over the previous-generation flagship RX 6950 XT at 4K Ultra HD native resolution, and an up to 60% increase in ray tracing performance. These gains should be good enough to catch the RTX 4080, but AMD was clear that it is not targeting RTX 4090 performance, which comes at a much higher price point, too.
AMD attributes its big 54% performance-per-Watt generational gain to an asynchronous clock-domain technology that runs the various components on the GCD at different frequencies to minimize power draw. This seems similar in concept to the "shader clock" on some older NVIDIA architectures.
AMD also announced FSR 3.0, the latest generation of its performance-enhancement technology, featuring Fluid Motion technology. This is functionally similar to DLSS 3 Frame Generation, promising up to a 100% uplift in performance at comparable quality—essentially because the GPU generates every alternate frame without involving its graphics rendering pipeline.
The new dual independent media-acceleration engines enable simultaneous encode and decode for the AVC and HEVC formats, hardware-accelerated encode and decode for AV1, and AI-accelerated enhancements. The new AMD Radiance Display Engine introduces native support for DisplayPort 2.1, with 54 Gbps of display link bandwidth and 12 bpc color. This enables resolutions of up to 8K @ 165 Hz or 4K @ 480 Hz over a single cable.
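A back-of-the-envelope check (a sketch that ignores blanking overhead; the roughly 3:1 DSC ratio is an assumption) shows that these headline modes still rely on Display Stream Compression even with the 54 Gbps link:

```python
def raw_gbps(width: int, height: int, hz: int, bpc: int = 12, channels: int = 3) -> float:
    """Uncompressed video bandwidth in Gbps, ignoring blanking intervals."""
    return width * height * hz * bpc * channels / 1e9

uhd_480 = raw_gbps(3840, 2160, 480)     # 4K @ 480 Hz, 12 bpc
eightk_165 = raw_gbps(7680, 4320, 165)  # 8K @ 165 Hz, 12 bpc

print(round(uhd_480))      # ~143 Gbps uncompressed
print(round(eightk_165))   # ~197 Gbps uncompressed
# Both far exceed the 54 Gbps link, so DSC (visually lossless,
# typically around 3:1) does the rest of the work:
print(round(uhd_480 / 3))  # ~48 Gbps with 3:1 DSC, fits in 54 Gbps
```

In other words, the jump from HDMI 2.1-class link rates matters, but the quoted refresh rates are a combination of the faster link and compression, not raw bandwidth alone.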
The "Navi 31" GPU in its full configuration has a raw compute throughput of 61 TFLOPs, compared to 23 TFLOPs of the RDNA2-based Navi 21 (a 165% increase). The shader and front-end of the GPU operate at different clock speeds, with the shaders running at up to 2.30 GHz, and the front-end at up to 2.50 GHz. This decoupling has a big impact on power-savings, with AMD claiming a 25% power-saving as opposed to running both domains at the same 2.50 GHz clock.
AMD claims the Radeon RX 7900 XTX to offer a 70% performance increase over the RX 6950 XT.

The complete slide-deck follows.

336 Comments on AMD Announces the $999 Radeon RX 7900 XTX and $899 RX 7900 XT, 5nm RDNA3, DisplayPort 2.1, FSR 3.0 FluidMotion

#276
Vayra86
birdie- HDMI 2.1 is still there. No DP 2.1 monitors are on the market just yet; they will be released in 2023 at the earliest. DSC is still there with no visible difference.
- Such heavy GPUs have existed before. I've not heard of any mass-reports about any issues related to its weight.
- It's not. The FE edition runs around 67C.
- Capped to ~315W (70% of TDP) it loses less than 5% of performance.
- The 90 series cards have always been super expensive. They are not for the average gamer.
- I don't give a .... about its UI. It works. At the same time, I get lost in AMD's UI. Which tab do I need? Where is the option I'm looking for? Where's digital vibrance? People have been asking AMD for this option for years. Competitive CS:GO players don't touch AMD cards because of that.
- EVGA, what? Who the .... cares? 95% of the world have never seen EVGA cards.
- Out of literally tens of thousands of sold cards, fewer than a few dozen people have reported issues. And it looks likely all of them have done something wrong, including bending the cable too much or not properly inserting the adapter. Again, a card for the rich or professionals.

Literally not a single argument.

It's a free market. AMD is all yours. Remember the Radeon 9800. Should I remind you of its release price? It was $400. Corporations are not your friend, even though you want to love AMD so much.
I'm inclined to agree here.

The 4090 isn't a bad product. It's just pushed too far in every metric: size/dimensions, wattage, price, and marketing. It's just another gen with just another Nvidia card on top of the stack, while the price moves further away from reality every generation post-Pascal. It's literally overinflated, and it's also literally a better product in every way once you dial it back a little. Remarkably similar to Intel's offerings: here we have two companies with a similar grasp on their markets over their direct competition, using the same approach to keep generational leaps 'intact'.

Meanwhile, we've seen new Ryzen chips with similar but still more conservative ideas and more sensible limitations. Even if 95C is similarly uncanny, it's still less, as is peak TDP, as is the tuning out of the box. And the GPU now confirms a similar approach from AMD.

The trend is there... but the approaches still differ as do the results. Time will tell what was the best way forward...
#278
Vayra86
ARFMeanwhile: Native ATX 3.0 16-Pin Cable Melts Too When Connected To An NVIDIA GeForce RTX 4090 (wccftech.com)

:kookoo:
You can keep your wccftech where the sun doesn't shine, don't even remotely try to make an argument based on that source, nor YT, with me. All I will point out is your own sheep mentality, scroll a few pages back for proof. Or zoom in on my avatar and try to consider what it tries to convey.

Your laugh smilies also don't suit you nor your responses. No need to make a fool of yourself.
#279
ARF
Vayra86You can keep your wccftech where the sun doesn't shine, don't even remotely try to make an argument based on that source, nor YT, with me. All I will point out is your own sheep mentality, scroll a few pages back for proof.
RTX 4090 is not a bad product, it's an awful product which should not exist at all. :D
The NVIDIA GeForce RTX 4090 is the newest graphics card from the company. Its physical size and power were said to be unmatched. However, since its release, the graphics card has been reported to overheat the connection, melting the connection port and the cable. A recent post on Reddit now shows that the native ATX 3.0 power supply using the 12VHPWR power connector is now having the same melting issues.
:D
#280
kapone32
Vayra86I'm inclined to agree here.

The 4090 isn't a bad product. It's just pushed too far in every metric: size/dimensions, wattage, price, and marketing. It's just another gen with just another Nvidia card on top of the stack, while the price moves further away from reality every generation post-Pascal. It's literally overinflated, and it's also literally a better product in every way once you dial it back a little. Remarkably similar to Intel's offerings: here we have two companies with a similar grasp on their markets over their direct competition, using the same approach to keep generational leaps 'intact'.

Meanwhile, we've seen new Ryzen chips with similar but still more conservative ideas and more sensible limitations. Even if 95C is similarly uncanny, it's still less, as is peak TDP, as is the tuning out of the box. And the GPU now confirms a similar approach from AMD.
I watched der8auer load a VBIOS that allowed the 4090 to draw 1,000 W at 1.35 V. It's insane for a GPU to be able to pull that much power, but speaking of performance increases, it was underwhelming outside of benchmark scores.
#281
Vayra86
ARFRTX 4090 is not a bad product, it's an awful product which should not exist at all. :D



:D
Right, let's go back to where we were: you overinflated the negatives, and I'm bringing some nuance to that comparison by saying it was pushed too far. I'm fully aware of the 12VHPWR issue; but it's like Intel CPUs: a power-limited chip makes for a highly performant piece of silicon with great efficiency.

What are you looking for exactly, I don't get it.
kapone32I watched der8auer load a VBIOS that allowed the 4090 to draw 1,000 W at 1.35 V. It's insane for a GPU to be able to pull that much power, but speaking of performance increases, it was underwhelming outside of benchmark scores.
Yeah, it's the same shit we've been seeing since GPU Boost 3.0: the overclock is done out of the box. Any extra investment is futile.

But now we've crossed the barrier where not only is it effectively OC'd out of the box (or rather, pushed out of its efficiency curve), you also get to invest in specific cooling to keep the whole thing from running way below advertised speeds. These top-end products don't really cost the price they specify. They're way, way more expensive, to get that last 5% of perf that gets eclipsed just a year later :p

Like I said elsewhere... we're in silly land with the top end of every consumer chip stack right now. And it will continue until the penny drops, collectively, and we stop boasting about burning north of 1 kW to play a game. Especially in this age.
#282
Zach_01
kapone32I didn't even think about that. Didn't they increase the Infinity Cache size? That alone should have benefits.
They actually reduced the Infinity Cache amount compared to the RX 6000 series (128MB >> 96MB) for the top GPUs.
Yet it is way faster and increases performance because of the new architecture's interconnect between dies.



Basically, we are talking about up to a few TB/s of actual effective bandwidth. That's why bus width alone hasn't meant much on AMD since the introduction of Infinity Cache.

So they can always increase it further beyond 96MB, maybe even double it...
#283
ARF
Let's say the 4090 is made for "hall of fame" benchmarking and record scores by users called K|NGP|N and the like.
But for the regular market, the risk of running the card into a real fire hazard is something like 50-50, or it's close to impossible for the average user to keep the card alive.

I am not "overinflating" the negatives - the negatives do exist, and this time they are extremely serious - maybe the most serious since the original Fermi GTX 480 launch 13 years ago.
I will not support your or anyone else's "political correctness" and underestimation of the serious risks.
#284
medi01
Can someone confirm/deny that "4090 only 33% faster in Lumen than 3090Ti" is true?

If you wonder what the heck it is, here it is (an Unreal 5 demo using Lumen):


Cool, eh? That's using "software ray tracing". Now, "hardware RT" in Lumen should be faster, shouldn't it?
Let this sink in:


Lumen also comes with hardware ray tracing, but most developers will be sticking to the former, as it's 50% slower than the SW implementation even with dedicated hardware such as RT cores. Furthermore, you can't have overlapping meshes with hardware ray tracing, or masked meshes either, as they greatly slow down the ray traversal process. Software ray tracing basically merges all the overlapping meshes into a single distance field, as explained above.

www.hardwaretimes.com/unreal-engine-5-lumen-vs-ray-tracing-which-one-is-better/#:~:text=Lumen%20also%20comes%20with%20hardware,hardware%20such%20as%20RT%20cores.
Vayra86a better product in every way once you dial it back a little
Indeed.
Such as the 4080 12 GB, after seeing what AMD is up to.

You dial it back a little:

nVidia "unlaunches" 4080 12GB

and suddenly it's a better product than before. :D
#285
AsRock
TPU addict
TheoneandonlyMrKIt wasn't the best performance preview, but it was an architectural PR release. I would also say that both companies tend to make quick and big strides in driver development over the first few months after a new architecture's release.
It might make sense, from AMD's POV, to let later reviews on newer, better drivers speak directly to its performance closer to release.

I believe they went with chiplets to make their cards viable while also advancing their knowledge of 2.5D and 3D packaging, and they clearly were not ready for side-by-side GPU tiles, so a baby step with MCDs and a GCD was produced. Imagine the same chip monolithic: it would have been big, expensive, and in the same performance band anyway, but it would also likely have been £1,600, a harder sell.
Not the best? It was terrible. I was not expecting it to be better or even as good, and just wanted something from them that didn't read like it was written by kids.

As for side-by-side GPU tiles, I never expected that, as they've only just started with chiplets; so step by step, making extra money at the very least.

I'll buy one in December if I get the chance, as I would have been happy with the 6900 XT, but AMD seems to cut support after 6-8 years or so, and I was thinking it might be cut sooner.
#286
RandallFlagg
AusWolfBecause there's no direct competition to Nvidia this time around. I thought we've already established that.
This is actually the thing people are arguing about. AMD has pretty clearly ceded the high end and will have nothing to compete beyond a 4080 16GB - and maybe not even that.

For this reason, the price comparisons vs the 4090 are also fallacious. A $999 AMD flagship card will probably land below the $1,199 4080 16GB in performance. They are in the same ballpark in price, given that Nvidia has always commanded a 10-20% premium.

Not that it matters to the 99% of folks who are not getting a 4090 anyway. Most of the people arguing here have older mid-range or upper-midrange GPUs (now low end) and probably aren't in the market for a new one anyway, so they're just arguing about something they aren't buying from either corporation.
#287
Zach_01
RandallFlaggThis is actually the thing people are arguing about. AMD has pretty clearly ceded the high end and will have nothing to compete beyond a 4080 16GB - and maybe not even that.

For this reason, the price comparisons vs the 4090 are also fallacious. A $999 AMD flagship card will probably land below the $1,199 4080 16GB in performance. They are in the same ballpark in price, given that Nvidia has always commanded a 10-20% premium.

Not that it matters to the 99% of folks who are not getting a 4090 anyway. Most of the people arguing here have older mid-range or upper-midrange GPUs (now low end) and probably aren't in the market for a new one anyway, so they're just arguing about something they aren't buying from either corporation.
How did you come to that conclusion? Almost nothing of what you wrote here adds up...
Everything so far suggests that the $900 7900 XT will rival the $1,200 4080 16GB, and that the $1,000 7900 XTX will be just short of the $1,600 4090; things can also change with AIB OC variants.
All this in rasterization, just to be clear. RT on the new AMD GPUs is a full step behind RTX 40 (3090/Ti territory).
#288
RandallFlagg
Zach_01How did you come to that conclusion? Almost nothing of what you wrote here adds up...
Everything so far suggests that the $900 7900 XT will rival the $1,200 4080 16GB, and that the $1,000 7900 XTX will be just short of the $1,600 4090; things can also change with AIB OC variants.
All this in rasterization, just to be clear. RT on the new AMD GPUs is a full step behind RTX 40 (3090/Ti territory).
Did you not read my post?

I said: "A $999 AMD flagship card will probably land below the $1,199 4080 16GB in performance. They are in the same ballpark in price, given that Nvidia has always commanded a 10-20% premium."

The rest of your post is gibberish, as if Nvidia doesn't have AIB OC variants as well.
#289
Zach_01
RandallFlaggI said: "A $999 AMD flagship card will probably land below the $1,199 4080 16GB in performance. They are in the same ballpark in price, given that Nvidia has always commanded a 10-20% premium."
Yes, I did read it, and nothing makes sense. Just like your mention of the low bit rate of the new (or future) GPUs in another thread(?)

The gap between the 4090 and the 4080 16GB is big, and at least the 7900 XTX will land between them (if not the 7900 XT as well), if the 6950 XT vs 7900 XTX claim (1.5x) holds up.



#290
RandallFlagg
Zach_01Yes, I did read it, and nothing makes sense. Just like your mention of the low bit rate of the new (or future) GPUs in another thread(?)

The gap between the 4090 and the 4080 16GB is big, and at least the 7900 XTX will land between them (if not the 7900 XT as well), if the 6950 XT vs 7900 XTX claim (1.5x) holds up.



It's pretty well known at this point, at least by those who have not deeply imbibed the AMD kool-aid and are not blindly making some kind of excuse (albeit for what, I know not), that the 7900 XT and possibly the 7900 XTX are competitors to the 4080 16GB.

But you keep thinking what you want to think. I'm sure you'll have some reason or the other later on for being blind to the obvious...
[Radeon RX 7900 XTX] is designed to go against the 4080, and we don't have benchmark numbers on the 4080. That's the primary reason why you didn't see any NVIDIA compares. […] A $999 card is not a 4090 competitor, which costs 60% more; this is a 4080 competitor.

— Frank Azor to PCWorld


#291
Zach_01
RandallFlaggIt's pretty well known at this point, at least by those who have not deeply imbibed the AMD kool-aid and are not blindly making some kind of excuse (albeit for what, I know not), that the 7900 XT and possibly the 7900 XTX are competitors to the 4080 16GB.

But you keep thinking what you want to think. I'm sure you'll have some reason or the other later on for being blind to the obvious...



I'm well aware of that statement. AMD can be very conservative in its statements at this point, for its own reasons, and people can think what they want and ignore the numbers.
#292
RandallFlagg
Zach_01I'm well aware of that statement. AMD can be very conservative in its statements at this point, for its own reasons, and people can think what they want and ignore the numbers.
Uh-huh... the chief marketing guy at AMD is being conservative about performance...
#293
Zach_01
RandallFlaggUh-huh... the chief marketing guy at AMD is being conservative about performance...
Yes I can understand your frustration...
#294
Nopa
@W1zzard What is the actual HDMI version of the 7900 XT & 7900 XTX?
The latest 2022-spec 2.1a at 48 Gbps, or the 2020-spec 2.1 at 40 Gbps like the 6900 XT & 6950 XT?
#295
Easo
ymdhisThe problem is that the budget cards will cost $500-600 too.
Yep, exactly this - and those are what the majority of people usually buy.
#296
ARF
EasoYep, exactly this - and they are what majority of people usually buy.
No.

Current pricing in Germany (EUR):

Radeon RX 6400 - 169.00
Radeon RX 6500 XT - 199.00
Radeon RX 6600 - 261.99

Radeon RX 6600 XT - 339.00
Radeon RX 6650 XT - 339.00
Radeon RX 6700 XT - 439.00
Radeon RX 6750 XT - 499.90
Radeon RX 6800 - 559.00
Radeon RX 6800 XT - 635.90
Radeon RX 6900 XT - 748.00

Radeon RX 6950 XT - 899.00

The majority of people will buy up to the RX 6650 XT, which goes for 339 as of now, but its price should spiral downward because it's only good enough for 1080p.
#297
tvshacker
ARFNo.

Current pricing in Germany (EUR):

Radeon RX 6400 - 169.00
Radeon RX 6500 XT - 199.00
Radeon RX 6600 - 261.99

Radeon RX 6600 XT - 339.00
Radeon RX 6650 XT - 339.00
Radeon RX 6700 XT - 439.00
Radeon RX 6750 XT - 499.90
Radeon RX 6800 - 559.00
Radeon RX 6800 XT - 635.90
Radeon RX 6900 XT - 748.00

Radeon RX 6950 XT - 899.00

The majority of people will buy up to the RX 6650 XT, which goes for 339 as of now, but its price should spiral downward because it's only good enough for 1080p.
What about the 6700 non-XT?
#298
medi01
RandallFlaggnothing to compete beyond a 4080 16GB - and maybe not even that.
In which la-la land will AMD have "nothing to compete" with a 40% cut-down of the 4090?
Tell us more about the "unlaunching" of the other 4080...

:D