Tuesday, November 15th 2022

AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080

AMD in its technical presentation confirmed the reference clock speeds of the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. The company also made its first reference to a GeForce RTX 40-series "Ada" product, the RTX 4080 (16 GB), which is going to launch later today. The RX 7900 XTX maxes out the "Navi 31" silicon, featuring all 96 RDNA3 compute units, or 6,144 stream processors, while the RX 7900 XT is configured with 84 compute units, or 5,376 stream processors. The two cards also differ in memory configuration: the RX 7900 XTX gets 24 GB of 20 Gbps GDDR6 across a 384-bit memory interface (960 GB/s), while the RX 7900 XT gets 20 GB of 20 Gbps GDDR6 across a 320-bit one (800 GB/s).
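The headline numbers follow directly from the configuration; a quick sanity-check sketch, using only the specs quoted above:

```python
# Navi 31 shader counts and memory bandwidth, from the quoted specs.
SP_PER_CU = 64  # stream processors per RDNA3 compute unit

def bandwidth_gb_s(gbps_per_pin: float, bus_bits: int) -> float:
    # per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
    return gbps_per_pin * bus_bits / 8

print(96 * SP_PER_CU)            # 6144 stream processors (RX 7900 XTX)
print(84 * SP_PER_CU)            # 5376 stream processors (RX 7900 XT)
print(bandwidth_gb_s(20, 384))   # 960.0 GB/s (XTX)
print(bandwidth_gb_s(20, 320))   # 800.0 GB/s (XT)
```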

The RX 7900 XTX comes with a Game Clock of 2300 MHz and a 2500 MHz boost clock, whereas the RX 7900 XT comes with a 2000 MHz Game Clock and a 2400 MHz boost clock. Of the two, the Game Clock is the more representative of real-world gaming clock speeds. AMD achieves 20 GB of memory on the RX 7900 XT by using ten 16 Gbit GDDR6 memory chips across a 320-bit wide memory bus, created by disabling one of the six 64-bit MCDs; this also subtracts 16 MB from the GPU's 96 MB Infinity Cache, leaving the RX 7900 XT with 80 MB of it. The slide describing the specs of the two cards compares them to the GeForce RTX 4080, which is what the two will compete against, especially given their pricing: the RX 7900 XTX is 16% cheaper than the RTX 4080, and the RX 7900 XT is 25% cheaper.
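The memory, cache, and pricing arithmetic checks out the same way (a small sketch from the figures above; the $999, $899, and $1,199 launch MSRPs are taken as given):

```python
# RX 7900 XT memory capacity, Infinity Cache, and pricing arithmetic.
gbit_per_chip = 16
chips = 10                        # one 2 GB chip per 32-bit channel on a 320-bit bus
print(chips * gbit_per_chip / 8)  # 20.0 GB of GDDR6

mcds_active = 5                   # one of six 64-bit MCDs disabled
print(mcds_active * 16)           # 80 MB Infinity Cache (down from 96 MB)

def discount(amd_price: float, nv_price: float) -> float:
    return (1 - amd_price / nv_price) * 100

print(f"{discount(999, 1199):.0f}%")   # 17% (the quoted "16%" truncates 16.7%)
print(f"{discount(899, 1199):.0f}%")   # 25%
```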

166 Comments on AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080

#26
aciDev
HyderzI think most people should skip the 40 series, since the 30 series is pretty decent, and I have a feeling Nvidia will bring out the 50 series earlier rather than two years after the launch of the RTX 30 series, since the 40 series was delayed for a number of reasons (which I won't go through).
RichardsThe ray tracing performance looks horrid... the 4080 will land the knockout punch
HxxI don't think anyone expects better RT out of AMD when compared to Nvidia. However, in terms of raster performance, that's where AMD will likely win, so it will come down to what matters to the buyer
TossWHO CARES ABOUT RAY TRACING. ARE YOU OUT YO MIND?
the54thvoidPersonally quite excited to see these cards. The lower RT performance isn't an issue to me if it's Ampere level. The price is important.
Towelie00reality is, who cares about RT below 60 fps at 4K, even on a 4090
spnidelif only ray tracing made games good
pat-ronerCouldn't care less about RT
If barely decent video content (which is ray-traced by nature) is produced, is RT really expected to make games better today or in the future?

I have been waiting for many years for realistic audio in games. I'm probably the only one, as industry interest is next to zero.
#27
nguyen
AusWolfAh, so your 4090 isn't artificially limited by having 2048 of its AD102's shaders disabled? ;)

I don't care about having the best, but I do care about having the best in the price and performance range I consider sensible for my needs.


Then why do you care about a locked voltage/frequency curve?
Do you even read reviews? Does 4090 performance disappoint anyone, with its 2048 shaders disabled?

Meanwhile, locking voltage/freq is a dick move, especially on a top-of-the-line GPU, where enthusiasts (who are more likely to buy these products) like to overclock.
#28
AusWolf
bugI also don't get why people get hung up on dies being fully enabled or not. You get the product benched as-is and you know very well what it is capable of.
Because it's a sign that Nvidia leaves performance on the table just to sell it for more money later, even if it's really due to yield issues (which I highly doubt).
nguyenDo you even read reviews? Does 4090 performance disappoint anyone, with its 2048 shaders disabled?
I do read reviews, and the 4090 indeed does disappoint:
nguyenMeanwhile, locking voltage/freq is a dick move, especially on a top-of-the-line GPU, where enthusiasts (who are more likely to buy these products) like to overclock.
Enthusiasts should realise that overclocking is a thing of the past - we live in an age when GPUs and CPUs deliver their fullest right out of the box. If you want some personalisation, you should underclock/undervolt more than anything, imo.
#29
bug
AusWolfBecause it's a sign that Nvidia leaves performance on the table just to sell it for more money later, even if it's really due to yield issues (which I highly doubt).
Well, improved yields are untapped potential. What would you want Nvidia to do with that?
#30
AusWolf
bugWell, improved yields are untapped potential. What would you want Nvidia to do with that?
If AMD can sell fully enabled chips even in their highest-end products, then so can Nvidia.
#31
Snoop05
AusWolfBecause it's a sign that Nvidia leaves performance on the table just to sell it for more money later, even if it's really due to yield issues (which I highly doubt).


I do read reviews, and the 4090 indeed does disappoint:



Enthusiasts should realise that overclocking is a thing of the past - we live in an age when GPUs and CPUs deliver their fullest right out of the box. If you want some personalisation, you should underclock/undervolt more than anything, imo.
If you decide to compare the "value" of a top-end SKU, why not do so in 4K?
Also, some entries on this chart are pure comedy - the Intel Arc cards especially
#32
Vayra86
ValantarI'm a bit surprised at the drop in clocks and CUs for the XT v. the XTX, to be honest - the drop in power seems a bit small compared to that difference, with 10% fewer CUs and 10% lower clocks for just ~15% less power. Makes me wonder whether the XT will either often boost higher than spec, or if it'll just have a ton of OC headroom - or if it's explicitly configured to allow essentially any silicon to pass binning for that SKU.

Either way, very much looking forward to benchmarks of these!



None, since it doesn't launch till December 13th?
Clearly binning imho.

The XTX is just a better chip.
Maybe they use a pretty low target for the bin on the XT so that they can keep the price relatively low for both.

It's also a new type of product with regard to the chips they use. I think they're conservative to keep yields up, so again, binning.
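For reference, the ratios Valantar is eyeballing, as a quick sketch (the 355 W / 300 W board-power figures are from AMD's announcement materials as best I can tell, so treat them as assumptions rather than confirmed retail specs):

```python
# XT vs. XTX ratios behind the binning speculation above.
cu_ratio = 84 / 96           # 0.875 -> 12.5% fewer CUs
clock_ratio = 2000 / 2300    # ~0.87 -> ~13% lower Game Clock
power_ratio = 300 / 355      # ~0.85 -> ~15% less board power (assumed TBPs)

print(f"{cu_ratio * clock_ratio:.2f}")  # ~0.76 of the XTX's paper compute
print(f"{power_ratio:.2f}")             # ~0.85 of the XTX's board power
# ~24% less paper compute for only ~15% less power -- hence the question
# of whether the XT boosts past spec or has OC headroom left in the bin.
```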
#33
bug
AusWolfIf AMD can sell fully enabled chips even in their highest-end products, then so can Nvidia.
They could, but that would mean they just design a smaller chip, and when the yields improve, they'd have nothing better to sell. How would that aid you?
#34
AusWolf
Snoop05If you decide to compare the "value" of a top-end SKU, why not do so in 4K?
Also, some entries on this chart are pure comedy - the Intel Arc cards especially
The 4K chart doesn't look much better, either:
bugThey could, but that would mean they just design a smaller chip, and when the yields improve, they'd have nothing better to sell. How would that aid you?
If it's all about size, then why do they do the same with their lower-end chips, like the GA104?
#35
Dimitriman
I'm waiting for RT performance reviews before making a judgement on this card. But not RT + frame generation/upscaling, etc. - just pure RT vs. RT, 7900 vs. 4080.
If the RT performance gap is not wider than last gen's, I think this card will be a good choice vs. the 4080.

I'm also curious to see what FSR 3.0 will bring, and I am thankful for the power requirements of this 7900 series. Having said that, it's generally very upsetting to see both vendors normalizing $1k-$2k for high-end gaming GPUs. This used to be a whole-system budget not too long ago.
#36
Vayra86
bugI also don't get why people get hung up on dies being fully enabled or not. You get the product benched as-is and you know very well what it is capable of.
That is probably related to a guesstimate about binning and that golden lottery feeling.

The reasoning: if the chip is already cut down, it's already not 'perfect', so it makes sense the rest of that chip is also less likely to be optimal. This doesn't translate to practice for most, but the emotion is what it is.

Another emotional aspect: 'I'm paying this much, it better be the whole thing'.
AusWolfIf it's all about size, then why do they do the same with their lower-end chips, like the GA104?
They have volume on each SKU; they move enough units to do the process of gradual improvement on each one. And it moves both ways, remember the re-purposed 104s on lower end products.
#37
mb194dc
Both the 4080 and the 7900 series are way too expensive - only worth bothering with for those who need 100 fps+ at 4K. The current gen will do 4K 60 just fine in pretty much everything. Even the 4090 won't run Cyberpunk with RT without serious image quality compromises.
#38
ZoneDymo
aciDevIf barely decent video content (which is ray-traced by nature) is produced, is RT really expected to make games better today or in the future?

I have been waiting for many years for realistic audio in games. I'm probably the only one, as industry interest is next to zero.
No no, I'm fully with you, and I LOVE and encourage reviewers talking about sound quality.

I remember when BF: Bad Company 2 came out, how there was a focus on the audio, and everyone talked about it.
I thought it was a turning point, but... alas...

We all know how important audio is to the experience, yet the budget and attention it gets are next to zero.

Hell, I remember AMD back in the day also had something that was meant to focus on and increase the quality of audio in games - I think it was related to what eventually became Vulkan.


That said, being a fan of Digital Foundry's work, I appreciate RT and what it can do/does
#39
bug
Vayra86That is probably related to a guesstimate about binning and that golden lottery feeling.

The reasoning: if the chip is already cut down, it's already not 'perfect', so it makes sense the rest of that chip is also less likely to be optimal. This doesn't translate to practice for most, but the emotion is what it is.

Another emotional aspect: 'I'm paying this much, it better be the whole thing'.


They have volume on each SKU; they move enough units to do the process of gradual improvement on each one. And it moves both ways, remember the re-purposed 104s on lower end products.
My point exactly: this is all psychological, it has no consequences irl, other than allowing faster future products without having to redesign the silicon (read: cheaper).
#40
wolf
Better Than Native
TossWHO CARES ABOUT RAY TRACING. ARE YOU OUT YO MIND?
ME. NO.
spnidelif only ray tracing made games good
Just as it cannot make a bad game good, good games can still be enhanced by it.

We get it, lots of you don't care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who do want good RT performance in future purchases, for it not to be an afterthought or a checkbox to tick.

If it's not for you, awesome, but I tire of hearing nObOdY cArEs ABouT rAy TrACinG when clearly people do, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is a gimmick or that 'most' people don't care, good on you!).

Personally I'd love to be able to very strongly consider AMD GPUs, but a prerequisite is for them to take RT more seriously and lessen the hit. They can certainly swing even more buyers their way if they deliver on that, so I eagerly await seeing whether the top product has made significant strides.
#41
Dirt Chip
Whoever can pay $1,000+ for a GPU can also pay $1,700+ for a GPU.
In that price category, absolute performance is the bar - not price/performance or even power.
There might be a small group of users who will push to the $1,000 range at a stretch but no further, but most can just drop an extra $700-1,000 without real problem - gaming is their hobby, and thus it is a legitimate cost.
To be clear, I'm not one of those, quite the opposite (see my current GPU), but it's the reality today. Many are willing to pay whatever extra for the ultimate quality/performance.
#42
medi01
RichardsThe ray tracing performance looks horrid
November 2022: 3090Ti RT performance looks horrid. (a random leather man fan)
#43
bug
wolfME. NO.

Just as it cannot make a bad game good, good games can still be enhanced by it.

We get it, lots of you don't care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who do want good RT performance in future purchases, for it not to be an afterthought or a checkbox to tick.

If it's not for you, awesome, but I tire of hearing nObOdY cArEs ABouT rAy TrACinG when clearly people do, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is a gimmick or that 'most' people don't care, good on you!).

Personally I'd love to be able to very strongly consider AMD GPUs, but a prerequisite is for them to take RT more seriously and lessen the hit. They can certainly swing even more buyers their way if they deliver on that, so I eagerly await seeing whether the top product has made significant strides.
Goes right up there with "who cares about a second mouse button"...
#44
medi01
AusWolfWhere are all the people who cried at the launch event that the 7900 XTX doesn't match up against the 1.6x more expensive 4090?
I doubt the $1600 MSRP of the 4090. It is likely a fake figure, as with the 2080 Ti, which was claimed to have an MSRP of $999 but sold for 20% more.

The cheapest 4090 in Germany is €2300+. That is €1930+ if you strip the 19% VAT (2300 / 1.19).

Also, cough:
www.tomshardware.com/news/igors-lab-evga-decision-leaving-gpus-is-its-fault
#45
btk2k2
ValantarYeah, that's pretty much my thinking exactly. While yields/defect rates for TSMC N5 aren't public in the same way they were for N7, we still know that they are good - good on a level where they're churning out massive amounts of large-ish dice with very high yields. Combine that with even AMD's biggest GPU die now being <400mm² and, like you say, most likely the vast majority of dice will qualify for the XTX SKU. We saw the exact same thing with Navi 21 on N7, where the 6900 XT was always (relatively) plentiful, while 6800 XT and 6800 supplies were nonexistent at times, and even at the best of times very limited, simply because AMD would rather sell a fully functioning die as a 6900 XT than cut it down to sell as a cheaper SKU.

Of course they're not in the same supply-constrained market today, so they're going to need to be a bit more flexible and more willing to "unnecessarily" sell parts as a lower bin than they actually qualify for - this has been the norm in chipmaking forever, after all. But I still expect their first push to be plenty of XTXes, and notably fewer XTs. This also (assuming the XT PCB is also used for the 7800 XT) makes the XTX having its own PCB more understandable - it's likely supposed to have enough volume to recoup its own costs, while the XT is designed to be an in-between SKU with more cost optimization. Which is kind of crazy for a $900 GPU, but it's not like those cost optimizations are bad, it just looks slightly less over-the-top.

It will definitely be interesting to see what the power limits for the XT will be like - AMD has had some pretty strict power limits for lower end SKUs lately, like the RX 6600, but nothing like that for a high end, high power SKU. It also raises the question of what premium AIB models of the XT will be like, as AMD is making it pretty clear that there'll be heavily OC'd partner XTXes. That might also be part of the pricing strategy here - with a mere 10% difference, ultra-premium 3GHz XTs don't make as much sense, as they'd cost more than a base XTX - so the upsell to an equally ultra-premium XTX would be relatively easy. And AMD makes money on selling chips after all, not whole GPUs, so they'd always want to sell the more premium SKU.

Also definitely looking forward to seeing what the 7800 XT will be - a further cut down Navi 31? If Navi 32 has the rumored CU count, they'd need another in-between SKU (the poor competitive performance of the 6700 XT demonstrated how they can't leave gaps that big in their lineup), but with die sizes being this moderate and GPUs being relatively affordable and easy to tape out compared to most chips (being massive arrays of identical hardware helps!) I could see AMD launching more Navi 3X dice than 2X from that fact alone.
An N32-based 7800 XT with 2.8 GHz clocks would match the 7900 XT in pure shader performance, with a loss of cache, bandwidth and ROP performance. I see that as more likely than further cuts to N31, since it will have more supply than a cut N31, and if pricing is around $650-700 it would be a pretty popular choice with a very healthy margin for AMD.

This would mean that in stock configurations the 7800 XT would be better value than the 7900 XT, and the 7900 XTX would also be better value than the 7900 XT. However, depending on what is failing and causing parts to land in the 7900 XT bin, it is possible that overclocking is rather strong on that card. So even though stock performance is not great from a value perspective, the overclocked performance could be enough that some enthusiast buyers who like to tinker see value in the 7900 XT even at $900, ensuring there is a market for it - albeit a small, low-volume market that lets AMD focus more on the XTX SKU.

Then there is the cut N32 die. Will AMD bother with a vanilla 7800, or would they just cut N32 to about 5k shaders, pair it with 3 MCDs, and call it a 7700 XT? I personally think the latter, and with a ~200 mm² die you are looking at silly numbers of dice per wafer, so supply of the 7700 XT and 7800 XT should be far, far better than supply of the 6800 XT and 6700 XT was. Even if the 7700 XT has to use perfectly good dies, AMD will have calculated for that.
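As a rough sanity check on the shader-performance claim above: RDNA3 peak FP32 throughput is just CUs x 256 FLOPs x clock (64 dual-issue FP32 SPs per CU at 2 FLOPs per FMA). A back-of-the-envelope sketch; the 70-CU N32 figure is purely illustrative, not a confirmed spec:

```python
# Peak FP32 throughput for RDNA3 parts: CUs x 256 FLOPs/clock x clock.
def peak_fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 256 * clock_ghz / 1000

print(f"{peak_fp32_tflops(96, 2.5):.1f}")  # ~61.4 TFLOPS (7900 XTX at boost)
print(f"{peak_fp32_tflops(84, 2.4):.1f}")  # ~51.6 TFLOPS (7900 XT at boost)
# Hypothetical N32 part at the 2.8 GHz floated above; 70 CUs is an
# illustrative guess for where it would land level with the 7900 XT.
print(f"{peak_fp32_tflops(70, 2.8):.1f}")  # ~50.2 TFLOPS
```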
#46
Icon Charlie
TossWHO CARES ABOUT RAY TRACING. ARE YOU OUT YO MIND?
I agree. The only people that care are those who drank the Kool-Aid over the years and are justifying their purchases.
#47
ratirt
nguyenDo you even read reviews? Does 4090 performance disappoint anyone, with its 2048 shaders disabled?
Yes, it does disappoint, and here is why.
If you look closer at reviews, the 4090 is basically double the performance of a 3080 10 GB. The 3080's MSRP was set at $700 at launch, which we all know was very high at the time, not to mention the enormous street prices. You may argue the 4090 is faster than the 3090 and 3090 Ti, and that it is cheaper or has a better performance-per-dollar ratio; the problem is that those cards had crazy pricing anyway. Also, the 4090 is the fastest, thus the silly price, and obviously you have to pay a premium for it. There is one problem, though: it is not the fastest it could be, because it is not fully unlocked. You buy a crippled GPU at an astronomical price, which will be replaced by a fully unlocked GPU at an even more ridiculous one.
The 4090 so far, despite its performance, has disappointed on all other fronts. The 4080 16 GB? Same thing, considering its price. That goes for basically every GPU NV is planning to release, going by the current information. Hopefully something will change, but I doubt it.
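To put a number on the value argument (a quick sketch; it assumes the rough 2x-a-3080 figure above and the $700 / $1,600 MSRPs cited in this thread):

```python
# Performance-per-dollar of the 4090 relative to the 3080 10 GB,
# using the ~2x performance figure above and the quoted MSRPs.
perf_ratio = 2.0                  # 4090 ~= 2x the 3080, per reviews
price_ratio = 1600 / 700          # ~2.29x the price
print(f"{perf_ratio / price_ratio:.2f}")  # ~0.88 -> ~12% less perf per dollar
```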
#48
nguyen
ratirtYes, it does disappoint, and here is why.
If you look closer at reviews, the 4090 is basically double the performance of a 3080 10 GB. The 3080's MSRP was set at $700 at launch, which we all know was very high at the time, not to mention the enormous street prices. You may argue the 4090 is faster than the 3090 and 3090 Ti, and that it is cheaper or has a better performance-per-dollar ratio; the problem is that those cards had crazy pricing anyway. Also, the 4090 is the fastest, thus the silly price, and obviously you have to pay a premium for it. There is one problem, though: it is not the fastest it could be, because it is not fully unlocked. You buy a crippled GPU at an astronomical price, which will be replaced by a fully unlocked GPU at an even more ridiculous one.
The 4090 so far, despite its performance, has disappointed on all other fronts. The 4080 16 GB? Same thing, considering its price. That goes for basically every GPU NV is planning to release, going by the current information. Hopefully something will change, but I doubt it.
Couldn't care less if the 4090 is a crippled chip or a 4090 Ti comes out in a year; I care that my 4090 is not artificially handicapped just so that Nvidia can sell a 4090 OC edition.

Affordability is kinda relative; $1,600 is kinda pocket change for some people :)
#49
Valantar
nguyenYes yes, very sympathetic when AMD locks the 6900 XT voltage/freq in order to sell higher SKUs: the 6900 XT LC, 6900 XTXH, 6950 XT
Does the 6900 XT have locked voltage/frequency? I know mine is a Navi 21 XTX die ("Ultimate"), but that's just a special bin picked for scaling better to higher clock speeds at higher voltages. Is the stock Navi 21 XT-based 6900 XT locked down in terms of adjusting clock speeds or voltages? Yes, I know there's a frequency ceiling for what can be set in software, but in my experience that tends to be higher than what can be done stably without exotic cooling anyhow, so I don't see the issue. All I've noticed about mine is that it doesn't really undervolt at all, but that's just a characteristic of that bin of the silicon - it still gets stupidly efficient with a moderate underclock.
bugThat is actually a cost-saving measure. Nvidia traditionally engineers more complex chips. Yields for those are not that good at first. So you get the "not fully enabled" dies. Once production steps up, yields improve and fully unlocked chips become more viable. If they pushed for fully enabled dies you end up either with more expensive dies or with cut-down ones (to the level that can be produced initially) with nowhere to go once yields improve.
This is IMO a pretty reasonable approach - but in the market, it has the effect of saying "hey, this is the new cool flagship, the best of the best" only for 6 months to pass and them to say "hey, forget that old crap, this is the best of the best!" Which, regardless of the realities of production, is a pretty shitty move when the most explicit selling point of the previous product was precisely how it was the best. There's obviously a sliding scale of how shitty this is simply due to something faster always being on the way, but IMO Nvidia tends to skew towards pissing on their fans more than anything in this regard.
bugI also don't get why people get hung up on dies being fully enabled or not. You get the product benched as-is and you know very well what it is capable of.
This I wholeheartedly agree with. Whether a die is fully enabled or not is entirely irrelevant - what matters is getting what you're paying for, as well as having some base level honesty in marketing.
wolfME. NO.

Just as it cannot make a bad game good, good games can still be enhanced by it.

We get it, lots of you don't care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who do want good RT performance in future purchases, for it not to be an afterthought or a checkbox to tick.

If it's not for you, awesome, but I tire of hearing nObOdY cArEs ABouT rAy TrACinG when clearly people do, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is a gimmick or that 'most' people don't care, good on you!).

Personally I'd love to be able to very strongly consider AMD GPUs, but a prerequisite is for them to take RT more seriously and lessen the hit. They can certainly swing even more buyers their way if they deliver on that, so I eagerly await seeing whether the top product has made significant strides.
I mostly agree with this, in fact I waited to buy a new GPU in order to get RT support - but I'm also perfectly fine with RT performance on my 6900 XT. I loved Metro Exodus with RT enabled at 1440p, and while I've only barely tried Control, that too seemed to work fine. Is the 6900 XT perfect? Obviously not. Are Nvidia's contemporary offerings faster? Yes - but not that much faster, not enough that it'll matter in 2-3 years as RT performance becomes more important. And either way, my GPU beats both current gen consoles in RT, so I'll be set for base level RT performance for the foreseeable future.

RT performance is absolutely an important aspect of the value of Nvidia's GPUs - the question is how important. For me, it's ... idk, maybe 5:1 raster-v-RT? Rasterization is a lot more important overall, and for the foreseeable lifetime of this product and its contemporaries, I don't see the delta between them as that meaningful long term. When my 6900 XT performs between a 3070 and 3080 in RT depending on the title, and the 3090 Ti is maybe 20% faster than those on average, that means they'll all go obsolete for RT at roughly the same time. There are absolutely differences, but I don't see them as big enough to dismiss AMD outright.
#50
ratirt
nguyenCouldn't care less if the 4090 is a crippled chip or a 4090 Ti comes out in a year; I care that my 4090 is not artificially handicapped just so that Nvidia can sell a 4090 OC edition.

Affordability is kinda relative; $1,600 is kinda pocket change for some people :)
You're missing the point here, as usual, and you need to look for different arguments. By your logic, you may argue that a not-fully-enabled chip is a handicap. Insufficient cooling on a chip is a handicap as well; insufficient power delivery can be considered a handicap too.
All of the above can be called handicaps, which in my book is silly to even talk about. Your argument belongs in the same category.