Tuesday, November 15th 2022

AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080

AMD in its technical presentation confirmed the reference clock speeds of the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. The company also made its first reference to a GeForce RTX 40-series "Ada" product, the RTX 4080 (16 GB), which launches later today. The RX 7900 XTX maxes out the "Navi 31" silicon, featuring all 96 RDNA3 compute units, or 6,144 stream processors, while the RX 7900 XT is configured with 84 compute units, or 5,376 stream processors. The two cards also differ in memory configuration: the RX 7900 XTX gets 24 GB of 20 Gbps GDDR6 across a 384-bit memory interface (960 GB/s), while the RX 7900 XT gets 20 GB of 20 Gbps GDDR6 across a 320-bit interface (800 GB/s).

The RX 7900 XTX comes with a 2300 MHz Game Clock and a 2500 MHz boost clock, whereas the RX 7900 XT comes with a 2000 MHz Game Clock and a 2400 MHz boost clock. Of the two figures, the Game Clock is the more representative of typical gaming frequencies. AMD achieves 20 GB of memory on the RX 7900 XT by using ten 16 Gbit GDDR6 memory chips on a 320-bit wide memory bus, created by disabling one of the six 64-bit MCDs. This also subtracts 16 MB from the GPU's 96 MB Infinity Cache, leaving the RX 7900 XT with 80 MB. The slide describing the specs of the two cards compares them to the GeForce RTX 4080, which is the product the two are more likely to compete against, especially given their pricing: the RX 7900 XTX is 16% cheaper than the RTX 4080, and the RX 7900 XT is 25% cheaper.
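
As a sanity check on those numbers, here is a minimal sketch of the arithmetic (our own illustration, not from AMD's presentation; it assumes the usual RDNA3 convention of 64 stream processors per compute unit and the standard GDDR6 bandwidth formula):

# Back-of-the-envelope math behind the RX 7900 series specs above.
def stream_processors(compute_units):
    # RDNA3 exposes 64 stream processors per compute unit.
    return compute_units * 64

def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    # GB/s = per-pin data rate (Gbit/s) x bus width (bits) / 8 bits per byte.
    return data_rate_gbps * bus_width_bits / 8

print(stream_processors(96))     # 6144  (RX 7900 XTX, full Navi 31)
print(stream_processors(84))     # 5376  (RX 7900 XT)
print(bandwidth_gb_s(20, 384))   # 960.0 GB/s (XTX, six 64-bit MCDs)
print(bandwidth_gb_s(20, 320))   # 800.0 GB/s (XT, five MCDs active)
print(96 - 96 // 6)              # 80 MB Infinity Cache after disabling one MCD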

166 Comments on AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080

#51
Valantar
nguyenCouldn't care less if the 4090 is a crippled chip or a 4090 Ti comes out in a year; I care that my 4090 is not artificially handicapped just so that Nvidia can sell a 4090 OC edition.

Affordability is kinda relative; $1,600 is kinda pocket change for some people :)
But as has been discussed above, it is handicapped so that Nvidia can come back in half a year and sell a 4090 Ti with 2,048 more shaders. Whether the limitation is disabled shaders or power/clock caps is immaterial - it's all product segmentation, just by marginally different means.
Posted on Reply
#52
ixi
Looks like I know my next GPU :}. XTX, I'm waiting for you.
Posted on Reply
#53
nguyen
ratirtYou're missing the point here, as usual. Also, you need to look for different arguments. By your logic, you may argue that a not-fully-enabled chip is a handicap. Insufficient cooling on a chip is a handicap as well. Insufficient power delivery can be considered a handicap.
All of the above can be considered handicaps, which in my book are silly to even talk about. Your argument belongs in the same category.
Handicap vs being artificially handicapped are two separate things; seems like you can't tell the difference.
ValantarBut as has been discussed above it is handicapped so that Nvidia can come back in half a year and sell a 4090 Ti with 2048 more shaders. Whether the limitation is disabled shaders or power/clock limitations is immaterial - it's all product segmentation, just by marginally different means.
At this point in time there is no certainty that Nvidia will ever release a 4090 Ti. If the 7900 XTX and its higher-binned variants cannot compete with the 4090, Nvidia might just let the 4090 be the best for 2 years (like the rumored 2080 Ti Super; Nvidia just likes to keep an ace up their sleeve)
Posted on Reply
#54
Dirt Chip
ratirtYes it does disappoint and here is why.
If you look closer at reviews, 4090 is basically double the performance of a 3080 10GB. 3080 MSRP was set to $700 at launch which we all know was so damn high at that time. Not to mention the enormous street prices. You may argue 4090 is faster than 3090 and 3090 Ti and it is cheaper or has a better performance per $ ratio the problem is those 3090 and 3090Ti had a crazy pricing anyway. Also 4090 is the fastest thus the stupid pricing and obviously you will have to pay premium for it. There is one problem though. It is not the fastest because it is not fully unlocked. You buy crippled GPU for astronomical price which will be replaced by a GPU with a higher price than ridiculous but fully unlocked.
4090 so far, despite its performance, has disappointed in all other fronts. 4080 16GB? Same thing considering its price. That is basically for every GPU NV is planning to release so far with the current information we have. Hopefully something will change but I doubt it.
4090 is not the very top GPU you speak of. It is only a mere mid-top tier GPU.
The 4090 ti is the top-top tier (unless a 4090 super ti is coming) with the full fat and it`s cost, when released, will represent that- be sure abot that :)
NV is just using that "psychological human error" (read: being an average human) that mae you think that if the GPU isn`t whole than "it is crippled" to make even more profit.
Vayra86Another emotional aspect: 'I'm paying this much, it better be 'the whole thing'.
The emotional, psychological aspect is playing a major role, and we can all see it very clearly in the forum.
It makes you change your choice, sometimes completely, against solid data and proven facts.
Every company that respects itself will exploit this 'merit' to the max. NV and Intel especially excel at that exploitation; AMD still has some miles to cover, but it is doing very well catching up.
Posted on Reply
#55
Valantar
nguyenAt this point in time there is no certainty that Nvidia will ever release a 4090 Ti. If the 7900 XTX and its higher-binned variants cannot compete with the 4090, Nvidia might just let the 4090 be the best for 2 years (like the rumored 2080 Ti Super; Nvidia just likes to keep an ace up their sleeve)
... that's literally the point, and the only reason why fully enabled dice matter at all: that way you know that there isn't something faster coming. Without that, you can never know. But all precedent shows that Nvidia will launch a faster SKU mid-generation. They did so for Kepler, for Maxwell, for Pascal, for Ampere. Turing is the only exception to this over the past decade, and that had mid-gen refreshes for literally everything but the top SKU.
Dirt ChipThe 4090 is not the very top GPU you speak of; it is only a mere mid-top-tier GPU.
The 4090 Ti is the top-top tier with the full-fat die, and its cost, when released, will reflect that - be sure about that :)
NV is just using that "psychological human error" (read: being an average human) that makes you think that if the GPU isn't the whole thing then "it is crippled", to make even more profit.
No, it is a top-tier GPU. It's just not guaranteed to stay as the top-tier GPU. It'll still be top-tier if/when they launch a 4090 Ti, simply because the Ti will only be marginally faster due to the relatively minor hardware differences. The issue isn't the veracity of whether or not it "actually" is a top-tier GPU, but the inherently shitty move of selling something as "the best of the best" only to undermine this shortly afterwards with a "well, actually..." launch. If someone promises you through marketing that the product you're buying is the best, then it's reasonable to expect it to stay as the best for a while, and to expect that the maker of said product isn't planning to supersede it in a few months' time.
Posted on Reply
#56
Dirt Chip
ValantarNo, it is a top-tier GPU. It's just not guaranteed to stay as the top-tier GPU. It'll still be top-tier if/when they launch a 4090 Ti, simply because the Ti will only be marginally faster due to the relatively minor hardware differences. The issue isn't the veracity of whether or not it "actually" is a top-tier GPU, but the inherently shitty move of selling something as "the best of the best" only to undermine this shortly afterwards with a "well, actually..." launch. If someone promises you through marketing that the product you're buying is the best, then it's reasonable to expect it to stay as the best for a while, and to expect that the maker of said product isn't planning to supersede it in a few months' time.
It is guaranteed NOT to stay the top tier. Also, as of the xx9x/x9xx family, we have sub-tiers within the top tier. The 4090 is the low/mid top tier, because a 4090 Ti almost certainly exists.
If you decide between product A and product B according to 'what is the best', and you are willing to pay more only to be entitled to that 'the best' treat by itself, then, well, you condemn yourself to a limbo of disappointment.

"...then it's reasonable to expect it to stay as the best for a while"
This is a very naive, childish approach imo.
No one has or will guarantee you a time frame of "the best". Expecting such a thing is way outside the scope of the product spec, hence the gap leading to disappointment.

But to each his own, I guess; just please don't use that disappointment to bash any company. That would be bias.
Posted on Reply
#57
Vayra86
nguyenHandicap vs being artificially handicapped are two separate things; seems like you can't tell the difference.
It is the exact same thing. The result is that there is room in the stack for a less handicapped product.
Posted on Reply
#58
Snoop05
AusWolfThe 4K chart doesn't look much better, either:

If it's all about size, then why do they do the same with their lower-end chips, like the GA104?
It's quite some difference, I would say.
Posted on Reply
#59
wolf
Performance Enthusiast
ValantarThe issue isn't the veracity of whether or not it "actually" is a top-tier GPU, but the inherently shitty move of selling something as "the best of the best" only to undermine this shortly afterwards with a "well, actually..." launch. If someone promises you through marketing that the product you're buying is the best, then it's reasonable to expect it to stay as the best for a while, and to expect that the maker of said product isn't planning to supersede it in a few months' time.
Not everything is some shitty move; the reasoning is pretty clear. The flagship GPU is so large that yields play into this: to make the volume they want to be able to sell given their market share, they can move vastly more chips with relatively minor defects, and start to stockpile ones with no faults/good binning etc. Then, when enough exist to make a product viable, they'll sell it. How shitty of them.
Posted on Reply
#60
ratirt
Dirt ChipThe 4090 is not the very top GPU you speak of; it is only a mere mid-top-tier GPU.
The 4090 Ti is the top-top tier (unless a 4090 Super Ti is coming) with the full-fat die, and its cost, when released, will reflect that - be sure about that :)
NV is just using that "psychological human error" (read: being an average human) that makes you think that if the GPU isn't the whole thing then "it is crippled", to make even more profit.
We know that, and that is not the point. The point is you can literally call anything handicapped, and that is why I mentioned the 4090 not being a fully enabled chip. And no, as of today it is the top-tier GPU.
nguyenHandicap vs being artificially handicapped are two separate things; seems like you can't tell the difference.
Whether it is a frequency cap, a power delivery cap, or the number of CUs enabled in a chip, all can come down to artificial handicapping.
Posted on Reply
#61
AusWolf
Snoop05It's quite some difference, I would say.
Different how? It's worse value than all but 4 available GPUs instead of 2?
wolfNot everything is some shitty move; the reasoning is pretty clear. The flagship GPU is so large that yields play into this: to make the volume they want to be able to sell given their market share, they can move vastly more chips with relatively minor defects, and start to stockpile ones with no faults/good binning etc. Then, when enough exist to make a product viable, they'll sell it. How shitty of them.
But it's not just flagship GPUs they do this with, the GA104 being a good example.
Posted on Reply
#62
GamerGuy
I will be stuck here in Canada till February, so I'd have no choice but to wait; that would gimme the chance to go thru reviews and benchmarks by then, so I guess I'd be better placed to make an informed decision. I'm not too concerned about RT performance as long as it has improved a fair bit over my RX 6900 XT; the only game I wanna play with a good framerate + RT is Metro Exodus PC Enhanced. I play it on my main rig at 3840x1080, and my RX 6900 XT does struggle with RT maxed out, so IF the RX 7900 XTX really does improve RT performance by a fair margin, I hope to play ME PC Enhanced again at fully maxed-out settings... and net a more than playable framerate.
Posted on Reply
#63
wolf
Performance Enthusiast
AusWolfBut it's not just flagship GPUs they do this with, the GA104 being a good example.
Still, with the volume they need/want to ship of these products, they need to accept a small cut-down to produce enough acceptable chips in the given timeframe, and save the golden chips for the professional RTX products with more VRAM, like the Ampere A-series products.
Posted on Reply
#64
Valantar
Dirt ChipIt is guaranteed NOT to stay the top tier. Also, as of the xx9x/x9xx family, we have sub-tiers within the top tier. The 4090 is the low/mid top tier, because a 4090 Ti almost certainly exists.
That is literally exactly what I said. That's the distinction between being the top-tier SKU and a top-tier SKU.
Dirt ChipIf you decide between product A and product B according to 'what is the best', and you are willing to pay more only to be entitled to that 'the best' treat by itself, then, well, you condemn yourself to a limbo of disappointment.
No. All purchases are a judgement of value for money, and when buying a product you cannot escape such judgements even if you explicitly don't care about value - it's there in literally every aspect of the thing. If you're paying extra for a top-tier product - which you inherently are by buying a flagship GPU - then you're putting your money and trust into a relationship with another party based on their messaging, i.e. marketing. If that company then abuses that trust by subsequently changing the basis for the agreement, then that's on them, not on the buyer.
Dirt Chip"...then it's reasonable to expect it to stay as the best for a while"
This is a very naive, childish approach imo.
What? No. A while is not a fixed amount of time. It is an inherently variable and flexible amount of time. That's the entire point. There's nothing naive about this, it is explicitly not naive, but has the expectation of its end built into it. The question is how and why that end comes about - whether it's reasonable, i.e. by a new generation arriving or significant refresh occurring, or whether it's unreasonable, i.e. through the manufacturer creating minuscule, arbitrary binnings in order to launch ever-higher priced premium SKUs.
Dirt ChipNo one has or will guarantee you a time frame of "the best". Expecting such a thing is way outside the scope of the product spec, hence the gap leading to disappointment.
No it isn't. The explicit promise of a flagship GPU is that it's the best GPU, either from that chipmaker or outright. Not that it will stay that way forever, not that it will stay that way for any given, fixed amount of time, but that it is so at the time and will stay that way until either the next generation or a significant mid-gen refresh.
Dirt ChipBut to each his own, I guess; just please don't use that disappointment to bash any company. That would be bias.
Yes, of course, all criticism of shady marketing practices is bias, of course. Unbiased criticism is impossible! Such logic, much wow! It's not as if these companies have massive marketing budgets and spend hundreds of millions of dollars yearly to convince people to give them their hard-earned money, no, of course not :rolleyes: Seriously, if there's anyone arguing a woefully naïve stance here it's you with how you're implicitly pretending that humans are entirely rational actors and that how we are affected by our surroundings is optional. 'Cause if you're not arguing that, then the core of your argument here falls apart.
wolfNot everything is some shitty move; the reasoning is pretty clear. The flagship GPU is so large that yields play into this: to make the volume they want to be able to sell given their market share, they can move vastly more chips with relatively minor defects, and start to stockpile ones with no faults/good binning etc. Then, when enough exist to make a product viable, they'll sell it. How shitty of them.
No, not everything is a shitty move, that's true. But you're giving a lot of benefit of the doubt here - an unreasonable amount, IMO. Even if initial yields of fully enabled dice are bad - say 70%, which is borderline unacceptable for the chip industry - and a further 50% of undamaged dice don't meet the top-spec bin, what is stopping them from still making a retail SKU from the remaining 35% of chips?

The problem is, you're drawing up an unreasonable scenario. Nobody is saying Nvidia has to choose between either launching a fully enabled chip or a cut-down one. They could easily do both - supplies of either would just be slightly more limited. Instead they're choosing to only sell the cut-down part - which initially must include a lot of chips that could have been the top-end SKU, unless their yields are absolute garbage. Look at reports of Intel's fab woes: what yield rates are considered not economically viable? Even 70% is presented as bad. And 70% yields doesn't mean 70% usable chips; it means 70% fault-free chips.

A napkin-math example: AD102 is a 608 mm², almost-square die. As I couldn't find the specifics online, let's say it's 23 x 26.4 mm (that's 607.2 mm², close enough, but obviously not accurate). Let's plug that into Caly Technologies' die-per-wafer calculator (sadly only on the Wayback Machine these days). On a 300 mm wafer, assuming TSMC's long-reported 0.09 defects/cm² (which should be roughly applicable to N4, as N4 is a variant of N5, and N5 is said to match N7 defect rates, which were 0.09 several years ago), that results in 87 total dice per wafer, of which ~35 would have defects and 52 would be defect-free. Given that GPUs are massive arrays of identical hardware, it's likely that all dice with defects are usable in cut-down form. Let's then assume that half of the defect-free dice meet the binning requirements for a fully enabled SKU. That would leave Nvidia with three choices:

- Launch a cut-down flagship consumer SKU at a binning and active block level that lets them use all chips that don't meet binning criteria for a fully enabled chip, and sell all fully enabled chips in higher margin markets (enterprise/workstation etc.) - but also launch a fully enabled consumer SKU later
- Launch a fully enabled consumer SKU and a cut-down SKU at the same time, with the fully enabled SKU being somewhat limited in quantity and taking some supply away from the aforementioned higher margin markets
- Only ever launch a cut-down consumer SKU, leaving fully enabled chips only to other markets

Nvidia consistently picks the first option among these - the option that goes hard for maximizing profits above all else, while also necessarily including the iffy move of promising "this is the flagship" just to supersede it 6-12 months later. And that? That's a shitty move, IMO. Is it horrible? Of course not. But it's explicitly exploitative and cash-grabby at the expense of customers, which makes it shitty.
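
For anyone who wants to poke at that napkin math, here is a rough Python sketch (our own approximation: a textbook dies-per-wafer estimate plus a Poisson yield model; the edge-loss correction differs a bit from the calculator referenced above, so it lands near, not exactly on, the 87/35/52 split):

import math

WAFER_D_MM = 300.0
DIE_W_MM, DIE_H_MM = 23.0, 26.4   # assumed AD102 dimensions (~607 mm^2)
D0_PER_CM2 = 0.09                 # long-reported TSMC N7/N5-class defect density

die_area = DIE_W_MM * DIE_H_MM
# Classic estimate: gross dice by area, minus a wafer-edge loss term.
dies = int(math.pi * (WAFER_D_MM / 2) ** 2 / die_area
           - math.pi * WAFER_D_MM / math.sqrt(2 * die_area))
# Poisson model: probability that a die catches zero defects.
good = round(dies * math.exp(-D0_PER_CM2 * die_area / 100))

print(dies, good, dies - good)    # ~89 total, ~52 defect-free, ~37 defective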
wolfStill, with the volume they need/want to ship of these products, they need to accept a small cut-down to produce enough acceptable chips in the given timeframe, and save the golden chips for the professional RTX products with more VRAM, like the Ampere A-series products.
That depends on die size and actual yields. As I showed above, with published yields for the process nodes used here, there are still lots of chips that would meet the criteria for fully enabled SKUs.
AusWolfDifferent how? It's worse value than all but 4 available GPUs instead of 2?
Also remember that that chart for some reason only assumes MSRP or lower rather than the expected and actual reality of prices being MSRP or higher.
Posted on Reply
#65
AusWolf
ValantarAlso remember that that chart for some reason only assumes MSRP or lower rather than the expected and actual reality of prices being MSRP or higher.
Yep. That only makes the picture worse.

Soon, buying Nvidia instead of AMD will be like buying the 500 HP Ferrari for $100k instead of the 500 HP Mustang for $35k. Or is it like that already?
Posted on Reply
#66
bug
AusWolfBut it's not just flagship GPUs they do this with, the GA104 being a good example.
Yields aren't dictated solely by the complexity of a chip. They're also (and even more so) dictated by the maturity of the fabrication process.
Posted on Reply
#67
TheinsanegamerN
AusWolfYep. That only makes the picture worse.

Soon, buying Nvidia instead of AMD will be like buying the 500 HP Ferrari for $100k instead of the 500 HP Mustang for $35k. Or is it like that already?
That's a bit like comparing a NASCAR stock car to an F1 car.
Posted on Reply
#68
AusWolf
TheinsanegamerNThat's a bit like comparing a NASCAR stock car to an F1 car.
Except that NASCAR stock cars look and sound WAAAAY better! :p
Posted on Reply
#69
Valantar
TheinsanegamerNThat's a bit like comparing a NASCAR stock car to an F1 car.
Isn't the classic distinction that a Mustang lets you go fast, but a Ferrari lets you go fast and go around corners without crashing?
Posted on Reply
#70
TheinsanegamerN
ValantarIsn't the classic distinction that a Mustang lets you go fast, but a Ferrari lets you go fast and go around corners without crashing?
Yeah, the Mustangs of old were rocket ships, classic American muscle that went real fast; just don't try to turn. Ferraris were much more track-focused, like most Euro designs. They also had torque, but just enough to spin the tires, not enough to shred them for track drifting. The Mustang followed more the Mercedes SLS Black school of "POWAAAH".

The modern Mustang is closer to the Euro principle of fast and agile, but they still don't hold a candle to a Ferrari.
Posted on Reply
#71
AusWolf
ValantarIsn't the classic distinction that a Mustang lets you go fast, but a Ferrari lets you go fast and go around corners without crashing?
It used to be, but nowadays, I think it's more like the distinction that a Mustang lets you go fast, but a Ferrari lets you go fast and look like a smug d***. :p

Sorry for the off-topic.
Posted on Reply
#72
Valantar
AusWolfIt used to be, but nowadays, I think it's more like the distinction that a Mustang lets you go fast, but a Ferrari lets you go fast and look like a smug d***. :p

Sorry for the off-topic.
I would say that a Mustang manages to do that exact thing just fine, but then that might just be my personal preferences :laugh:
Posted on Reply
#73
AnotherReader
wolfNot everything is some shitty move; the reasoning is pretty clear. The flagship GPU is so large that yields play into this: to make the volume they want to be able to sell given their market share, they can move vastly more chips with relatively minor defects, and start to stockpile ones with no faults/good binning etc. Then, when enough exist to make a product viable, they'll sell it. How shitty of them.
Even with a die as large as the 4090's, the yields would be close to 60% if N5 had the same defect rate as N7. We know that N5 is actually outperforming N7 in defect rate. For the 4080, the yields would be 70%. Put another way, even the 4090 will have more working dies than defective ones. This is a product segmentation move.
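
A quick check of those figures with the same Poisson model discussed earlier in the thread (a sketch under assumed die sizes of ~608 mm² for the 4090's AD102 and ~379 mm² for the 4080's AD103, at the oft-cited 0.09 defects/cm²):

import math

def poisson_yield(die_area_mm2, d0_per_cm2=0.09):
    # Fraction of dice expected to have zero defects.
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100)

print(f"AD102 (~608 mm^2): {poisson_yield(608):.0%}")  # ~58%, close to 60%
print(f"AD103 (~379 mm^2): {poisson_yield(379):.0%}")  # ~71%, about 70%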
Posted on Reply
#74
spnidel
wolfWe get it, lots of you don't care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who do want good RT performance in future purchases, for it not to be an afterthought, a checkbox to tick.

If it's not for you, awesome, but I tire of hearing nObOdY cArEs ABouT rAy TrACinG when clearly people do, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is a gimmick or that 'most' people don't care, good on you!).

Personally I'd love to be able to very strongly consider AMD GPUs, but a prerequisite of that is for them to take RT more seriously and lessen the hit, and they can certainly swing even more buyers their way if they deliver on that, so I eagerly wait to see if the top product has made significant strides.
We get it, lots of you care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who don't want RT in future purchases.

If it's for you, awesome, but I tire of hearing eVeRyBoDy cArEs ABouT rAy TrACinG when clearly people don't, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is not a gimmick or that 'most' people do care, good on you!).

Personally I'd love to be able to very strongly consider NVIDIA GPUs, but a prerequisite of that is for them to take RT less seriously and lessen the power draw, and they can certainly swing even more buyers their way if they deliver on that, so I eagerly wait to see if the top product has made significant strides.
Posted on Reply
#75
skates
spnidelWe get it, lots of you care about RT; you've only been shouting that from the rooftops since Turing's debut. Believe me, we hear you loud and clear.

Would y'all please be mindful that there is a subset of enthusiasts who don't want RT in future purchases.

If it's for you, awesome, but I tire of hearing eVeRyBoDy cArEs ABouT rAy TrACinG when clearly people don't, so please, stop (silly to even ask, I know; this will probably make the vocal among you double down and write me an essay on why RT is not a gimmick or that 'most' people do care, good on you!).

Personally I'd love to be able to very strongly consider NVIDIA GPUs, but a prerequisite of that is for them to take RT less seriously and lessen the power draw, and they can certainly swing even more buyers their way if they deliver on that, so I eagerly wait to see if the top product has made significant strides.
Put me in the don't-care column for RT, just like people will put me in the don't-care column when I get an 8K monitor with DP 2.0, and I'm okay with that; to each their own, but being salty about folks who don't want RT is weird to me.
Posted on Reply