Tuesday, November 15th 2022
AMD Confirms Radeon RX 7900 Series Clocks, Direct Competition with RTX 4080
AMD in its technical presentation confirmed the reference clock speeds of the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. The company also made its first reference to a GeForce RTX 40-series "Ada" product, the RTX 4080 (16 GB), which launches later today. The RX 7900 XTX maxes out the "Navi 31" silicon, featuring all 96 RDNA3 compute units, or 6,144 stream processors, while the RX 7900 XT is configured with 84 compute units, or 5,376 stream processors. The two cards also differ in memory configuration: the RX 7900 XTX gets 24 GB of 20 Gbps GDDR6 across a 384-bit memory interface (960 GB/s), while the RX 7900 XT gets 20 GB of 20 Gbps GDDR6 across a 320-bit interface (800 GB/s).
The RX 7900 XTX comes with a 2300 MHz Game Clock and a 2500 MHz boost clock, whereas the RX 7900 XT comes with a 2000 MHz Game Clock and a 2400 MHz boost clock; of the two, the Game Clock is the more relevant figure. AMD achieves 20 GB of memory on the RX 7900 XT by using ten 16 Gbit GDDR6 memory chips across a 320-bit wide memory bus, created by disabling one of the six 64-bit MCDs, which also subtracts 16 MB from the GPU's 96 MB Infinity Cache, leaving the RX 7900 XT with 80 MB. The slide describing the specs of the two cards compares them to the GeForce RTX 4080, which is the card the two will compete against most directly, especially given their pricing: the RX 7900 XTX is 16% cheaper than the RTX 4080, and the RX 7900 XT is 25% cheaper.
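For anyone who wants to double-check the headline figures, here is a minimal Python sketch of the arithmetic behind them; the 64 stream processors per CU and 16 MB of Infinity Cache per MCD are simply inferred from the numbers quoted above, not separately confirmed specs.

```python
# Sanity check on the figures quoted above (illustrative only).
# Assumptions inferred from the article: 64 stream processors per RDNA3 CU,
# 16 MB of Infinity Cache per 64-bit MCD.

def stream_processors(compute_units: int, sp_per_cu: int = 64) -> int:
    """Total stream processors for a given CU count."""
    return compute_units * sp_per_cu

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(stream_processors(96))      # 6144  (RX 7900 XTX)
print(stream_processors(84))      # 5376  (RX 7900 XT)
print(bandwidth_gb_s(20, 384))    # 960.0 GB/s (XTX, 384-bit)
print(bandwidth_gb_s(20, 320))    # 800.0 GB/s (XT, 320-bit)
print(6 * 16, 5 * 16)             # 96 80 -> Infinity Cache in MB with 6 vs. 5 MCDs
```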
The 4090 Ti is the top-top tier (unless a 4090 Super Ti is coming) with the full-fat chip, and its cost, when released, will reflect that - be sure about that :)
NV is just using that "psychological human error" (read: being an average human) that makes you think that if the GPU isn't the whole chip then "it is crippled", to make even more profit. The emotional, psychological aspect plays a major role, and we can all see it very clearly in this forum.
It makes you change your choice, sometimes completely, against solid data and proven facts.
Every company that respects itself will exploit this 'merit' to the max. NV and Intel especially excel at that exploitation; AMD still has some miles to cover, but it is catching up very well.
If you decide between product A and B according to "what is the best", and you are willing to pay more only to be entitled to "the best" as a treat in itself, then, well, you condemn yourself to a limbo of disappointment.
"...then it's reasonable to expect it to stay as the best for a while"
This is a very naive, childish approach imo.
No one has guaranteed, or will guarantee, you a time frame for being "the best". Expecting such a thing is way outside the scope of a product spec, hence the gap leading to disappointment.
But to each his own, I guess; just please don't use that disappointment to bash any company. That would be bias.
The problem is, you're drawing up an unreasonable scenario. Nobody is saying Nvidia has to choose between either launching a fully enabled chip, or a cut down one. They could easily do both - supplies of either would just be slightly more limited. Instead they're choosing to only sell the cut-down part - which initially must include a lot of chips that could have been the top-end SKU, unless their yields are absolute garbage. Look at reports of Intel's fab woes. What yield rates are considered not economically viable? Even 70% is presented as bad. And a 70% yield doesn't mean 70% usable chips, it means 70% fault-free chips.
A napkin math example: AD102 is a 608 mm² almost-square die. As I couldn't find the specifics online, let's say it's 23 x 26.4 mm (that's 607.2 mm², close enough, but obviously not accurate). Let's plug that into Caly Technologies' die-per-wafer calculator (sadly only on the Wayback Machine these days). On a 300 mm wafer, assuming TSMC's long-reported 0.09 defects/cm² density (which should be roughly applicable for N4, as N4 is a variant of N5, and N5 is said to match N7 defect rates, which were 0.09 several years ago), that results in 87 total dice per wafer, of which ~35 would have defects, and 52 would be defect-free. Given how GPUs are massive arrays of identical hardware, it's likely that all dice with defects are usable in a cut-down form. (A rough code version of this calculation follows the list below.) Let's then assume that half of defect-free dice meet the binning requirements for a fully enabled SKU. That would leave Nvidia with three choices:
- Launch a cut-down flagship consumer SKU at a binning and active block level that lets them use all chips that don't meet binning criteria for a fully enabled chip, and sell all fully enabled chips in higher margin markets (enterprise/workstation etc.) - but also launch a fully enabled consumer SKU later
- Launch a fully enabled consumer SKU and a cut-down SKU at the same time, with the fully enabled SKU being somewhat limited in quantity and taking some supply away from the aforementioned higher margin markets
- Only ever launch a cut-down consumer SKU, leaving fully enabled chips only to other markets
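To make the napkin math above easier to reproduce, here's a minimal Python sketch of the same calculation. It assumes a 23 x 26.4 mm die, a 300 mm wafer with a 3 mm edge exclusion, and a simple Poisson yield model at 0.09 defects/cm²; the Caly calculator likely uses a slightly different die-fitting and defect model, so this lands near, but not exactly on, the 87/52 figures quoted above.

```python
import math

# Rough reproduction of the dies-per-wafer and yield estimate above.
# Assumptions (mine, not the original calculator's): 300 mm wafer, 3 mm edge
# exclusion, Poisson defect model, 0.09 defects/cm^2, 23 x 26.4 mm die.

def dies_per_wafer(die_w_mm: float, die_h_mm: float,
                   wafer_d_mm: float = 300.0, edge_mm: float = 3.0) -> int:
    """Classic approximation: usable wafer area / die area, minus an edge-loss term."""
    d = wafer_d_mm - 2 * edge_mm
    area = die_w_mm * die_h_mm
    return int(math.pi * (d / 2) ** 2 / area - math.pi * d / math.sqrt(2 * area))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.09) -> float:
    """Fraction of dice expected to be defect-free under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

total = dies_per_wafer(23.0, 26.4)                 # ~85 candidate dice
good = round(total * poisson_yield(23.0 * 26.4))   # ~49 defect-free dice
print(total, good, total - good)                   # remainder salvageable as cut-down parts
```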
Nvidia consistently picks the first option among these - the option that goes hard for maximizing profits above all else, while also necessarily including the iffy move of promising "this is the flagship" just to supersede it 6-12 months later. And that? That's a shitty move, IMO. Is it horrible? Of course not. But it's explicitly exploitative and cash-grabby at the expense of customers, which makes it shitty.

That depends on die size and actual yields. As I showed above, with published yields for the process nodes used here, there are still lots of chips that would meet the criteria for fully enabled SKUs. Also remember that that chart for some reason only assumes MSRP or lower rather than the expected and actual reality of prices being MSRP or higher.
Soon, buying Nvidia instead of AMD will be like buying the 500 HP Ferrari for $100k instead of the 500 HP Mustang for $35k. Or is it like that already?
The modern Mustang is closer to the Euro principle of fast and agile, but it still doesn't hold a candle to a Ferrari.
Sorry for the off-topic.
Would y'all please be mindful that there is a subset of enthusiasts who don't want RT in future purchases.
If it's for you, awesome, but I tire of hearing eVeRyBoDy cArEs ABouT rAy TrACinG when clearly people don't, so please, stop (silly to even ask I know, this will probably make the vocal among you double down and write me an essay on why RT is not a gimmick or that 'most' people do care, good on you!).
Personally I'd love to be able to very strongly consider NVIDIA GPUs, but a prerequisite of that is for them to take RT less seriously and lessen the power draw. They can certainly swing even more buyers their way if they deliver on that, so I eagerly wait to see if the top product has made significant strides.