Tuesday, December 3rd 2024
AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency
AMD's upcoming Radeon RX 8000 series GPUs based on the RDNA 4 architecture are just around the corner, with rumors pointing to a CES unveiling. Today, we are learning that the Radeon RX 8800 XT GPU will feature a 220 W TDP, down from the 263 W TDP of its Radeon RX 7800 XT predecessor, according to the Seasonic wattage calculator. While we expect better nodes to be used for making RDNA 4, the efficiency gains stem primarily from the improved microarchitectural design of the new RDNA generation. The RX 8800 XT is expected to bring better performance while lowering power consumption by around 16%. While no concrete official figures are known about RDNA 4 performance targets compared to RDNA 3, if AMD plans to hold its ground in the mid-range landscape against NVIDIA "Blackwell" and, as of today, Intel with Arc "Battlemage," team red must put up a good fight to remain competitive.
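As a quick sanity check on that 16% figure, here is a minimal back-of-the-envelope sketch in Python, using only the two rumored TDP values quoted above:

```python
# Rumored TDPs from the report above.
rx_7800_xt_tdp = 263  # W, RDNA 3 predecessor
rx_8800_xt_tdp = 220  # W, rumored RDNA 4 card

# Relative power reduction, new vs. old.
reduction = (rx_7800_xt_tdp - rx_8800_xt_tdp) / rx_7800_xt_tdp
print(f"Power reduction: {reduction:.1%}")  # -> Power reduction: 16.3%
```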
We reported on the AMD Radeon RX 8800 XT entering mass production this month, with a silicon design that marks a notable departure from previous generations. The RX 8800 XT will reportedly utilize a monolithic chip dubbed "Navi 48," moving away from the chiplet-based approach seen in the current "Navi 31" and "Navi 32" GPUs. Perhaps most intriguing are claims about the card's ray tracing capabilities. Sources suggest the RX 8800 XT will match the NVIDIA GeForce RTX 4080/4080 SUPER in raster performance while delivering a remarkable 45% improvement over the current flagship RX 7900 XTX in ray tracing. However, these claims must first be backed by independent testing, as performance improvements depend on the specific workload: games optimized for either AMD or NVIDIA yield better results on the favored vendor's graphics card.
Sources:
Seasonic Wattage Calculator, via Tom's Hardware
122 Comments on AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency
The card ended up being close to $500 at launch and could barely beat the 6800 non-XT. The 7700 XT ended up having higher power consumption and less VRAM than the 6800 as well.
That mid-range GPU should cost no more than 500-550€. At 700€ it will not sell in big numbers and market share will not improve! Strike while the iron is red hot, like the 8800 GT did just one year after the 8800 GTX launch.
For example, it's supposed to issue 2 instructions per clock, but as said, I don't think that was ever really exploited.
I think this will be in the ballpark of the 7900 XT, with RT performance comparable to the 4080 or a bit lower, but we will see soon. Pricing-wise, I expect $500-600, not more.
GTX 1070 was 62% faster than GTX 970
GTX 1080 Ti was 76% faster than GTX 980 Ti
8800 GT was something like ~100% faster, if not more, compared to its predecessor, the 7800 GT (hard to find actual information in a direct comparison)
GTX 970 and 980 Ti: 28 nm; GTX 1070 and 1080 Ti: 16 nm (TSMC's first FinFET node)
7800 GT: 110 nm, 8800 GT: 65 nm
RDNA 4 doesn't have the luxury of a smaller node than its predecessor.
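For what it's worth, those "X% faster" figures are simple relative uplifts; here is a minimal sketch, with placeholder index numbers rather than measured results:

```python
# Generational uplift as a relative performance ratio.
# The index values below are placeholders for illustration, not benchmarks.
def uplift(old_perf: float, new_perf: float) -> float:
    """Fraction by which the newer card outperforms the older one."""
    return new_perf / old_perf - 1

print(f"{uplift(100, 162):.0%}")  # 62% -> GTX 970 to GTX 1070
print(f"{uplift(100, 176):.0%}")  # 76% -> GTX 980 Ti to GTX 1080 Ti
```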
The N4P process the 8800 XT is using is just a space/power-optimized version of the N5 process that N31's and N32's GCDs used. It'll help a little bit, maybe 10% additional performance will come from the better process, but it's not going to work miracles by a long shot.
It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
Nvidia is going huge with their die sizes, at a high cost per wafer; AMD, on the other hand, is making chiplets, which have a lower failure rate.
I still stick with my 6700 XT. It's one of the few generations that hasn't been locked out of MorePowerTool (which lets me raise the power limit from 180 W to 250 W).
A product launches with XYZ performance and spec characteristics at a given price. Then, provided there are no straight-up bugs or issues, it will be praised, meh'd, or trashed based on that: real, tangible metrics, not weighed against lofty rumors.
The exception to this is if the company itself misleads consumers as to expected performance/price.
Some people take it way too personally when a product from their favourite company isn't met with universal praise, when the reality is that the vast majority of how the product is perceived was up to said company to get right. And they need to get it right on day 1, not with price cuts or bug fixes (for example) weeks to months later; the damage is done at launch.
I doubt we will see another 4000 series situation from them; that was the last time they offered a killer product at a killer price. Now, will this drop like a rock at retail and eventually become a solid buy? Sure.
The situation is a bit different from the CPU side.
The two main things that killed RDNA3 are:
- Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today for maximizing performance, and high-power boards cost more to produce than cheaper ones.
- And most importantly, chiplets are great when they give you a competitive advantage on cost. Unlike on the CPU side, AMD can't sell RDNA3 dies to the datacenter market since that spot is taken by CDNA. The added complexity also increases cost, meaning that unless you want to greatly reduce your margins, you have to price those parts higher.
If the RDNA3 7900 XTX had beaten the 4090 by at least 10-15% (in raster, at minimum), things could have been different. I think AMD was not aggressive enough with RDNA3, and they ended up getting beaten by Nvidia. The benefit of doing chiplets was to deliver more silicon at a lower cost. Well, the 4090 is 76.3 billion transistors with a die size of 609 mm², whereas the 7900 XTX has a total of 57.7 billion transistors with a total die size of 529 mm².
Of that, the main die, the GCD, is only 304 mm² and around 45 billion transistors.
The right opponent of the 7900 XTX is the 4080 at 45.9 billion transistors: about the same as the main die, plus those much cheaper MCDs on the side. If AMD had gone all out with a 500 mm² GCD, things could really have been different.
Nvidia went all out. AMD didn't, and that is why they lost that generation. Nvidia's main advantage was that 4090 dies could also be sold into the datacenter and AI markets, while AMD's GCD was focused only on gaming. It's obvious now, but they were set to lose that generation from the start.
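To make the cost argument above concrete, here is a rough sketch of why smaller dies are cheaper per good unit. The wafer price and defect density below are illustrative assumptions, not known TSMC figures, and the yield uses a simple Poisson model:

```python
import math

WAFER_DIAMETER = 300     # mm, standard wafer size
DEFECT_DENSITY = 0.0005  # defects/mm^2 -- illustrative assumption
WAFER_COST = 17000       # USD per N5-class wafer -- illustrative assumption

def dies_per_wafer(die_area: float) -> int:
    """Rough geometric estimate of whole dies fitting on a wafer."""
    radius = WAFER_DIAMETER / 2
    return int(math.pi * radius**2 / die_area
               - math.pi * WAFER_DIAMETER / math.sqrt(2 * die_area))

def cost_per_good_die(die_area: float) -> float:
    """Cost per functional die under a Poisson yield model."""
    yield_rate = math.exp(-DEFECT_DENSITY * die_area)
    return WAFER_COST / (dies_per_wafer(die_area) * yield_rate)

for name, area in [("AD102 (RTX 4090), 609 mm^2", 609),
                   ("Navi 31 GCD, 304 mm^2", 304)]:
    print(f"{name}: ~${cost_per_good_die(area):.0f} per good die")
```

Under these made-up numbers, the big monolithic die costs roughly 2.5x more per good unit, which is the whole premise of the chiplet bet; the point above is that the bet only pays off if the resulting product is actually competitive.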
I remember when NFSU came out: the graphics quality was about the same between the GeForce FX 5200 and the Xbox, but if you had a Radeon 9700 Pro, you could max that game out graphically, and it was friggin' beautiful and played excellently on PC.
The efficiency of RDNA 3 was still good, so that was not the issue. Yes, Nvidia's efficiency was naturally better with pure 5 nm vs AMD's 5/6 nm mix, but not far off. They will never be equal: AMD is a mixed processor company and Nvidia is purely GPU (well, nearly, aside from the few small ARM CPUs they make), so of course Nvidia will go all-in, whereas AMD will always be spread across multiple things, with more focus on its traditional CPU business. Ryzen is in fact the GeForce of CPUs and has the same (toxic) mind share at times. AMD hasn't won against Nvidia in over 15 years, and back in HD 5000 times it only happened because GTX 400 was a hot and loud disaster. Funny enough, that was a mid-size chip on a new node beating Nvidia's huge chip, as well as Nvidia's older huge chips on an older node (GTX 200 and 400). The only other small "win" they had was with the R9 290X, and that was very temporary: it was a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came fast, so I don't count that very temporary win as a W for AMD. In other words, the GPU branch was still named "ATI" the last time AMD had a W against Nvidia, and the HD 5850/5870 sold out as well.
AMD could have gone all out with a 500 mm² GCD and performance would barely have changed, since they hit a bandwidth ceiling (they would need a 512-bit bus, which would make things more complicated). If it were as easy as making a bigger GCD, then AMD would have done so within these past 2 years instead of abandoning the high end and going for the mainstream segment with the 8800 XT.
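For context on that bandwidth ceiling, here is a quick sketch of how bus width and memory speed translate into peak bandwidth. The 384-bit/20 Gbps figures match the shipping 7900 XTX; the 512-bit configuration is the hypothetical from the comment above:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bus width x per-pin data rate, in GB/s."""
    return bus_width_bits * data_rate_gbps / 8  # 8 bits per byte

print(peak_bandwidth_gb_s(384, 20))  # 960.0 GB/s -- 7900 XTX as shipped
print(peak_bandwidth_gb_s(512, 20))  # 1280.0 GB/s -- hypothetical 512-bit card
```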