Tuesday, December 3rd 2024

AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

AMD's upcoming Radeon RX 8000 series GPUs based on the RDNA 4 architecture are just around the corner, with rumors pointing to a CES unveiling event. Today, we are learning that the Radeon RX 8800 XT GPU will feature a 220 W TDP, down from the 263 W TDP of its Radeon RX 7800 XT predecessor, according to a listing in the Seasonic wattage calculator. While we expect a better node to be used for making RDNA 4, the efficiency gains reportedly stem primarily from the improved microarchitectural design of the new RDNA generation. The RX 8800 XT is thus expected to bring better performance while lowering power consumption by about 16%. No concrete official figures are known about RDNA 4 performance targets compared to RDNA 3, but with NVIDIA "Blackwell" and, as of today, Intel's Arc "Battlemage" contesting the mid-range, team red must put up a good fight to remain competitive.
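For reference, the arithmetic behind that roughly 16% figure, as a quick sketch assuming the rumored 220 W and the RX 7800 XT's 263 W TDP:

previous_tdp_w = 263  # Radeon RX 7800 XT TDP
rumored_tdp_w = 220   # rumored Radeon RX 8800 XT TDP (unconfirmed)

reduction = (previous_tdp_w - rumored_tdp_w) / previous_tdp_w
print(f"TDP reduction: {reduction:.1%}")  # ~16.3%, i.e. the ~16% cited above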

We reported on the AMD Radeon RX 8800 XT entering mass production this month, with a silicon design that marks a notable departure from previous generations. The RX 8800 XT will reportedly utilize a monolithic chip dubbed "Navi 48," moving away from the chiplet-based approach seen in the current "Navi 31" and "Navi 32" GPUs. Perhaps most intriguing are claims about the card's ray tracing capabilities. Sources suggest the RX 8800 XT will match the NVIDIA GeForce RTX 4080/4080 SUPER in raster performance while delivering a remarkable 45% improvement over the current flagship RX 7900 XTX in ray tracing. However, these claims must first be verified by independent testing, as performance depends heavily on the specific workload; games optimized for either AMD or NVIDIA tend to yield better results on the corresponding vendor's graphics card.
Sources: Seasonic Wattage Calculator, via Tom's Hardware

122 Comments on AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

#26
tussinman
sbaccThe last 10 years, every time a new GPU arch from AMD launches we get these wild, overhyped rumors about a crazy good product, just to get something ranging from merely "okay" at worst to somewhat good at best. But the problem is that even when it's the latter we get (somewhat good), everyone is massively disappointed and the product is labelled as a complete failure.

I'll say let's temper expectations; we are only one month away from actually knowing the product. So maybe this time we can judge it at its real value.
Agreed. I remember a year and a half ago there were all these rumors of the 7700 XT offering 6900 XT-level performance for $350-400.

The card ended up being close to $500 at launch and could barely beat the 6800 non-XT. The 7700 XT ended up having higher power consumption and less VRAM than the 6800 as well...
Posted on Reply
#27
Lycanwolfen
220 watts, nice. The next NVIDIA card, the 6000 series, will be using 2200 watts and will need its own power transformer outside to run it.
Posted on Reply
#28
kapone32
TheinsanegamerNNo it wasn't. Drivers didn't stop mattering just because Polaris existed. By the time of Polaris, CrossFire was a nearly dead technology.

The problem with crossfire (and SLI) was never the interconnects or experience. It was, and always will be, driver support, which fell on AMD/nvidia and was an absolute royal pain to fix for the small marketshare they had. With DX11, the traditional methods of multi GPU became nearly impossible to implement.

DX12 has long supported multi-GPU. It is on game developers now to enable and support it. Nothing "political" about it; game devs don't see the value for the niche base that wants it. It's not on AMD to enable that.
Well, that is your opinion. I enjoyed CrossFire support so much that most of the Games I bought at that time supported CrossFire. Multi-GPU is not the same thing as CrossFire and has no impact on Games. Ashes of the Singularity is the only Game I know of that supports multi-GPU natively. The thing with Polaris was that CrossFire was at the Driver level, so if the Game supported it, it worked, and if not, the other card would basically be turned off.
Posted on Reply
#29
Krit
RTX 4080 performance is better than I thought. But still, it's just a rumor.
That mid-range GPU should cost no more than 500-550€. At 700€ it will not sell in big numbers and market share will not improve! Strike while the iron is red hot, like the 8800 GT did just one year after the 8800 GTX launch.
Posted on Reply
#30
Neo_Morpheus
Vayra86I mean we keep saying RDNA3 is all meh, but realistically, it performs admirably,
Actually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
Posted on Reply
#31
AnotherReader
Neo_MorpheusActually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
That isn't on the games. AMD's compiler is supposed to handle that, but compilers are far from perfect. Game-ready drivers with hand-optimized code would be the way to handle this. Dual issue also has some restrictions; for example, it only applies to instructions with two sources and one destination. Consequently, the common FMA (fused multiply-add) operation is excluded. Other restrictions are listed in the Chips and Cheese article on RDNA 3.
Dual issue opportunities are further limited by available execution units, data dependencies, and register file bandwidth. Operands in the same position can’t read from the same register bank. Another limitation applies to the destination registers, which can’t be both even or both odd.
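To make those restrictions concrete, here is a toy Python model of the pairing rules described above. It is only an illustrative sketch of the stated rules, not AMD's actual scheduler logic or ISA, and the round-robin bank assignment is an assumption:

def register_bank(reg_index, num_banks=4):
    # Assumption: register banks are assigned round-robin by register index.
    return reg_index % num_banks

def can_dual_issue(op_a, op_b):
    # Each op is (dest_reg, [src_regs]) for a single VALU instruction.
    dest_a, srcs_a = op_a
    dest_b, srcs_b = op_b
    # Only instructions with at most two sources and one destination qualify,
    # which is why a full three-source FMA is excluded.
    if len(srcs_a) > 2 or len(srcs_b) > 2:
        return False
    # Operands in the same position must not read from the same register bank.
    for src_a, src_b in zip(srcs_a, srcs_b):
        if register_bank(src_a) == register_bank(src_b):
            return False
    # The two destination registers can't both be even or both be odd.
    if dest_a % 2 == dest_b % 2:
        return False
    return True

# v2 = v0 * v5 paired with v3 = v1 + v6: distinct banks per position, one even and one odd destination
print(can_dual_issue((2, [0, 5]), (3, [1, 6])))     # True
# A three-source FMA (v4 = v0 * v1 + v2) can't pair at all
print(can_dual_issue((4, [0, 1, 2]), (3, [5, 6])))  # False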
Posted on Reply
#32
AcE
Neo_MorpheusActually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
A few games like Starfield did (absolute AMD games), and the performance then was ridiculous compared to even the 4090. But those are only a few games, I can count them on one hand; another example would be Avatar, I think.

I think this will be in the ballpark of the 7900 XT, with RT performance comparable to the 4080 or a bit lower, but we will see soon. Pricing, I expect 500-600, not more.
Posted on Reply
#33
Krit
AnotherReaderGiven the rumoured specifications, 4080 performance is very unlikely. Going by the numbers in the latest GPU review, the 4080 is 42% faster than the 7800 XT at 1440p and 49% faster at 4K. That is too great a gap to be overcome by a 6.7% increase in Compute Units.
It's very unlikely but not impossible!

GTX 1070 was 62% faster than GTX 970
GTX 1080Ti was 76% faster than GTX 980Ti
The 8800 GT was something like ~100% faster, if not even more, compared to its predecessor, the 7800 GT (hard to find actual information in a direct comparison)
Posted on Reply
#34
AnotherReader
KritIt's very unlikely but not impossible!

GTX 1070 was 62% faster than GTX 970
GTX 1080Ti was 76% faster than GTX 980Ti
8800 GT was something like ~ 100% faster or even more if compared to 7800 GT
In all of these cases, the faster successor used a more advanced node than its predecessor. These are the nodes:

GTX 970 and 980 Ti: 28 nm, GTX 1070 and 1080 Ti: 16 nm (first TSMC finfet node)
7800 GT: 110 nm, 8800 GT: 65 nm

RDNA 4 doesn't have the luxury of a smaller node than its predecessor.
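To put a rough number on that, here is a quick sketch using the 42% gap at 1440p and the rumored 6.7% Compute Unit increase, assuming equal clocks; it shows how much per-CU throughput would have to improve for the rumor to hold:

target_speedup = 1.42  # RTX 4080 vs RX 7800 XT at 1440p, per the review numbers cited above
cu_increase = 1.067    # rumored Compute Unit increase for the RX 8800 XT

per_cu_gain = target_speedup / cu_increase
print(f"Required per-CU improvement: {per_cu_gain - 1:.0%}")  # roughly 33%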
Posted on Reply
#35
GodisanAtheist
KritIt's very unlikely but not impossible!

GTX 1070 was 62% faster than GTX 970
GTX 1080Ti was 76% faster than GTX 980Ti
The 8800 GT was something like ~100% faster, if not even more, compared to its predecessor, the 7800 GT (hard to find actual information in a direct comparison)
- Thing is those were all die shrinks (1070/1080Ti was actually a double shrink) back when that really meant something.

The N4P process the 8800 XT is using is just a space/power-optimized version of the N5 process that N31's and N32's GCDs used. It'll help a little bit, maybe 10% additional performance will come from the better process, but it's not going to work miracles by a long shot.
Posted on Reply
#36
oxrufiioxo
AnotherReaderIn all of these cases, the faster successor used a more advanced node than its predecessor. These are the nodes:

GTX 970 and 980 Ti: 28 nm, GTX 1070 and 1080 Ti: 16 nm (first TSMC finfet node)
7800 GT: 110 nm, 8800 GT: 65 nm

RDNA 4 doesn't have the luxury of a smaller node than its predecessor.
The 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was NVIDIA; I don't really think AMD can pull off the same gains on a similar node.
Posted on Reply
#37
AcE
oxrufiioxoThe 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was NVIDIA; I don't really think AMD can pull off the same gains on a similar node.
The 680 to 980 was a bigger chip and a new arch, so not that special. AMD did something comparable with the Radeon VII to the 5700 XT: a smaller chip with comparable performance and better efficiency. Also, the HD 7970 to 290X was a decent jump on the same node with a minor increase in size, same as with the 680 to 980. They all just cook with water, as the saying goes; nobody is doing magic.
Posted on Reply
#38
GodisanAtheist
oxrufiioxoThe 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was NVIDIA; I don't really think AMD can pull off the same gains on a similar node.
- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
Posted on Reply
#39
3valatzy
GodisanAtheist- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
Radeon RX 6700 XT is Navi 22 and is 335 mm². So, compared to Navi 10: 35% faster for 34% larger die area, and 67% more transistors. 12 GB vs 8 GB, too.
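For reference, a quick check of those ratios, assuming the commonly listed figures of 250 mm² / 10.3 billion transistors for Navi 10 and 335 mm² / 17.2 billion for Navi 22 (treat both as approximate):

navi10_mm2, navi10_transistors_b = 250, 10.3   # RX 5700 XT die, as cited above
navi22_mm2, navi22_transistors_b = 335, 17.2   # RX 6700 XT die

print(f"Die area: {navi22_mm2 / navi10_mm2 - 1:.0%} larger")                        # ~34%
print(f"Transistors: {navi22_transistors_b / navi10_transistors_b - 1:.0%} more")   # ~67%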
Posted on Reply
#40
Jism
GodisanAtheist- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
Chiplets are just not ready, unless they find a way to tackle the latencies.

NVIDIA is going huge with their die sizes, at a high cost per wafer; AMD on the other hand is making chiplets, which have a lower failure rate.

I still stick with my 6700 XT. It's one of the few generations that hasn't been locked out of MorePowerTools (from 180 W to 250 W).
Posted on Reply
#41
yfn_ratchet
Smeh... I doubt there's anything in this product stack that would catch my attention. I'd probably be salivating over the blazing-hot sales of RX 7000 when these drop. Can you imagine an RX 7800XT down $75? Because that sounds like a daydream to me.
Posted on Reply
#42
wolf
Better Than Native
I don't think the product will get trashed if the rumours don't all add up to miraculous levels; all AMD need to do is reasonably deliver on the day. They can land another banger; they've done it before and they can do it again, especially since virtually any shortcomings can be forgiven with sharp pricing.

Product launches with XYZ performance and spec characteristics, and a given price. Then, provided there are no straight up bugs or issues, it will be praised, meh'd or trashed based on that. Real tangible metrics, not weighted against lofty rumors.

The exception to this is if the company itself misleads consumers as to expected performance/price.

Some people take it way too personally when a product from their favourite company isn't met with universal praise, when the reality is the vast majority of how the product is perceived was up to said company to get right. And, they need to get it right on day 1, not with price cuts or bug fixes (for example) weeks to months later, the damage is done at launch.
Posted on Reply
#43
oxrufiioxo
wolfI don't think the product will get trashed if the rumours don't all add up to miraculous levels; all AMD need to do is reasonably deliver on the day. They can land another banger; they've done it before and they can do it again, especially since virtually any shortcomings can be forgiven with sharp pricing.

Product launches with XYZ performance and spec characteristics, and a given price. Then, provided there are no straight up bugs or issues, it will be praised, meh'd or trashed based on that. Real tangible metrics, not weighted against lofty rumors.

The exception to this is if the company itself misleads consumers as to expected performance/price.

Some people take it way too personally when a product from their favourite company isn't met with universal praise, when the reality is the vast majority of how the product is perceived was up to said company to get right. And, they need to get it right on day 1, not with price cuts or bug fixes (for example) weeks to months later, the damage is done at launch.
While I agree with that as my own personal view of a product, AMD fanboys will get super hyped over unrealistic rumors over and over again, and then AMD will just do what they do: offer a slightly inferior product at a discount vs. whatever NVIDIA offers in the price range.

I doubt we will see another 4000-series situation from them; that was the last time they offered a killer product at a killer price. Now, will this drop like a rock at retail and eventually be a solid buy? Sure.
Posted on Reply
#44
Punkenjoy
JismChiplets are just not ready, unless they find a way to tackle the latencies.

NVIDIA is going huge with their die sizes, at a high cost per wafer; AMD on the other hand is making chiplets, which have a lower failure rate.

I still stick with my 6700 XT. It's one of the few generations that hasn't been locked out of MorePowerTools (from 180 W to 250 W).
GPUs aren't really sensitive to latency.

The situation is a bit different than on the CPU side.

The two main things that killed RDNA 3 are:
  • Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today to be able to maximise performance. Also, high-power boards cost more to produce than cheaper ones.
  • And most importantly, chiplets are great when they give you a competitive advantage on cost. Unlike CPUs, AMD can't sell RDNA 3 dies to the datacenter market since that spot is taken by CDNA. The added complexity also increases cost, meaning that unless you want to greatly reduce your margin, you have to price those higher.
If the RDNA 3 7900 XTX had been beating the 4090 (at least in raster) by 10-15% minimum, things could have been different. I think AMD was not aggressive enough with RDNA 3 and they ended up getting beaten by Nvidia.

The benefit of doing chiplets was to deliver more silicon at a lower cost. Well, the 4090 is 76.3 billion transistors with a die size of 609 mm², whereas the 7900 XTX has a total of 57.7 billion transistors with a total die size of 529 mm².

Of that, the main die, the GCD, is only 304 mm² and 45 billion transistors.

The right opponent of the 7900 XTX is the 4080 at 45.9 billion transistors: about the same as the GCD alone, plus those much cheaper MCDs on the side. If AMD had gone all out with a 500 mm² GCD, things could have really been different.

Nvidia went all out. AMD didn't, and that is why they lost that generation. The main advantage for Nvidia was that 4090 dies could also be sold to the datacenter and AI markets, while AMD was only focusing on gaming. It's obvious now, but they were set to lose that generation from the start.
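Putting the figures cited above side by side, as a rough sketch (transistor counts in billions, die sizes in mm², all approximate):

ad102_4090   = (76.3, 609)   # RTX 4090: transistors (billions), die size (mm²)
navi31_total = (57.7, 529)   # 7900 XTX: GCD plus six MCDs
navi31_gcd   = (45.0, 304)   # GCD alone, as cited above
ad103_4080_transistors = 45.9

print(f"4090 vs 7900 XTX (total): {ad102_4090[0] / navi31_total[0]:.2f}x the transistors")
print(f"4090 vs Navi 31 GCD alone: {ad102_4090[0] / navi31_gcd[0]:.2f}x")
print(f"4080 vs Navi 31 GCD alone: {ad103_4080_transistors / navi31_gcd[0]:.2f}x")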
Posted on Reply
#45
eidairaman1
The Exiled Airman
TheinsanegamerNNo it wasn't. Drivers didn't stop mattering just because Polaris existed. By the time of Polaris, CrossFire was a nearly dead technology.

The problem with crossfire (and SLI) was never the interconnects or experience. It was, and always will be, driver support, which fell on AMD/nvidia and was an absolute royal pain to fix for the small marketshare they had. With DX11, the traditional methods of multi GPU became nearly impossible to implement.

DX12 has long supported multi-GPU. It is on game developers now to enable and support it. Nothing "political" about it; game devs don't see the value for the niche base that wants it. It's not on AMD to enable that.
Yup, because most game devs suffer from consolitis; consoles are an iGPU/APU on a single mainboard with a CPU, like most mobile devices today, but with dedicated memory, not a dGPU.

I remember when NFSU came out, it was the same graphics quality on the GeForce FX 5200 and the Xbox. If you had a Radeon 9700 Pro, you could max that game out graphically, and it was friggin' beautiful and played excellently on PC.
Posted on Reply
#46
AcE
GodisanAtheistIt was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
You only have so many Maxwell moments, and it all just happened because both companies used suboptimal architectures; that won't happen again because they learn. With Nvidia it was Kepler, which was later beaten by GCN2, and with AMD it was GCN2/3, which was beaten by Maxwell and was too inefficient and bloated. In general, that was true of GCN: all versions performed suboptimally unless you used a low-level API (DX12/VK) or Asynchronous Compute, both of which let the huge engine be used properly, which was especially true of Fury X. It was either that or very high resolutions (for Fury X it was 4K, for example): way too many shaders and a suboptimal DX11 driver that had issues keeping them fed.
JismChiplets is just not ready, unless they find a way to tackle latency's.
The latency cost it performance, but the main issue was that Nvidia is just too rich, too good. Basically, AMD had a 4080-class chip with a 384-bit bus instead of 256-bit and other superfluous parts, versus a huge 4090 chip with way more transistors that AMD could never compete with. If you convert that 5/6 nm chiplet mix into pure 5 nm, it's only about 450-480 mm² vs the 600 mm² AD102, so there was no chance of competing with a smaller GPU. Exactly that size difference was also what was missing in performance, about 20-30%. No surprises and no dark magic here: Nvidia is not doing anything special, just investing more money. It's the upside of concentrating on one product, GPUs, and not just doing them on the side like AMD does; AMD's main business is still CPUs. AMD's GPUs are only very good in the datacenter (Instinct), not in consumer stuff, but they are trying to consolidate that with UDNA, just like Nvidia has done for a long time, at least since Volta.
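For what it's worth, a rough sketch of that 5/6 nm to 5 nm conversion; the GCD/MCD areas are the commonly listed ones, and the N6-to-N5 density scaling factor is an assumption, which is why the result is a range:

gcd_mm2 = 304              # Navi 31 GCD, already on N5
mcd_total_mm2 = 6 * 37.5   # six MCDs on N6
for scale in (1.3, 1.5):   # assumed N6 -> N5 logic density scaling
    equivalent_mm2 = gcd_mm2 + mcd_total_mm2 / scale
    print(f"Scaling {scale}x: ~{equivalent_mm2:.0f} mm² in pure N5 terms")  # ~454-477 mm² vs ~609 mm² AD102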
PunkenjoyThe two main thing that killed RDNA3 is :
  • Increased power usage to move data between the Memory controller die and the main die. Power efficiency is still really important today to be able to maximise performance. Also high power board cost more to produce than cheaper one.
Increased power usage doesn't matter; I already said it could not compete because it was smaller. It competed well with the 4080 and that's it, but it was too expensive to produce for the price. Remember, the 4080 was a much smaller chip with a smaller bus versus the 7900 XTX, which was clearly bigger with a more expensive bus configuration, for MORE money. The 4080 didn't sell well, but it still sold better than the XTX.

The efficiency of RDNA 3 was still good, so that was not the issue. Yes, Nvidia's efficiency was naturally better with pure 5 nm vs the 5/6 nm mix, but not far off.
PunkenjoyIf RDNA3 7900 XTX was beating (at least in raster) the 4090 by 10-15% minimum, things could have been different. I think AMD was not enough aggressive with RDNA3 and they ended up getting beat by Nvidia.
It never will be; AMD is a mixed processor company and Nvidia is purely GPU (nearly, aside from the few small Arm CPUs they make), so of course Nvidia will go all-in, whereas AMD will always be spread across multiple things and more focused on their traditional CPU business. Ryzen is in fact the GeForce of CPUs and has the same (toxic) mindshare at times.
PunkenjoyNvidia went all out. AMD didn't and that is why they lost that generation.
AMD hasn't won against Nvidia in over 15 years, and back then, in HD 5000 times, it only happened because GTX 400 was a hot and loud disaster. Funnily enough, that was a mid-size chip on a new node beating Nvidia's huge chips on an older node (GTX 200 and 400). The only other small "win" they had was with the R9 290X, which was very temporary: they were a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly, so I don't count that very temporary win as a W for AMD. In other words, the GPU branch was still named "ATI" the last time AMD had a W against Nvidia, and the HD 5850/5870 sold out as well.
Posted on Reply
#47
nguyen
PunkenjoyGPUs aren't really sensitive to latency.

The situation is a bit different than on the CPU side.

The two main things that killed RDNA 3 are:
  • Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today to be able to maximise performance. Also, high-power boards cost more to produce than cheaper ones.
  • And most importantly, chiplets are great when they give you a competitive advantage on cost. Unlike CPUs, AMD can't sell RDNA 3 dies to the datacenter market since that spot is taken by CDNA. The added complexity also increases cost, meaning that unless you want to greatly reduce your margin, you have to price those higher.
If the RDNA 3 7900 XTX had been beating the 4090 (at least in raster) by 10-15% minimum, things could have been different. I think AMD was not aggressive enough with RDNA 3 and they ended up getting beaten by Nvidia.

The benefit of doing chiplets was to deliver more silicon at a lower cost. Well, the 4090 is 76.3 billion transistors with a die size of 609 mm², whereas the 7900 XTX has a total of 57.7 billion transistors with a total die size of 529 mm².

Of that, the main die, the GCD, is only 304 mm² and 45 billion transistors.

The right opponent of the 7900 XTX is the 4080 at 45.9 billion transistors: about the same as the GCD alone, plus those much cheaper MCDs on the side. If AMD had gone all out with a 500 mm² GCD, things could have really been different.

Nvidia went all out. AMD didn't, and that is why they lost that generation. The main advantage for Nvidia was that 4090 dies could also be sold to the datacenter and AI markets, while AMD was only focusing on gaming. It's obvious now, but they were set to lose that generation from the start.
You mean Nvidia went all out by reserving the fully enabled AD102 chip (with 33% more L2 cache than the 4090) for the 7,000 USD Quadro GPU? :roll:

AMD could have gone all out with a 500 mm² GCD and performance would barely change, since they hit a bandwidth ceiling (they would need a 512-bit bus, which would make things more complicated). If it were as easy as making a bigger GCD, then AMD would have done so within these past 2 years instead of abandoning the high end and going for the mainstream segment with the 8800 XT.
Posted on Reply
#48
AcE
nguyenAMD could have gone all out with a 500 mm² GCD and performance would barely change, since they hit a bandwidth ceiling (they would need a 512-bit bus, which would make things more complicated). If it were as easy as making a bigger GCD, then AMD would have done so within these past 2 years instead of abandoning the high end and going for the mainstream segment with the 8800 XT.
Actually, that's not even needed; they could've increased L2 cache sizes instead and not used 512-bit, but AMD always stops at about 500-550 mm² (only Fury X was an exception), so this was never in the cards. The only other possibility was going fully monolithic; then you get a few more shaders and better latency because you avoid the interconnect downsides, but that's probably still not enough to match the 4090, let alone the full chip, which Nvidia would 100% have released if AMD had been too strong.
Posted on Reply
#49
AnotherReader
oxrufiioxoThe 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was NVIDIA; I don't really think AMD can pull off the same gains on a similar node.
Both Maxwell and, to a lesser extent, the much-derided Turing were excellent jumps on the same node. AMD has also done so in the past with the HD 4000 series and RDNA 2. Even Fury X and the 290X saw fairly significant gains over their predecessors without changing the node.
Posted on Reply
#50
GodisanAtheist
3valatzyRadeon RX 6700 XT is Navi 22 and is 335 mm2. So, compared to Navi 10 - 35% faster for 34% larger die area, and 67% more transistors. 12 GB vs 8 GB, too..
-Whups, you're right, got the numbers mixed up in my head.
Posted on Reply