Tuesday, December 3rd 2024

AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

AMD's upcoming Radeon RX 8000 series GPUs based on the RDNA 4 architecture are just around the corner, with rumors pointing to a CES unveiling. Today, courtesy of Seasonic's wattage calculator, we are learning that the Radeon RX 8800 XT GPU will feature a 220 W TDP, down from the 263 W TDP of its Radeon RX 7800 XT predecessor. While we expect RDNA 4 to be made on a better node, the efficiency gains stem primarily from the improved microarchitectural design of the new RDNA generation. The RX 8800 XT is expected to deliver better performance while cutting power consumption by 16%. No concrete official figures are known about RDNA 4 performance targets compared to RDNA 3, but with NVIDIA "Blackwell" and, as of today, Intel Arc "Battlemage" shaping the competitive mid-range landscape, team red must put up a good fight to remain competitive.
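As a quick sanity check, the claimed reduction follows directly from the two TDP figures above (a minimal illustrative calculation, nothing more):

```python
# Quick check of the claimed power reduction, using the two TDP values above.
rx_7800_xt_tdp = 263  # W
rx_8800_xt_tdp = 220  # W

reduction = (rx_7800_xt_tdp - rx_8800_xt_tdp) / rx_7800_xt_tdp
print(f"TDP reduction: {reduction:.1%}")  # -> 16.3%, matching the ~16% figure
```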

We reported on the AMD Radeon RX 8800 XT entering mass production this month, with a silicon design that marks a notable departure from previous generations. The RX 8800 XT will reportedly utilize a monolithic chip dubbed "Navi 48," moving away from the chiplet-based approach seen in the current "Navi 31" and "Navi 32" GPUs. Perhaps most intriguing are claims about the card's ray tracing capabilities. Sources suggest the RX 8800 XT will match the NVIDIA GeForce RTX 4080/4080 SUPER in raster performance while delivering a remarkable 45% improvement over the current flagship RX 7900 XTX in ray tracing. However, these claims must first be backed by independent testing, as results vary from case to case: games optimized for either AMD or NVIDIA tend to yield better results on the favored vendor's graphics cards.
Sources: Seasonic Wattage Calculator, via Tom's Hardware

66 Comments on AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

#51
Nhonho
AleksandarKRadeon RX 8000 series GPUs
Can the TPU team review the image quality of videos encoded in AV1 with the new VGAs from Intel, AMD and Nvidia? Please...

Is it possible to encode videos in 2 passes using the GPU? If so, which app does it?
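For what it's worth, classic two-pass encoding is handled by the encoder rather than the app; with ffmpeg it works with software encoders such as libaom-av1, while GPU encoders (av1_nvenc, av1_qsv, av1_amf) generally don't accept -pass 1/2 and instead expose their own rate-control modes (NVENC, for instance, has a -multipass option). A minimal sketch, assuming ffmpeg with libaom-av1 on a POSIX system; the file names are hypothetical:

```python
# Sketch: classic two-pass AV1 encode via ffmpeg's software encoder
# (libaom-av1). Assumes ffmpeg is on PATH and a POSIX null device;
# input.mkv / output.mkv are placeholder file names.
import subprocess

src, dst, bitrate = "input.mkv", "output.mkv", "6M"

# Pass 1: analysis only; audio disabled, video output discarded.
subprocess.run(
    ["ffmpeg", "-y", "-i", src, "-c:v", "libaom-av1", "-b:v", bitrate,
     "-pass", "1", "-an", "-f", "null", "/dev/null"],
    check=True,
)

# Pass 2: the real encode, reusing the stats file written by pass 1.
subprocess.run(
    ["ffmpeg", "-y", "-i", src, "-c:v", "libaom-av1", "-b:v", bitrate,
     "-pass", "2", dst],
    check=True,
)
```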
Posted on Reply
#52
Vayra86
kapone32Well that is your opinion. I enjoyed CrossFire support so much that most of the games I bought at that time supported CrossFire. Multi GPU is not the same thing as CrossFire and has no impact on games. Ashes of the Singularity is the only game I know of that supports multi GPU natively. The thing with Polaris was that CrossFire worked at the driver level, so if the game supported it, it worked, and if not, the other card would basically be turned off.
That 'thing' was bog standard for every CrossFire and SLI capable GPU. Which meant that most of the time you would clearly notice when you were actually running on one card, and if you didn't, it was clearly audible too, because of all the extra heat and noise :)

Nvidia's driver even lets me pick SFR or AFR. Not that it matters though; with no game support, you are looking at something arcane that doesn't work, or literally renders half the screen.
Posted on Reply
#53
Ravenlord
Rumors about AMD GPUs never pan out, so this vision is more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision, and could end up meaning a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized.
Posted on Reply
#54
3valatzy
NhonhoCan the TPU team review the image quality of videos encoded in AV1 with the new VGAs from Intel, AMD and Nvidia? Please...

Is it possible to encode videos in 2 passes using the GPU? If so, which app does it?
That, but also standard image quality testing, in order to see which brand of cards cheats with the textures and which does not (yeah, I am looking at you, Nvidia :D)

RavenlordRumors about AMD GPUs never pan out, so this vision is more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision, and could end up meaning a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized.
It is about smart engineering and AI. I know there is no smart engineering at AMD, but maybe this will be the first time they implement it.
It's called undervolting; it is pretty simple and straightforward, and can easily be done at the factory. The trade-off: you lose 2% of performance, but your card's TDP drops from 300 W to a sane 180 W.
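Taking that comment's numbers at face value, the implied efficiency gain works out as follows (illustrative arithmetic only; the 2% / 180 W figures are the commenter's claim, not measurements):

```python
# Illustrative arithmetic for the claimed undervolting trade-off above
# (hypothetical numbers from the comment, not measured data).
stock_power, uv_power = 300, 180  # W
uv_perf = 0.98                    # claimed 2% performance loss

perf_per_watt_gain = uv_perf / (uv_power / stock_power)
print(f"Perf/W improvement: {perf_per_watt_gain:.2f}x")  # -> 1.63x
```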
Posted on Reply
#55
AnotherReader
RavenlordRumors about AMD GPUs never pan out, so this vision is more like a dream card than a real one. Even the rumor about dual 8-pin connectors doesn't fit this vision, and could end up meaning a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never materialized.
Two 8-pin connectors are used for the 7700 XT as well, which isn't even a 250 W card.
Posted on Reply
#56
kapone32
Vayra86That 'thing' was bog standard for every CrossFire and SLI capable GPU. Which meant that most of the time you would clearly notice when you were actually running on one card, and if you didn't, it was clearly audible too, because of all the extra heat and noise :)

Nvidia's driver even lets me pick SFR or AFR. Not that it matters though; with no game support, you are looking at something arcane that doesn't work, or literally renders half the screen.
There were about four settings for CrossFire. I used it for the life of Total War, from Medieval 2 to Three Kingdoms, when they changed the engine. You could still enable CrossFire in the script, but it did nothing for the engine. At that point I started giving up on multi GPU and started focusing on other things to put in my other PCIe slots. I guess the heat you are talking about is true if you don't have a case that is up to snuff, but we are talking about two RX 570/580 combos that might have pulled 150 W each. Plus they were inexpensive and popular.
Posted on Reply
#57
3valatzy
AnotherReaderTwo 8-pin connectors are used for the 7700 XT as well, which isn't even a 250 W card.
That is a design mistake, because you shouldn't put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
Posted on Reply
#58
kapone32
I have a 7600 XT from ASRock. It is not a power hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.
Posted on Reply
#59
3valatzy
kapone32I have a 7600 XT from ASRock. It is not a power hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.
RX 7600 is either single 6-pin or single 8-pin.
Posted on Reply
#60
wNotyarD
3valatzyThat is a design mistake, because you shouldn't put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
Why is it a mistake? It doesn't take up much more space than an 8+6 pin combination, and it makes inventory and assembly way easier for them, since they don't have to keep tabs on two different parts.
Posted on Reply
#61
AnotherReader
3valatzyThat is a design mistake, because you shouldn't put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
Surely, not every AIB miscalculated. It's simple: 228 W would require 153 W from the 8-pin connector on top of the 75 W from the PCIe slot, exceeding the connector's 150 W rating. In practice, many GPUs draw minuscule amounts from the PCIe slot. Given how many people use third-rate PSUs, it's prudent to avoid drawing more than 150 W from an 8-pin connector. Two 8-pin connectors make sense when you look at it from that perspective.
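Spelling out the budget math in that argument (a sketch using the standard 75 W slot and 150 W 8-pin ratings; the 228 W board power is the example figure from the comment):

```python
# Power-budget sketch for the argument above, using standard ratings:
# PCIe slot 75 W, each 8-pin connector 150 W.
SLOT_W, EIGHT_PIN_W = 75, 150

board_power = 228  # W, example figure from the comment
needed_from_connectors = board_power - SLOT_W      # 153 W
print(needed_from_connectors > EIGHT_PIN_W)        # True: one 8-pin exceeds spec
print(needed_from_connectors > 2 * EIGHT_PIN_W)    # False: two 8-pins have headroom
```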
Posted on Reply
#62
3valatzy
wNotyarDWhy is it a mistake? It doesn't make much difference in space utilization from an 8+6 pin combination. It makes keeping track of inventory and assembly way easier on them, not having to keep tabs on two different parts.
It is a tough PSU requirement. It should be double 6-pin. Not all PSUs have those PCIe power connectors, which are ugly, space consuming, and can't be hidden inside the case.
Posted on Reply
#63
Onasi
@3valatzy
Pretty much every decent-ish PSU will come with at least two 6+2 pin PCI-e cables. Not really an issue.
I mean, if you want easy, simple, one solution to use on any card, well, 12V-2x6 is there to solve that, but I thought nobody liked it because it burns down your dog and kicks your house or something. Even though the revised connector is totally fine.
Posted on Reply
#64
Tomorrow
AnotherReaderGiven the rumoured specifications, 4080 performance is very unlikely. Going by the numbers in the latest GPU review, the 4080 is 42% faster than the 7800 XT at 1440p and 49% faster at 4K. That is too great a gap to be overcome by a 6.7% increase in Compute Units.
I even have doubts about whether it can reach 4070 Ti Super/7900 XT level raster, because if it only has 64 CUs (TPU's placeholder page even says 56), it will be difficult to close the gap to an 84 CU card and then surpass it by 30% (the difference between the 7800 XT and the 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S level performance.
JustBenchingIf it's 45% faster in RT vs the 7900 XTX, that makes it basically a 4080/4080S. Since the raster is also similar, I'm calling it: $799 MSRP.
Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.
AnotherReaderI agree. This is how the hype train starts rolling and then the inevitable derailment leads to bashing of the actual product, even if it's undeserved. The Compute Unit count and rumoured clock speeds suggest performance in the ballpark of the 7900 XT, not the 4080, and certainly not the 7900 XTX, which is 20% faster than the 7900 XT at 4K.
Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.
AcEAMD hasn't won against Nvidia in over 15 years. The only other small "win" they had was with the R9 290X, which was very temporary; they were a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly. I don't count that very temporary win as a W for AMD.
They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia released the 780 Ti a month later (the first Titan had launched earlier in 2013), those cards were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also for $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
Posted on Reply
#65
AnotherReader
TomorrowI even have doubts about whether it can reach 4070 Ti Super/7900 XT level raster, because if it only has 64 CUs (TPU's placeholder page even says 56), it will be difficult to close the gap to an 84 CU card and then surpass it by 30% (the difference between the 7800 XT and the 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S level performance.

Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.

Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.

They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia released the 780 Ti a month later (the first Titan had launched earlier in 2013), those cards were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also for $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
Yes, matching the 7900 XT's rasterization performance, in the absence of any increase in per-compute-unit performance, would require high clocks: 3 GHz would be enough, but that's rather unlikely with a 220 W TDP. We know that RDNA 3.5 doubled the number of texture samplers per compute unit, and that may allow a greater than expected performance increase in some cases. In any case, at least the rumours about 7900 XTX level rasterization performance seem ridiculous. I'm also uncertain whether they can match Nvidia in ray tracing performance after being behind for so long; the most likely case would be a big improvement over RDNA 3, but a smaller gap to Ada.
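As a back-of-the-envelope check of that 3 GHz figure (a sketch that assumes performance scales linearly with CU count × clock, which real games only approximate; baseline figures are approximate RX 7800 XT specs):

```python
# Back-of-the-envelope check of the ~3 GHz figure above. Assumes perf
# scales linearly with CU count x clock (an idealization), with
# approximate RX 7800 XT baseline figures.
base_cus, base_clock = 60, 2.43  # RX 7800 XT: CUs, boost clock in GHz (approx.)
target_ratio = 1.30              # 7900 XT is ~30% faster (per the thread)
navi48_cus = 64                  # rumored CU count

required_clock = base_clock * target_ratio * base_cus / navi48_cus
print(f"Required boost clock: {required_clock:.2f} GHz")  # -> ~2.96 GHz
```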

As for the 290X, it was leading the 780 Ti in TPU's suite before the sun had set on 28 nm being the latest node for GPUs.
Posted on Reply
#66
HD64G
3valatzyThat is a design mistake, because you shouldn't put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
An OC can push those GPUs past 300 W, so this configuration is the safest for current regulation.
Posted on Reply