Monday, January 29th 2024

Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

We've known since August 2023 that AMD is rumored to be retreating from the enthusiast graphics segment with its next-generation RDNA 4 graphics architecture, which means we likely won't see successors to the RX 7900 series squaring off against the upper end of NVIDIA's fastest GeForce RTX "Blackwell" series. What we'll get instead is a product stack closely resembling that of the RDNA-based RX 5000 series, with its top part providing a highly competitive price-performance mix around the $400 mark. A more recent report by Moore's Law is Dead sheds more light on this part.

Apparently, the top Radeon RX SKU based on the next-gen RDNA4 graphics architecture will offer performance comparable to that of the current RX 7900 XTX, but at less than half its price (around the $400 mark). It is also expected to achieve this performance target using smaller, simpler silicon with significantly lower board cost, which is what makes that price possible. What's more, there could be energy efficiency gains from the switch to a newer 4 nm-class foundry node and from the RDNA4 architecture itself, which could hit its performance target with fewer compute units than the RX 7900 XTX's 96.
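For a sense of scale: the 64-CU Navi 48 configuration floated in the comments below would have to close a one-third CU deficit against the 96-CU RX 7900 XTX through clocks and per-CU throughput. A back-of-the-envelope sketch, where the 64-CU figure is a rumor and the 25% clock bump is a purely illustrative assumption:

```cpp
// Back-of-the-envelope: how much faster each RDNA4 CU must be for a
// smaller part to match the 96-CU RX 7900 XTX. The 64-CU figure is a
// rumor quoted in the comments below, not a confirmed spec.
#include <cstdio>

int main()
{
    const double xtx_cus   = 96.0;
    const double rdna4_cus = 64.0;  // rumored Navi 48 configuration

    // Required per-CU throughput uplift (clock speed and IPC combined).
    const double uplift = xtx_cus / rdna4_cus;
    printf("Per-CU uplift needed: %.2fx\n", uplift);  // 1.50x

    // Illustrative split: with a 25% clock bump, the remaining gain
    // would have to come from architectural (IPC) improvements.
    printf("IPC gain needed at +25%% clocks: %.2fx\n", uplift / 1.25);  // 1.20x
    return 0;
}
```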
When it came out, the RX 5700 XT offered an interesting performance proposition, beating the RTX 2070 and forcing NVIDIA to refresh its product stack with the RTX 20-series SUPER, including the RTX 2070 SUPER. Things could go down slightly differently with RDNA4. Back in 2019, ray tracing was a novelty, and AMD could surprise NVIDIA in the performance segment even without it. There is no such advantage now that ray tracing is relevant, so AMD could instead count on timing its launch ahead of the Q4-2024 debut of the RTX 50-series "Blackwell."
Sources: Moore's Law is Dead (YouTube), TweakTown

292 Comments on Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

#251
Dr. Dro
kapone32: I was talking about the 8700G and 8600G. What you are talking about is really a non-issue, as 3600s were not expensive when the 6500 XT launched. You could even use a 3100 and get PCIe 4.0 support. I bought the 6500 XT at launch for $229, at a time when the 6600 was over $600. The main thing with the 6500 XT is FreeSync support on 4K TVs.
There's no point in pairing a 6500 XT with an 8700G imho; the entire point of paying the premium for any of those is to use the advanced RDNA 3 iGPU, which has about the same level of performance as your average Navi 24 dGPU anyway.

www.techpowerup.com/gpu-specs/radeon-780m.c4020

Even faster if you have tight and fast RAM
#252
kapone32
Dr. Dro: There's no point in pairing a 6500 XT with an 8700G imho; the entire point of paying the premium for any of those is to use the advanced RDNA 3 iGPU, which has about the same level of performance as your average Navi 24 dGPU anyway.

www.techpowerup.com/gpu-specs/radeon-780m.c4020

Even faster if you have tight and fast RAM
I was talking about reviews of the 8700G and 8600G; a lot of them like to compare the iGPU to the 6500 XT, and you should check again. It costs about three times as much to build an AM5 iGPU system as it does to get a 6500 XT for your AM4 PC, especially with fast RAM. RAM is one of the most expensive items relative to performance in the PC space.
#253
Dr. Dro
kapone32: I was talking about reviews of the 8700G and 8600G; a lot of them like to compare the iGPU to the 6500 XT, and you should check again. It costs about three times as much to build an AM5 iGPU system as it does to get a 6500 XT for your AM4 PC, especially with fast RAM. RAM is one of the most expensive items relative to performance in the PC space.
On that we'll agree; that's way more sensible.
#254
Patriot
3valatzy: Can't they be connected in something like Hybrid CrossFire to boost the overall graphics performance?
No, this is not 2005.

When MS released DX12, it killed SLI/CrossFire for gaming.
There are one or two benchmark games that utilize it, and that is it.
DX12 makes it so you can mix and match AMD and NVIDIA GPUs in a multi-GPU config...
but it also puts all of the effort of making it work on the game programmers.

SLI and CrossFire worked because AMD and NVIDIA supported them.
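To illustrate what "the effort lands on the game programmers" means in practice, here is a rough sketch of the enumeration step of DX12's explicit multi-adapter model against the standard DXGI/D3D12 headers, not production code: the API will hand an app a device per GPU regardless of vendor, but everything after that (splitting work, synchronizing, copying results) is the engine's problem.

```cpp
// Minimal sketch: enumerate every GPU in the system (any vendor) and create
// a D3D12 device on each -- the part DX12 hands over to the game programmer.
// Windows-only; links against dxgi.lib and d3d12.lib.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    // DXGI lists AMD and NVIDIA adapters side by side; nothing here links
    // them together -- distributing work across them is the app's job.
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            printf("GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    // From here on, the engine must split frames or passes between devices
    // and copy results across itself -- no driver does it automatically.
    return 0;
}
```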
#255
3valatzy
Patriot: No, this is not 2005.
AMD acquired ATi in Q3 2006, and the Fusion (APU) project's execution began even later.
So, Hybrid CrossFire was impossible in 2005.
The first-generation desktop and laptop APU, codenamed Llano, was announced on 4 January 2011.

About the Radeon RX 8700 XT:
Rumoured specifications state a 256-bit memory bus with old GDDR6, made on the old TSMC 5 nm-class process (labeled N4).
Performance around the RTX 4070 Ti to RTX 4070 Ti SUPER.
Navi 48 ~ 300-350 mm², 256-bit, GDDR6, 5 nm, ~RTX 4070 Ti.

About the Radeon RX 8600 XT:
Navi 44 ~ 200-210 mm², 128-bit, GDDR6, 5 nm, ~Radeon RX 7700 XT.
Navi 44 is the new version of Navi 10, Navi 23, and Navi 33.

www.notebookcheck.net/RDNA-4-Navi-48-and-Navi-44-GPUs-leak-with-details-of-performance-memory-spec-and-more.802603.0.html
www.digitaltrends.com/computing/amd-rdna-4-news-release-date-price-rumors/
#256
Patriot
3valatzy: AMD acquired ATi in Q3 2006, and the Fusion (APU) project's execution began even later.
So, Hybrid CrossFire was impossible in 2005.
So I was hyperbolic; might your research have shown how many years Hybrid CrossFire has been dead?
Or is your pedantry limited to others?
#257
LabRat 891
Dr. Dro: There's no point in pairing a 6500 XT with an 8700G imho; the entire point of paying the premium for any of those is to use the advanced RDNA 3 iGPU, which has about the same level of performance as your average Navi 24 dGPU anyway.

www.techpowerup.com/gpu-specs/radeon-780m.c4020

Even faster if you have tight and fast RAM
Agreed.
I actually considered returning my 5800X3D to try and score an AM5 APU. (Couldn't; platform cost was too high.)
Why? So I could remove the 6500 XT I use for AFMFing my Vega 10.

To me, the Navi 24 is just a primitive, less-featured version of what I'd expect out of an AM5 APU.
3valatzy: AMD acquired ATi in Q3 2006, and the Fusion (APU) project's execution began even later.
So, Hybrid CrossFire was impossible in 2005.
Hmmm? "CrossFire" is dead.

HOWEVER, there's more to multi-GPU than just "AMD MGPU".
If Infinity Fabric could be packetized over PCIe, "Hybrid CrossFire"-like functionality could exist again.
#258
Nhonho
I can't believe AMD has abandoned the high-performance GPU market. AMD and other GPU developers are already using AI to design their GPUs, which improves GPU performance and greatly reduces the development time (and cost) of GPU design. The same goes for CPUs.
#260
Patriot
LabRat 891: Agreed.
I actually considered returning my 5800X3D to try and score an AM5 APU. (Couldn't; platform cost was too high.)
Why? So I could remove the 6500 XT I use for AFMFing my Vega 10.

To me, the Navi 24 is just a primitive, less-featured version of what I'd expect out of an AM5 APU.

Hmmm? "CrossFire" is dead.

HOWEVER, there's more to multi-GPU than just "AMD MGPU".
If Infinity Fabric could be packetized over PCIe, "Hybrid CrossFire"-like functionality could exist again.
Sigh... Infinity Fabric IS run over PCIe on the CPU side, and likely on the accelerator side of the Instincts too, in a ring-bus arrangement.
No, this functionality is not coming back unless DX12's successor changes who does the work.
DX12 supports multi-GPU and even vendor mixing, but the enablement is on the game-dev side, so it doesn't happen.
Under DX11, NVIDIA and AMD supported making multi-GPU work; under DX12, game devs do, and only the heavily sponsored titles and benchmarks ever had multi-GPU support.

For all intents and purposes, multi-GPU for gaming is dead.
For workstations/servers we have NVLink and IF.
#262
Denver
ARF: They are coming.
Navi 48: 64 Compute Units, 256-bit GDDR6 693 GB/s, 240mm², PCIe 5.0 ~ RX 7800 XT
Navi 44: 32 Compute Units, 128-bit GDDR6 288 GB/s, 130mm², PCIe 5.0 ~ RX 7600


www.guru3d.com/story/more-amd-rdna-4-gpu-lineup-info-surfaces-navi-48-and-navi-44-details/
I'd hypothesize a potential 30% performance boost (raster) compared to the 7800 XT, 2-3X better AI capabilities (low precision), plus a 40-50% increase in efficiency. These are just speculative estimates based on the currently available information.* I brought it from the future to illustrate:

*Manufacturing process, die size, and clues regarding architectural alterations: Examining AMD's RDNA 4 Changes in LLVM – Chips and Cheese
#263
ARF
Denver: I'd hypothesize a potential 30% performance boost (raster) compared to the 7800 XT.
Denver: These are just speculative estimates based on the currently available information.* I brought it from the future to illustrate:
*Manufacturing process, die size, and clues regarding architectural alterations: Examining AMD's RDNA 4 Changes in LLVM – Chips and Cheese
The memory bandwidth will remain very low. How would they hide the 256-bit bus and still improve performance? The 7900 XT has 800 GB/s of memory bandwidth, while this is said to be below 700 GB/s?
#264
Denver
ARF: The memory bandwidth will remain very low. How would they hide the 256-bit bus and still improve performance? The 7900 XT has 800 GB/s of memory bandwidth, while this is said to be below 700 GB/s?
4080 -> Bandwidth 716.8 GB/s

Different architectures. Check the 4080's performance against its available bandwidth, for example.
The changes implemented by AMD will enhance processing efficiency and improve code management at runtime.
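The disagreement here is really about raw versus effective bandwidth. A back-of-the-envelope sketch using the figures quoted in this thread (800 GB/s for the 7900 XT, a ~693 GB/s rumor for Navi 48); the cache hit rate below is an invented, purely illustrative parameter, not a leaked spec:

```cpp
// Back-of-the-envelope: raw bus bandwidth vs. cache-amplified "effective"
// bandwidth. The hit rate below is an illustrative assumption, not a spec.
#include <cstdio>

// GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
double raw_bandwidth(double bus_bits, double gbps_per_pin)
{
    return bus_bits * gbps_per_pin / 8.0;
}

// If a fraction `hit` of memory requests is served from on-die cache,
// DRAM only sees the misses, so the bus behaves roughly 1/(1-hit) wider.
double effective_bandwidth(double raw, double hit)
{
    return raw / (1.0 - hit);
}

int main()
{
    const double navi48 = raw_bandwidth(256, 21.6);  // ~691 GB/s, near the ~693 GB/s rumor
    printf("Navi 48 raw:       %.0f GB/s\n", navi48);
    printf("7900 XT raw:       800 GB/s\n");
    // Assume, purely for illustration, a 55% Infinity Cache hit rate:
    printf("Navi 48 effective: %.0f GB/s\n", effective_bandwidth(navi48, 0.55));
    return 0;
}
```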
#265
ARF
Denver: 4080 -> Bandwidth 716.8 GB/s

Different architectures. Check the 4080's performance against its available bandwidth, for example.
The changes implemented by AMD will enhance processing efficiency and improve code management at runtime.
The 4080 is far inferior. Look at the theoretical numbers:
RX 7900 XT 20GB: Pixel Rate 459.6 GPixel/s; Texture Rate 804.4 GTexel/s; Bandwidth 800.0 GB/s; L0 Cache 64 KB per WGP; L1 Cache 256 KB per Array; L2 Cache 6 MB; L3 Cache 80 MB
RTX 4080 16GB: Pixel Rate 280.6 GPixel/s; Texture Rate 761.5 GTexel/s; Bandwidth 716.8 GB/s; L1 Cache 128 KB (per SM); L2 Cache 64 MB

The thing that AMD must do is copy NVIDIA and enable something like Radeon Boost as the default review setting :D
#266
Dr. Dro
ARF: The 4080 is far inferior. Look at the theoretical numbers:
RX 7900 XT 20GB: Pixel Rate 459.6 GPixel/s; Texture Rate 804.4 GTexel/s; Bandwidth 800.0 GB/s; L0 Cache 64 KB per WGP; L1 Cache 256 KB per Array; L2 Cache 6 MB; L3 Cache 80 MB
RTX 4080 16GB: Pixel Rate 280.6 GPixel/s; Texture Rate 761.5 GTexel/s; Bandwidth 716.8 GB/s; L1 Cache 128 KB (per SM); L2 Cache 64 MB

The thing that AMD must do is copy NVIDIA and enable something like Radeon Boost as the default review setting :D

The 4080 is like 5% behind in raster and 20% ahead in RT compared to the 7900 XTX, mate. It eats the 7900 XT for lunch, and greatly outperforms both the GRE and the 7800 XT as well, all while consuming less power. The 4080 SUPER bridged the raster gap and widened the RT gap ever so slightly. It's no contest.

The theoreticals don't mean anything, because RDNA 3 has obvious scaling problems; it performs far worse than it should.
#267
kapone32
Dr. Dro: The 4080 is like 5% behind in raster and 20% ahead in RT compared to the 7900 XTX, mate. It eats the 7900 XT for lunch, and greatly outperforms both the GRE and the 7800 XT as well, all while consuming less power. The 4080 SUPER bridged the raster gap and widened the RT gap ever so slightly. It's no contest.

The theoreticals don't mean anything, because RDNA 3 has obvious scaling problems; it performs far worse than it should.
In your opinion
#268
Dr. Dro
kapone32: In your opinion
No, Kapone. In provable reality. I'll even use 4K as a measure, which should help the 7900 XTX due to its much higher memory bandwidth.

Also slightly OT, but since you're going to be predictably defending your purchasing choices again, Hardware Unboxed just dropped a video proving what we were telling you all along:
#269
kapone32
Dr. Dro: No, Kapone. In provable reality. I'll even use 4K as a measure, which should help the 7900 XTX due to its much higher memory bandwidth.

Also slightly OT, but since you're going to be predictably defending your purchasing choices again, Hardware Unboxed just dropped a video proving what we were telling you all along:
I hear you. Until you go to the store, see the 6800 XT for 1/3 the price, and save your money. If you are bold, you will save $600, get the 7900 XT, and not notice the difference, but have 4 GB more VRAM. Price is still the deciding factor in GPU purchasing for the masses.
#270
Dr. Dro
kapone32: I hear you. Until you go to the store, see the 6800 XT for 1/3 the price, and save your money. If you are bold, you will save $600, get the 7900 XT, and not notice the difference, but have 4 GB more VRAM. Price is still the deciding factor in GPU purchasing for the masses.
Completely beside the point; if we're talking VRAM, a used 3090 is just as cheap and will pound the 6900 XT anyway.
#272
Bagerklestyne
Struggling to find value in the product stack; it's barely moving forward in two generations (to the point of potentially going backwards).

They'll need to crush price-to-performance and make it markedly more efficient to even tempt consumers, I think.
#273
Chrispy_
Bagerklestyne: They'll need to crush price-to-performance and make it markedly more efficient to even tempt consumers, I think.
If the rumours of a 30% price/performance shift in 2024 are accurate, then that will be the case.

Something around the performance of a 7900 GRE for 30% less is very promising. I'd sure be happy with a $/€/£399 GRE. It's going to handle 1440p high-refresh and probably 4K60, which is arguably something that price point has never attained.

I enabled Hyper-RX for the first time on my 7800 XT last night and gave CP2077 another stab at path tracing. I don't think it's usable (60-70 fps with FMF is more like 30 fps in terms of input lag), and the reflections in motion at that framerate are pretty noisy, just as they are on my RTX system. But the point is that AMD's tech of Boost, FMF, and Anti-Lag, all combined in one easy-to-set tickbox in the driver, was a pretty seamless experience that made path tracing a better experience on the £500 7800 XT than it is on the £500 4060 Ti 16GB. (Yes, I know the 4060 Ti 16GB has since had a price cut, but it's the closest comparison point to my £480 7800 XT from NVIDIA right now.)

The only thing left is for CDPR to update FSR in CP2077 to a newer, better version, because FSR 2.1 suffers from ghosting behind vehicles pretty badly.
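The "60-70 fps with FMF is more like 30 fps" remark is straightforward arithmetic: frame generation doubles presented frames, but input is only sampled on the rendered ones. A simplified model, where the 70 fps figure comes from the post above and the one-frame buffering cost is an assumption of the model, not a measured number:

```cpp
// Why 60-70 fps with frame generation can feel like ~30 fps: FMF inserts
// an interpolated frame between every two rendered frames, so the game
// only samples input at the *rendered* rate (a simplified model).
#include <cstdio>

int main()
{
    const double presented_fps = 70.0;                 // what the fps counter shows
    const double rendered_fps  = presented_fps / 2.0;  // real, input-sampling frames
    const double frame_time_ms = 1000.0 / rendered_fps;

    printf("Presented: %.0f fps\n", presented_fps);
    printf("Rendered:  %.0f fps -> %.1f ms between input samples\n",
           rendered_fps, frame_time_ms);
    // Interpolation also has to hold back one rendered frame, adding
    // roughly another frame time of latency on top.
    printf("With one frame of buffering: ~%.1f ms\n", 2.0 * frame_time_ms);
    return 0;
}
```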
#274
Dr. Dro
Bagerklestyne: Struggling to find value in the product stack; it's barely moving forward in two generations (to the point of potentially going backwards).

They'll need to crush price-to-performance and make it markedly more efficient to even tempt consumers, I think.
It has to be substantially cheaper and basically slot-powered to be a successful card in my eyes. Midrange performance from half a decade ago is an abject failure unless it comes in a low footprint.
#275
AusWolf
Dr. Dro: It has to be substantially cheaper and basically slot-powered to be a successful card in my eyes. Midrange performance from half a decade ago is an abject failure unless it comes in a low footprint.
Midrange had 40 CUs half a decade ago. Now there are 60, and soon 64. I don't see any need for "slot power". As long as AMD can keep consumption at the current 250-ish W mark while offering a performance upgrade, it'll be fine.

I only want them to fix the video playback power consumption. My 7800 XT eats more power playing a film than my entire bedroom HTPC does while playing a game, which is ridiculous.