
AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

Joined
Dec 16, 2017
Messages
2,925 (1.15/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
It's about a political decision to enable it.
No, it's about a number of cost items that need to be considered, which are simply not worth it when the market for multi-GPU is probably less than 1% of the total market.


I mean we keep saying RDNA3 is all meh, but realistically, it performs admirably,
"There are no bad products. Just bad prices."
 
Joined
Oct 29, 2019
Messages
467 (0.25/day)
For the last 10 years, every time a new GPU arch from AMD launches, we get these wild, overhyped rumors about a crazy good product, only to get something ranging from just "okay" at worst to somewhat good at best. The problem is that even when it's the latter we get (somewhat good), everyone is massively disappointed and the product is labelled a complete failure.

I'll say let's temper expectations; we are only one month away from actually knowing the product. So maybe this time we can judge it at its real value.
Agreed. I remember a year and a half ago there were all these rumors of the 7700 XT offering 6900 XT-level performance for $350-400.

The card ended up being close to $500 at launch and could barely beat the 6800 non-XT. The 7700 XT ended up having higher power consumption and less VRAM than the 6800 as well...
 
Joined
Jun 11, 2017
Messages
280 (0.10/day)
Location
Montreal Canada
220 watts, nice. The next Nvidia card, the 6000 series, will be using 2,200 watts and will need your own power transformer outside to run it.
 
Joined
Jun 2, 2017
Messages
9,307 (3.38/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
No, it wasn't. Drivers didn't stop mattering just because Polaris existed. By the time of Polaris, CrossFire was a nearly dead technology.

The problem with CrossFire (and SLI) was never the interconnects or the experience. It was, and always will be, driver support, which fell on AMD/Nvidia and was an absolute royal pain to fix for the small market share they had. With DX11, the traditional methods of multi-GPU became nearly impossible to implement.

DX12 has long supported multi-GPU. It is on game developers now to enable and support it. Nothing "political" about it; game devs don't see the value for the niche base that wants it. It's not on AMD to enable that.
Well, that is your opinion. I enjoyed CrossFire support so much that most of the games I bought at that time supported CrossFire. Multi-GPU is not the same thing as CrossFire and has no impact on games. Ashes of the Singularity is the only game I know of that supports multi-GPU natively. The thing with Polaris was that CrossFire was at the driver level, so if the game supported it, it worked, and if not, the other card would basically be turned off.
 
Joined
May 29, 2017
Messages
354 (0.13/day)
Location
Latvia
Processor AMD Ryzen™ 7 5700X
Motherboard ASRock B450M Pro4-F R2.0
Cooling Arctic Freezer A35
Memory Lexar Thor 32GB 3733Mhz CL16
Video Card(s) PURE AMD Radeon™ RX 7800 XT 16GB
Storage Lexar NM790 2TB + Lexar NS100 2TB
Display(s) HP X34 UltraWide IPS 165Hz
Case Zalman i3 Neo + Arctic P12
Audio Device(s) Airpulse A100 + Edifier T5
Power Supply Sharkoon Rebel P20 750W
Mouse Cooler Master MM730
Keyboard Krux Atax PRO Gateron Yellow
Software Windows 11 Pro
RTX 4080 performance is better than I thought. But still, it's just a rumor.
That mid-range GPU should cost no more than 500-550€. At 700€ it will not sell in big numbers and market share will not improve! Strike while the iron is red hot, like the 8800 GT did within just one year of the 8800 GTX launch.
 
Joined
Dec 6, 2022
Messages
415 (0.56/day)
Location
NYC
System Name GameStation
Processor AMD R5 5600X
Motherboard Gigabyte B550
Cooling Artic Freezer II 120
Memory 16 GB
Video Card(s) Sapphire Pulse 7900 XTX
Storage 2 TB SSD
Case Cooler Master Elite 120
I mean we keep saying RDNA3 is all meh, but realistically, it performs admirably,
Actually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
 
Joined
Nov 26, 2021
Messages
1,699 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Actually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
That isn't on the games. AMD's compiler is supposed to handle that, but compilers are far from perfect. Game-ready drivers with hand-optimized code would be the way to handle this. Dual issue also has some restrictions; for example, it only applies to instructions with two sources and one destination. Consequently, the common FMA (fused multiply-add) operation is excluded. Other restrictions are listed in the Chips and Cheese article on RDNA 3.

Dual issue opportunities are further limited by available execution units, data dependencies, and register file bandwidth. Operands in the same position can’t read from the same register bank. Another limitation applies to the destination registers, which can’t be both even or both odd.
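To make those pairing rules concrete, here is a minimal sketch of an eligibility check in Python, assuming a simplified four-bank VGPR model and hypothetical instruction records; it only illustrates the restrictions listed above, not AMD's actual compiler logic:

```python
# Toy model of the dual-issue pairing rules described above (simplified;
# hypothetical data model, not AMD's actual compiler or ISA definitions).
from dataclasses import dataclass

NUM_BANKS = 4  # assume VGPRs are striped across four register banks


@dataclass
class Instr:
    dst: int      # destination VGPR index
    srcs: tuple   # source VGPR indices


def can_dual_issue(a: Instr, b: Instr) -> bool:
    # Only ops with two sources and one destination qualify, so a
    # three-source FMA is excluded from the start.
    if len(a.srcs) != 2 or len(b.srcs) != 2:
        return False
    # Operands in the same position may not read from the same register bank.
    for sa, sb in zip(a.srcs, b.srcs):
        if sa % NUM_BANKS == sb % NUM_BANKS:
            return False
    # Destination registers can't both be even or both be odd.
    if a.dst % 2 == b.dst % 2:
        return False
    return True


# A pair that satisfies every rule, and one that fails on banks and parity:
print(can_dual_issue(Instr(dst=2, srcs=(0, 5)), Instr(dst=3, srcs=(1, 6))))  # True
print(can_dual_issue(Instr(dst=2, srcs=(0, 4)), Instr(dst=4, srcs=(8, 6))))  # False
```

Even in a toy model like this, plenty of adjacent instruction pairs fail at least one check, which gives a feel for why the compiler struggles to find dual-issue opportunities on its own.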
 

AcE

Joined
Dec 3, 2024
Messages
117 (13.00/day)
Actually, I don't think that we have seen what the architecture can really do, since I don't think that anyone has released a game that truly utilizes the hardware.

For example, it's supposed to do 2 instructions per clock, but as said, I don't think that was ever exploited.
A few games like Starfield did (absolute AMD games), and the performance then was ridiculous compared even to the 4090. But there are only a few such games, I can count them on one hand; another example would be Avatar, I think.

I think this will be in the ballpark of the 7900 XT, with RT performance comparable to the 4080 or a bit lower, but we will see soon. Pricing, I expect 500-600, not more.
 
Joined
May 29, 2017
Messages
354 (0.13/day)
Location
Latvia
Processor AMD Ryzen™ 7 5700X
Motherboard ASRock B450M Pro4-F R2.0
Cooling Arctic Freezer A35
Memory Lexar Thor 32GB 3733Mhz CL16
Video Card(s) PURE AMD Radeon™ RX 7800 XT 16GB
Storage Lexar NM790 2TB + Lexar NS100 2TB
Display(s) HP X34 UltraWide IPS 165Hz
Case Zalman i3 Neo + Arctic P12
Audio Device(s) Airpulse A100 + Edifier T5
Power Supply Sharkoon Rebel P20 750W
Mouse Cooler Master MM730
Keyboard Krux Atax PRO Gateron Yellow
Software Windows 11 Pro
Given the rumoured specifications, 4080 performance is very unlikely. Going by the numbers in the latest GPU review, the 4080 is 42% faster than the 7800 XT at 1440p and 49% faster at 4K. That is too great a gap to be overcome by a 6.7% increase in Compute Units.
It's very unlikely but not impossible!

The GTX 1070 was 62% faster than the GTX 970
The GTX 1080 Ti was 76% faster than the GTX 980 Ti
The 8800 GT was something like ~100% faster, if not even more, compared to its predecessor the 7800 GT (hard to find actual information in a direct comparison)
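For what it's worth, a quick back-of-the-envelope check of the quoted numbers (a 42-49% gap versus a 6.7% increase in Compute Units) shows how much extra per-CU throughput would still be needed if performance scaled purely with CU count, which it does not; the figures below are just that arithmetic, nothing more:

```python
# Back-of-the-envelope: required per-CU uplift for an RDNA 4 part with 6.7%
# more CUs to reach RTX 4080 level, using the review gaps quoted above.
cu_increase = 1.067  # the 6.7% CU increase mentioned in the quote

for label, gap in (("1440p", 1.42), ("4K", 1.49)):
    required_per_cu_uplift = gap / cu_increase
    print(f"{label}: each CU would need to be ~{(required_per_cu_uplift - 1) * 100:.0f}% faster")
# -> roughly 33% at 1440p and 40% at 4K, before accounting for clocks or bandwidth
```

So clocks, memory bandwidth, and architectural changes would have to deliver roughly a third more performance per CU on their own.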
 
Joined
Nov 26, 2021
Messages
1,699 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
It's very unlikely but not impossible!

The GTX 1070 was 62% faster than the GTX 970
The GTX 1080 Ti was 76% faster than the GTX 980 Ti
The 8800 GT was something like ~100% faster or even more compared to the 7800 GT
In all of these cases, the faster successor used a more advanced node than its predecessor. These are the nodes:

GTX 970 and 980 Ti: 28 nm, GTX 1070 and 1080 Ti: 16 nm (first TSMC FinFET node)
7800 GT: 110 nm, 8800 GT: 65 nm

RDNA 4 doesn't have the luxury of a smaller node than its predecessor.
 
Joined
Nov 22, 2023
Messages
222 (0.58/day)
It's very unlikely but not impossible!

The GTX 1070 was 62% faster than the GTX 970
The GTX 1080 Ti was 76% faster than the GTX 980 Ti
The 8800 GT was something like ~100% faster, if not even more, compared to its predecessor the 7800 GT (hard to find actual information in a direct comparison)

- Thing is, those were all die shrinks (the 1070/1080 Ti was actually a double shrink) back when that really meant something.

The N4P process the 8800 XT is using is just a space/power-optimized version of the N5 process that N31 and N32's GCDs used. It'll help a little; maybe 10% of additional performance will come from the better process, but it's not going to work miracles by a long shot.
 
Joined
Sep 10, 2018
Messages
6,944 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
In all of these cases, the faster successor used a more advanced node than its predecessor. These are the nodes:

GTX 970 and 980 Ti: 28 nm, GTX 1070 and 1080 Ti: 16 nm (first TSMC FinFET node)
7800 GT: 110 nm, 8800 GT: 65 nm

RDNA 4 doesn't have the luxury of a smaller node than its predecessor.

The 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was Nvidia; I don't really think AMD can pull off the same gains on a similar node.
 

AcE

Joined
Dec 3, 2024
Messages
117 (13.00/day)
The 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was Nvidia; I don't really think AMD can pull off the same gains on a similar node.
680 to 980 was a bigger chip and a new arch, so not that special. AMD did something comparable with the Radeon VII to the 5700 XT: a smaller chip with comparable performance and better efficiency. The HD 7970 to the 290X was also a decent jump on the same node with a minor increase in size, same as 680 to 980. They all just cook with water.
 
Joined
Nov 22, 2023
Messages
222 (0.58/day)
The 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was Nvidia; I don't really think AMD can pull off the same gains on a similar node.

- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
 
Joined
Jan 27, 2024
Messages
260 (0.81/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.

The Radeon RX 6700 XT is Navi 22 and is 335 mm². So, compared to Navi 10: 35% faster for 34% larger die area, and 67% more transistors. 12 GB vs 8 GB, too.
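Running the arithmetic on those figures (performance and area ratios as stated above; the absolute transistor counts, roughly 10.3 billion for Navi 10 and 17.2 billion for Navi 22, are filled in from published specs):

```python
# Rough perf-per-area and perf-per-transistor comparison for Navi 10 (RX 5700 XT)
# vs Navi 22 (RX 6700 XT), using the die sizes and the ~35% performance gap above.
navi10 = {"area_mm2": 251, "transistors_b": 10.3}
navi22 = {"area_mm2": 335, "transistors_b": 17.2}
perf_ratio = 1.35  # ~35% faster, as stated above

area_ratio = navi22["area_mm2"] / navi10["area_mm2"]            # ~1.33
xtor_ratio = navi22["transistors_b"] / navi10["transistors_b"]  # ~1.67

print(f"perf per mm^2:       {perf_ratio / area_ratio:.2f}x")   # ~1.01x, basically flat
print(f"perf per transistor: {perf_ratio / xtor_ratio:.2f}x")   # ~0.81x
```

Perf per area stayed roughly flat while perf per transistor dropped, which fits if a good chunk of the extra transistor budget went into Infinity Cache rather than shaders; treat the exact figures as approximate.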
 
Joined
Dec 30, 2010
Messages
2,199 (0.43/day)
- AMD got some massive gains going from RDNA (7nm) to RDNA2 (7nm). N10 (5700XT) was 250mm2 while N23 (6700XT) was 270mm2 and ~40% faster.

It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.

Chiplets are just not ready, unless they find a way to tackle the latencies.

Nvidia is going huge with their die sizes, at a high cost per wafer; AMD, on the other hand, is making chiplets, which have a lower failure rate.

I still stick to my 6700 XT. It's one of the few generations that hasn't been locked out of MorePowerTools (from 180 W to 250 W).
 
Joined
Jul 31, 2024
Messages
163 (1.21/day)
Processor AMD Ryzen 7 5700X
Motherboard ASUS ROG Strix B550-F Gaming Wifi II
Cooling Noctua NH-U12S Redux
Memory 4x8G Teamgroup Vulcan Z DDR4; 3600MHz @ CL18
Video Card(s) MSI Ventus 2X GeForce RTX 3060 12GB
Storage WD_Black SN770, Leven JPS600, Toshiba DT01ACA
Display(s) Samsung ViewFinity S6
Case Fractal Design Pop Air TG
Power Supply Corsair CX750M
Mouse Corsair Harpoon RGB
Keyboard Keychron C2 Pro
VR HMD Valve Index
Smeh... I doubt there's anything in this product stack that would catch my attention. I'd probably be salivating over the blazing-hot sales of RX 7000 when these drop. Can you imagine an RX 7800XT down $75? Because that sounds like a daydream to me.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,203 (1.28/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
I don't think the product will get trashed if the rumours don't all add up to miraculous levels; all AMD need to do is reasonably deliver on the day. They can land another banger; they've done it before and they can do it again, especially since virtually any shortcomings can be forgiven with sharp pricing.

Product launches with XYZ performance and spec characteristics, and a given price. Then, provided there are no straight up bugs or issues, it will be praised, meh'd or trashed based on that. Real tangible metrics, not weighted against lofty rumors.

The exception to this is if the company itself misleads consumers as to expected performance/price.

Some people take it way too personally when a product from their favourite company isn't met with universal praise, when the reality is the vast majority of how the product is perceived was up to said company to get right. And, they need to get it right on day 1, not with price cuts or bug fixes (for example) weeks to months later, the damage is done at launch.
 
Joined
Sep 10, 2018
Messages
6,944 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I don't think the product will get trashed if the rumours don't all add up to miraculous levels; all AMD need to do is reasonably deliver on the day. They can land another banger; they've done it before and they can do it again, especially since virtually any shortcomings can be forgiven with sharp pricing.

Product launches with XYZ performance and spec characteristics, and a given price. Then, provided there are no straight up bugs or issues, it will be praised, meh'd or trashed based on that. Real tangible metrics, not weighted against lofty rumors.

The exception to this is if the company itself misleads consumers as to expected performance/price.

Some people take it way too personally when a product from their favourite company isn't met with universal praise, when the reality is the vast majority of how the product is perceived was up to said company to get right. And, they need to get it right on day 1, not with price cuts or bug fixes (for example) weeks to months later, the damage is done at launch.

While I agree with that in my own personal view of a product, AMD fanboys will get super hyped over unrealistic rumors over and over again, and then AMD will just do what they do: offer a slightly inferior product at a discount vs whatever Nvidia offers in the price range.

I doubt we will see another 4000 series situation from them; that was the last time they offered a killer product at a killer price. Now, will this drop like a rock at retail and eventually be a solid buy? Sure.
 
Joined
Oct 12, 2005
Messages
709 (0.10/day)
Chiplets are just not ready, unless they find a way to tackle the latencies.

Nvidia is going huge with their die sizes, at a high cost per wafer; AMD, on the other hand, is making chiplets, which have a lower failure rate.

I still stick to my 6700 XT. It's one of the few generations that hasn't been locked out of MorePowerTools (from 180 W to 250 W).
GPUs aren't really sensitive to latency.

The situation is a bit different than on the CPU side.

The two main things that killed RDNA3 are:

  • Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today to be able to maximise performance. Also, a high-power board costs more to produce than a cheaper one.
  • And most importantly: chiplets are great when they give you a competitive advantage on cost. Unlike CPUs, they can't sell RDNA3 dies to the datacenter market since that spot is taken by CDNA. The added complexity also increases cost, meaning unless you want to greatly reduce your margin, you have to price those higher.

If the RDNA3 7900 XTX had been beating the 4090 (at least in raster) by 10-15% minimum, things could have been different. I think AMD was not aggressive enough with RDNA3 and they ended up getting beaten by Nvidia.

The benefit of doing chiplets was to deliver more silicon at a lower cost. Well, the 4090 is 76.3 billion transistors with a die size of 609 mm², whereas the 7900 XTX has a total of 57.7 billion transistors with a total die size of 529 mm².

Of that, the main die, the GCD, is only 304 mm² and about 45 billion transistors.

The right opponent of the 7900 XTX is the 4080 at 45.9 billion transistors. About the same as the main die, plus those much cheaper MCDs on the side. If AMD had gone all out with a 500 mm² GCD, things could have really been different.

Nvidia went all out. AMD didn't, and that is why they lost that generation. The main advantage was that 4090 dies could also be sold to datacenter and AI customers. AMD was only focusing on gaming. It's obvious now, but they were set to lose that generation from the start.
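Putting that post's numbers side by side (the transistor counts read in billions; the AD103 die size and the exact GCD transistor count are filled in from published specs and may be slightly off):

```python
# Transistor/area comparison behind the argument above
# (transistor counts in billions, die areas in mm^2; figures approximate).
chips = {
    "AD102 (RTX 4090)":         {"transistors_b": 76.3, "area_mm2": 609},
    "AD103 (RTX 4080)":         {"transistors_b": 45.9, "area_mm2": 379},
    "Navi 31 total (7900 XTX)": {"transistors_b": 57.7, "area_mm2": 529},
    "Navi 31 GCD only":         {"transistors_b": 45.4, "area_mm2": 304},
}

for name, c in chips.items():
    print(f"{name:26s} {c['transistors_b']:5.1f}B transistors, {c['area_mm2']:4d} mm^2")

# The gap the post is pointing at: AD102 carries ~32% more transistors than the
# whole Navi 31 package, and ~1.7x the GCD alone.
print(round(76.3 / 57.7, 2), round(76.3 / 45.4, 2))
```

Seen that way, the Navi 31 GCD really does line up with AD103, while AD102 sits in a different weight class entirely.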
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,458 (6.66/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
No, it wasn't. Drivers didn't stop mattering just because Polaris existed. By the time of Polaris, CrossFire was a nearly dead technology.

The problem with CrossFire (and SLI) was never the interconnects or the experience. It was, and always will be, driver support, which fell on AMD/Nvidia and was an absolute royal pain to fix for the small market share they had. With DX11, the traditional methods of multi-GPU became nearly impossible to implement.

DX12 has long supported multi-GPU. It is on game developers now to enable and support it. Nothing "political" about it; game devs don't see the value for the niche base that wants it. It's not on AMD to enable that.
Yup, because most game devs suffer from consolitis. Consoles are an iGPU/APU on a single mainboard with a CPU, like most mobiles today but with dedicated memory, not a dGPU.

I remember when NFSU came out: it was the same graphics quality between the GF FX 5200 and the Xbox, but if you had a Radeon 9700 Pro you could max that game out graphically, and it was friggin' beautiful and played excellently on PC.
 

AcE

Joined
Dec 3, 2024
Messages
117 (13.00/day)
It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
You only have so many Maxwell moments, and it all just happened because both companies used suboptimal architectures; that won't happen again, because they learn. With Nvidia it was Kepler, which was later beaten by GCN2; with AMD it was GCN2/3, which was beaten by Maxwell and was too inefficient and bloated. In general GCN was: all versions performed suboptimally unless you used a low-level API (DX12/VK) or asynchronous compute, both of which let games feed the huge engine properly, which was especially true with the Fury X. It was either that or very high resolutions (for the Fury X it was 4K, for example): way too many shaders and a suboptimal DX11 driver that had trouble keeping them filled.
Chiplets are just not ready, unless they find a way to tackle the latencies.
The latency cost it performance, but the main issue was that Nvidia is just too rich and too good. Basically, AMD had a 4080 with a 384-bit bus instead of 256-bit and other superfluous parts, versus a huge 4090 chip which AMD could never compete with: way more transistors, and if you convert that 5/6 nm chiplet mix into pure 5 nm, it's just about 450-480 mm² vs the 600 mm² AD102, so there was no chance of competing with a smaller GPU. Exactly that missing size also showed up in performance, about 20-30%. No surprises and no dark magic here; Nvidia is not doing anything special, just investing more money. It's the upside of concentrating on one product, GPUs, and not just doing it on the side like AMD does; their main business is still CPUs. AMD's GPUs are only very good in the datacenter (Instinct), not in consumer stuff. But they are trying to consolidate that with UDNA, just like Nvidia has done for a long time, since at least Volta.
The two main things that killed RDNA3 are:

  • Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today to be able to maximise performance. Also, a high-power board costs more to produce than a cheaper one.
Increased power usage doesn't matter; I already said it could not compete because it was smaller. It competed well with the 4080 and that's it, but it was too expensive to produce for the price. Remember: the 4080 was a way smaller chip with a smaller bus vs the 7900 XTX, which was clearly bigger with a more expensive bus config, for MORE money. It didn't sell well, the 4080, but it still sold better than the XTX.

The efficiency of RDNA 3 was still good, so that was not the issue. Yes, Nvidia's efficiency was naturally better with pure 5 nm vs the 5/6 nm mix, but it was not far off.

If the RDNA3 7900 XTX had been beating the 4090 (at least in raster) by 10-15% minimum, things could have been different. I think AMD was not aggressive enough with RDNA3 and they ended up getting beaten by Nvidia.
They never will be. AMD is a mixed processor company and Nvidia is purely GPU (nearly; aside from the few small ARM CPUs they make), so of course Nvidia will go all-in, whereas AMD will always be spread across multiple things and more focused on their traditional CPU business. Ryzen is in fact the GeForce of CPUs and has the same (toxic) mindshare at times.
Nvidia went all out. AMD didn't, and that is why they lost that generation.
AMD hasn't won against Nvidia in over 15 years, and back then, in HD 5000 times, it only happened because GTX 400 was a hot and loud disaster. Funnily enough, that was a mid-size chip on a new node beating a huge chip from Nvidia, and Nvidia's older huge chips on an older node (GTX 200 and 400). The only other small "win" they had was with the R9 290X, and that was very temporary: they were a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly, so I don't count that very temporary win as a W for AMD. In other words, the GPU branch was still named "ATI" the last time AMD had a W against Nvidia, and the HD 5850/5870 sold out as well.
 
Joined
Nov 11, 2016
Messages
3,431 (1.16/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
GPUs aren't really sensitive to latency.

The situation is a bit different than on the CPU side.

The two main things that killed RDNA3 are:

  • Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today to be able to maximise performance. Also, a high-power board costs more to produce than a cheaper one.
  • And most importantly: chiplets are great when they give you a competitive advantage on cost. Unlike CPUs, they can't sell RDNA3 dies to the datacenter market since that spot is taken by CDNA. The added complexity also increases cost, meaning unless you want to greatly reduce your margin, you have to price those higher.

If the RDNA3 7900 XTX had been beating the 4090 (at least in raster) by 10-15% minimum, things could have been different. I think AMD was not aggressive enough with RDNA3 and they ended up getting beaten by Nvidia.

The benefit of doing chiplets was to deliver more silicon at a lower cost. Well, the 4090 is 76.3 billion transistors with a die size of 609 mm², whereas the 7900 XTX has a total of 57.7 billion transistors with a total die size of 529 mm².

Of that, the main die, the GCD, is only 304 mm² and about 45 billion transistors.

The right opponent of the 7900 XTX is the 4080 at 45.9 billion transistors. About the same as the main die, plus those much cheaper MCDs on the side. If AMD had gone all out with a 500 mm² GCD, things could have really been different.

Nvidia went all out. AMD didn't, and that is why they lost that generation. The main advantage was that 4090 dies could also be sold to datacenter and AI customers. AMD was only focusing on gaming. It's obvious now, but they were set to lose that generation from the start.

You mean Nvidia went all out by reserving the fully enabled AD102 chip (with 33% more L2 cache than the 4090) for the $7,000 Quadro GPU? :roll:

AMD could have gone all out with a 500 mm² GCD and performance would barely have changed, since they hit a bandwidth ceiling (they would need a 512-bit bus, which would make things more complicated). If it were as easy as making a bigger GCD, AMD would have done so within these past 2 years instead of abandoning the high end and going for the mainstream segment with the 8800 XT.
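For reference, the raw-bandwidth arithmetic behind that bandwidth-ceiling point; the 512-bit configuration is purely hypothetical:

```python
# GDDR bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# The 7900 XTX ships with a 384-bit bus and 20 Gbps GDDR6:
print(gddr_bandwidth_gbs(384, 20.0))  # 960.0 GB/s
# A hypothetical 512-bit card at the same memory speed:
print(gddr_bandwidth_gbs(512, 20.0))  # 1280.0 GB/s, ~33% more raw bandwidth
```

Whether ~33% more raw bandwidth, or the bigger cache suggested in the next reply, would actually have fed a 500 mm² GCD is exactly the open question.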
 

AcE

Joined
Dec 3, 2024
Messages
117 (13.00/day)
AMD could have gone all out with a 500 mm² GCD and performance would barely have changed, since they hit a bandwidth ceiling (they would need a 512-bit bus, which would make things more complicated). If it were as easy as making a bigger GCD, AMD would have done so within these past 2 years instead of abandoning the high end and going for the mainstream segment with the 8800 XT.
Actually, that's not even needed; they could've increased L2 cache sizes as well and not used 512-bit. But AMD always stops at about 500-550 mm² (only Fury X was an exception), so this was never in the books. The only possible option was going fully monolithic; then you get a few more shaders and better latency because you avoid the interconnect downsides, but that's probably still not enough to match the 4090, let alone the full chip, which Nvidia would 100% have released if AMD had been too strong.
 
Joined
Nov 26, 2021
Messages
1,699 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
The 680 to the 980 was a nice jump and they used a similar node, so it is possible. The problem is that was Nvidia; I don't really think AMD can pull off the same gains on a similar node.
Both Maxwell and, to a lesser extent, the much-derided Turing were excellent jumps on the same node. AMD has also done this in the past with the HD 4000 series and RDNA 2. Even the Fury X and the 290X saw fairly significant gains over their predecessors without changing the node.
 