
AMD Radeon RX 8800 XT Reportedly Features 220 W TDP, RDNA 4 Efficiency

Joined
Oct 24, 2022
Messages
226 (0.29/day)
Radeon RX 8000 series GPUs

Could the TPU team review the image quality of video encoded in AV1 with the new GPUs from Intel, AMD, and Nvidia? Please...

Is it possible to encode video in two passes using the GPU? If so, which app does it?
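On the two-pass question, a minimal, hedged sketch: NVENC exposes an internal per-frame multipass mode rather than the classic software-encoder -pass 1/-pass 2 flow, so assuming an ffmpeg build with AV1 NVENC support (RTX 40 series or newer), something like the following should work. av1_qsv (Intel) and av1_amf (AMD) are the rough equivalents, with their own option names; the input and output filenames here are hypothetical.

```python
import subprocess

# Hedged sketch: "-multipass fullres" enables NVENC's internal two-pass
# analysis at full resolution; it is not ffmpeg's classic "-pass 1/2"
# rate control, which generally targets software encoders.
cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",        # hypothetical input file
    "-c:v", "av1_nvenc",      # NVIDIA hardware AV1 encoder
    "-multipass", "fullres",  # full-resolution internal two-pass
    "-cq", "30",              # constant-quality target (lower = better)
    "-c:a", "copy",           # pass audio through untouched
    "output.mkv",
]
subprocess.run(cmd, check=True)
```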
 
Joined
Sep 17, 2014
Messages
22,569 (6.03/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Well, that is your opinion. I enjoyed Crossfire support so much that most of the games I bought at that time supported Crossfire. Multi-GPU is not the same thing as Crossfire and has no impact on games. Ashes of the Singularity is the only game I know of that supports multi-GPU natively. The thing with Polaris was that Crossfire worked at the driver level, so if the game supported it, it worked, and if not, the other card was basically turned off.
That 'thing' was bog standard for every Crossfire- and SLI-capable GPU. Which meant most of the time you would clearly notice that you were actually running on one card, and if you didn't, it was clearly audible too, because of all the extra heat and noise :)

The Nvidia driver even lets me pick SFR or AFR. Not that it matters though; no game support = you are looking at something arcane that doesn't work or literally renders half the screen.
 
Joined
Oct 5, 2018
Messages
29 (0.01/day)
Processor Ryzen 7 5800X3D
Motherboard Gigabyte B550 AORUS ELITE V2 rev 1.2
Cooling be quiet! Silent Loop 2
Memory 2x16GB 3200 A-DATA DDR4
Video Card(s) Asus 6700 XT
Storage OCZ Vertex 4 120GB + Samsung 2TB 980 Pro
Display(s) Asus ROG Strix XG309CM
Case be quiet! Silent Base 601
Power Supply be quiet! Straight Power 11 750W (BN307)
Mouse Logitech G403 Hero
Keyboard Logitech G710+
All these rumors about AMD GPUs never go live, so this vision is more of a dream card than a real one. Even the rumor about dual 8-pins doesn't fit this vision, which could end up being a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never went live in the end.
 
Joined
Jan 27, 2024
Messages
260 (0.81/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
Could the TPU team review the image quality of video encoded in AV1 with the new GPUs from Intel, AMD, and Nvidia? Please...

Is it possible to encode video in two passes using the GPU? If so, which app does it?

That, but also standard image quality testing, to see which brand of cards cheats with the textures and which does not (yeah, I'm looking at you, Nvidia :D)


All these rumors about AMD GPUs never go live, so this vision is more of a dream card than a real one. Even the rumor about dual 8-pins doesn't fit this vision, which could end up being a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never went live in the end.

It is about smart engineering and AI. I know there is no smart engineering at AMD, but maybe this will be the first time they implement it.
It's called undervolting; it is pretty simple and straightforward, and can easily be done at the factory. The trade-off: you lose 2% of performance, but the card's TDP drops from 300 W to a sane 180 W.
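Rough first-order arithmetic behind that trade-off, as a sketch: assuming dynamic power dominates, power scales roughly with clock × voltage², and the scaling factors below are illustrative guesses, not measured values.

```python
# First-order CMOS dynamic power: P ~ f * V^2 (capacitance held constant).
# Illustrative values only: a ~2% clock cut plus a ~20% undervolt.
p_stock = 300.0   # W, hypothetical stock board power
f_scale = 0.98    # ~2% lower clock -> roughly 2% performance loss
v_scale = 0.80    # ~20% lower voltage (aggressive; silicon permitting)

p_undervolted = p_stock * f_scale * v_scale**2
print(f"{p_undervolted:.0f} W")  # ~188 W, in the ballpark of the claimed 180 W
```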
 
Joined
Nov 26, 2021
Messages
1,698 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
All these rumors about AMD GPUs never go live, so this vision is more of a dream card than a real one. Even the rumor about dual 8-pins doesn't fit this vision, which could end up being a GPU with a TGP around 300 W... better to wait instead of overhyping this GPU like the many previous miracle AMD GPUs that never went live in the end.
Two 8-pin connectors are used on the 7700 XT as well, and that isn't even a 250 W card.
 
Joined
Jun 2, 2017
Messages
9,307 (3.38/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
That 'thing' was bog standard for every Crossfire- and SLI-capable GPU. Which meant most of the time you would clearly notice that you were actually running on one card, and if you didn't, it was clearly audible too, because of all the extra heat and noise :)

The Nvidia driver even lets me pick SFR or AFR. Not that it matters though; no game support = you are looking at something arcane that doesn't work or literally renders half the screen.
There were about four settings for Crossfire. I used it for the life of Total War, from Medieval 2 to Three Kingdoms, when they changed the engine. You could still enable Crossfire in the script, but it did nothing to the engine. At that point I started giving up on multi-GPU and started focusing on other things to put in my other PCIe slots. I guess the heat you are talking about is real if you don't have a case that is up to snuff, but we are talking about two RX 570/580 combos that might have pulled 150 watts each. Plus, they were inexpensive and popular.
 
Joined
Jun 2, 2017
Messages
9,307 (3.38/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
I have a 7600 XT from ASRock. It is not a power-hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.
 
Joined
Jan 27, 2024
Messages
260 (0.81/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
I have a 7600 XT from ASRock. It is not a power-hungry card, but it comes with two 8-pin connectors. I do believe that is going to be the standard for everyone else.

The RX 7600 is either a single 6-pin or a single 8-pin.
 
Joined
Sep 26, 2022
Messages
2,113 (2.62/day)
Location
Brazil
System Name G-Station 2.0 "YGUAZU"
Processor AMD Ryzen 7 5700X3D
Motherboard Gigabyte X470 Aorus Gaming 7 WiFi
Cooling Freezemod: Pump, Reservoir, 360mm Radiator, Fittings / Bykski: Blocks / Barrow: Meters
Memory Asgard Bragi DDR4-3600CL14 2x16GB
Video Card(s) Sapphire PULSE RX 7900 XTX
Storage 240GB Samsung 840 Evo, 1TB Asgard AN2, 2TB Hiksemi FUTURE-LITE, 320GB+1TB 7200RPM HDD
Display(s) Samsung 34" Odyssey OLED G8
Case Lian Li Lancool 216
Audio Device(s) Astro A40 TR + MixAmp
Power Supply Cougar GEX X2 1000W
Mouse Razer Viper Ultimate
Keyboard Razer Huntsman Elite (Red)
Software Windows 11 Pro
That is a design mistake: there's no reason to put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
Why is it a mistake? It doesn't make much difference in space utilization compared to an 8+6-pin combination, and it makes keeping track of inventory and assembly way easier for them, not having to keep tabs on two different parts.
 
Joined
Nov 26, 2021
Messages
1,698 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
That is a design mistake: there's no reason to put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
Surely not every AIB miscalculated. It's simple: 228 W would require 153 W from the 8-pin connector plus 75 W from the PCIe slot. In practice, many GPUs draw minuscule amounts from the PCIe slot. Given how many people use third-class PSUs, it's prudent to avoid drawing more than 150 W from a single 8-pin connector. Two 8-pin connectors make sense when you look at it from that perspective.
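That budget arithmetic as a small sketch, using the PCIe CEM spec limits (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin); the 228 W figure comes from the post above:

```python
# Continuous power limits per the PCIe CEM specification.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def within_budget(board_power_w: float, eight_pins: int, six_pins: int = 0) -> bool:
    """True if the spec-rated budget covers the board power."""
    budget = SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W
    return board_power_w <= budget

print(within_budget(228, eight_pins=1))  # False: 228 W > 150 + 75 = 225 W
print(within_budget(228, eight_pins=2))  # True: a 375 W budget leaves headroom
```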
 
Joined
Jan 27, 2024
Messages
260 (0.81/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
Why is it a mistake? It doesn't make much difference in space utilization compared to an 8+6-pin combination, and it makes keeping track of inventory and assembly way easier for them, not having to keep tabs on two different parts.

It is a tough PSU requirement. It should be dual 6-pin. Not all PSUs have those PCIe power connectors, which are ugly, space-consuming, and can't be hidden inside the case.
 
Joined
Nov 27, 2023
Messages
2,452 (6.44/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (24H2)
@3valatzy
Pretty much every decent-ish PSU will come with at least two 6+2-pin PCIe cables. Not really an issue.
I mean, if you want one easy, simple solution to use on any card, well, 12V-2x6 is there to solve that, but I thought nobody liked it because it burns down your dog and kicks your house or something. Even though the revised connector is totally fine.
 
Joined
Aug 21, 2013
Messages
1,921 (0.46/day)
Given the rumoured specifications, 4080 performance is very unlikely. Going by the numbers in the latest GPU review, the 4080 is 42% faster than the 7800 XT at 1440p and 49% faster at 4K. That is too great a gap to be overcome by a 6.7% increase in Compute Units.
I even doubt it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56), it will be difficult to close the gap to an 84 CU card and then surpass it by 30% (the difference between the 7800 XT and 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S-level performance.
If it's 45% faster in RT vs the 7900 XTX, that makes it basically a 4080/4080S. Since the raster is also similar, I'm calling it: $799 MSRP.
Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.
I agree. This is how the hype train starts rolling, and then the inevitable derailment leads to bashing of the actual product, even if it's undeserved. The Compute Unit count and rumoured clock speeds suggest performance in the ballpark of the 7900 XT, not the 4080, and certainly not the 7900 XTX, which is 20% faster than the 7900 XT at 4K.
Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.
AMD hasn't won against Nvidia in over 15 years. The only other small "win" they had was with the R9 290X, and that was very temporary: they were a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly. I don't count that very temporary win as a W for AMD.
They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia did release both the 780 Ti and the first Titan a month later, those cards were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also for $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
 
Joined
Nov 26, 2021
Messages
1,698 (1.53/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
I even doubt it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56), it will be difficult to close the gap to an 84 CU card and then surpass it by 30% (the difference between the 7800 XT and 7900 XT).
RT is harder to pin down, as here AMD could reap the low-hanging fruit and massively increase RT performance without increasing the number of RT cores (same number as CUs). Here I can actually believe 4080S-level performance.

Raster is not similar. Raster is ~4070 Ti Super level, though the reported specs don't support that.

Glad someone gets it. Already I see people starting to make unrealistic claims. Let's temper our expectations.

They didn't, and the 290X was temporary?
You need to check your timeline and prices.

Yes, the 290X launched in October 2013, and while Nvidia did release both the 780 Ti and the first Titan a month later, those cards were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.

The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also for $549, which beat the 290X by a more convincing 13%.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
Yes, matching the 7900 XT's rasterization performance, in the absence of any increase in per-compute-unit performance, would require high clocks: 3 GHz would be enough, but that's rather unlikely with a 220 W TDP. We know that RDNA 3.5 doubled the number of texture samplers per compute unit, and that may allow a greater-than-expected performance increase in some cases. In any case, at least the rumours about 7900 XTX-level rasterization performance seem ridiculous. I'm also uncertain whether they can match Nvidia's ray tracing performance after being behind for so long; the most likely case is a big improvement over RDNA 3, but a smaller gap to Ada.
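A back-of-envelope check on that 3 GHz figure, assuming raster scales linearly with CU count × clock (naive: it ignores memory bandwidth and any per-CU gains) and taking the 7800 XT's roughly 2.43 GHz boost clock as the baseline:

```python
# Naive scaling model: perf ~ CU_count * clock, no per-CU improvement assumed.
cu_7800xt, clk_7800xt = 60, 2.43   # CUs and approximate boost clock (GHz)
cu_rumored = 64                    # rumoured CU count for the new part
gap_to_7900xt = 1.30               # 7900 XT is ~30% ahead of the 7800 XT

clk_needed = gap_to_7900xt * (cu_7800xt / cu_rumored) * clk_7800xt
print(f"{clk_needed:.2f} GHz")     # ~2.96 GHz, i.e. roughly the 3 GHz figure
```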

As for the 290X, it was leading the 780 Ti in TPU's suite before the sun had set on 28 nm being the latest node for GPUs.
 
Joined
Apr 30, 2011
Messages
2,711 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
That is a design mistake: there's no reason to put 375 W worth of connectors (2 × 150 W, plus 75 W from the PCIe slot) on a 245 W card.
An OC can push those GPUs past 300 W, so this configuration is best for the safety of the current-regulation circuitry.
 

AcE

Joined
Dec 3, 2024
Messages
116 (12.89/day)
I even doubt it can reach 4070 Ti Super/7900 XT-level raster, because if it only has 64 CUs (TPU's placeholder page even says 56), it will be difficult to close the gap to an 84 CU card and then surpass it by 30% (the difference between the 7800 XT and 7900 XT).
It will have at least 64 CUs, maybe more; the 7800 XT already has 60 CUs. Stay realistic.
Yes, the 290X launched in October 2013, and while Nvidia did release both the 780 Ti and the first Titan a month later, those cards were more expensive while not being a whole lot faster. The Titan was only a minuscule 3% faster while costing an obscene (for a gaming card at the time) $999, while the 780 Ti was a more reasonable $699 but still only 4% faster.
4% faster is 4% faster; that's far from a W for AMD. If you want a W, you must be clearly faster, not 4% slower. The 780 Ti was released solely to beat the 290X, which it did, and prices never matter for enthusiast cards, we all know that: 500, 700, tomato, tomato. Most people bought the 780 Ti over it. Also, the vanilla 290 easily outsold the 290X; AMD usually undercut themselves back then by releasing a card for $100 less with just 256 shaders shaved off. They made some weird decisions back then, which they stopped doing around RX 7000 times. The 6800 XT had only 512 fewer shaders than the 6900 XT and a $300 lower MSRP, another mistake by AMD. But the 7900 XT is 20% slower than the XTX because they also shaved 64 bits off the bus and perhaps reduced clocks as well. That's how long AMD needed to learn proper "product segmentation", but then again the 7900 XT was overpriced at launch and it took months for them to correct the price.
The 290X at $549 remained the bang-for-buck choice until Nvidia released the GTX 980 in September 2014, also for $549, which beat the 290X by a more convincing 13%.
Not for the vast majority of people: due to Nvidia's mindshare, most people still bought the 780 Ti over it, and then even cards like the vanilla 780, which was slower and had less VRAM. Lastly, the 290X didn't even compete well with its own sibling, the vanilla 290, which had nearly the same performance for $100 less.
It wasn't until the middle of 2015, when Nvidia released the 980 Ti for $649, that the 290X was convincingly beaten, by 28% (and the 390X by 21%), at much lower power consumption. So essentially the 290X had at least 12 months of being the best-value high-end card.
No, the 980 Ti was released to compete with the Fury X; that's already a different generation and doesn't have much to do with the 290X.
 
Joined
Aug 21, 2013
Messages
1,921 (0.46/day)
It will have at least 64 CUs, maybe more; the 7800 XT already has 60 CUs. Stay realistic.
Unless AMD changed the CU design, it can't be more than 64. That is the limit for the die size they're going with; this has been the case since Vega.
The higher-end variant was canned. Presumably that would have been the 80+ CU die.
4% faster is 4% faster; that's far from a W for AMD. If you want a W, you must be clearly faster, not 4% slower. The 780 Ti was released solely to beat the 290X, which it did, and prices never matter for enthusiast cards, we all know that: 500, 700, tomato, tomato. Most people bought the 780 Ti over it.
4% is so little it may as well be a tie. And I disagree on prices: those who did not care about price bought the $999 Titan, which had double the VRAM of the 780 Ti.
Not for the vast majority of people: due to Nvidia's mindshare, most people still bought the 780 Ti over it, and then even cards like the vanilla 780, which was slower and had less VRAM. Lastly, the 290X didn't even compete well with its own sibling, the vanilla 290, which had nearly the same performance for $100 less.
Not arguing that. Nvidia had mindshare even back then. The 290 was clearly the smart buy.
No, the 980 Ti was released to compete with the Fury X; that's already a different generation and doesn't have much to do with the 290X.
Let me guess: another W for Nvidia because the 980 Ti was 2% faster than the Fury X?
Not quite sure how it was supposed to compete with the Fury X when that released after the 980 Ti...
Yes, in terms of performance and price they were very close, but I don't consider anything under 5% more than a tie, and anything under 15% more than underwhelming. I only consider a card soundly beaten when the gap is 30% or more.
 

AcE

Joined
Dec 3, 2024
Messages
116 (12.89/day)
Unless AMD changed the CU design, it can't be more than 64. That is the limit for the die size they're going with; this has been the case since Vega.
That's very old info, and that was an engine count limit: topping out at "only" 64 CUs was a side effect of not being able to use more graphics engines (or clusters). Since RDNA, AMD no longer has a hard limit; Big Navi already had 80 CUs, and RDNA 3 topped out at 96.

Edit: you can see it here: https://www.techpowerup.com/gpu-specs/amd-fiji.g774#gallery-3
The maximum number of engines was 4. Seeing that GCN already had problems feeding 2,816 shaders, and much more so 4,096, it was fine that it topped out at 64. More was never needed back then, and when it was, RDNA had already lifted that limit.
4% is so little it may as well be a tie. And I disagree on prices: those who did not care about price bought the $999 Titan, which had double the VRAM of the 780 Ti.
Yes, and? You were talking about a W for AMD; that's not the case when you're 4% slower. Being the "budget king" is nothing special; they did this most of the time. Toyota also beats Mercedes on price-to-performance (though that analogy is flawed, because Toyota still sells a lot of cars, whereas AMD doesn't sell many GPUs compared to Nvidia).
Let me guess: another W for Nvidia because the 980 Ti was 2% faster than the Fury X?
Tied when you compare reference vs. reference, yes, but the 980 Ti custom models were far ahead, so it was more like 10%, or 20% with an OC. The 980 Ti was simply better, especially if you didn't play at 4K; the Fury X had issues at lower resolutions due to having too many shaders and suboptimal DX11 drivers.
Not quite sure how it was supposed to compete with the Fury X when that released after the 980 Ti...
The Fury X and 980 Ti released at about the same time. Maybe you forgot that the Fury X was part of the R9 300 generation, and Maxwell, the GTX 900 series, was the competitor to that. Those were the GPUs for 2014/2015.
Yes, in terms of performance and price they were very close, but I don't consider anything under 5% more than a tie, and anything under 15% more than underwhelming.
A tie, but we were talking about "Ws for AMD", and AMD did not have one that generation; ties or "budget king" awards don't help much, and Nvidia sold much more, so it's basically a W for Nvidia. But they sold less than AMD in the HD 5000 days; that's one of the rare "true" Ws AMD (back then ATI) had against NV.
 
Joined
Aug 21, 2013
Messages
1,921 (0.46/day)
That's very old info. Since RDNA, AMD no longer has a hard limit; Big Navi already had 80 CUs, and RDNA 3 topped out at 96.
Like I said, RDNA 4 high-end was canned; there will be no 80-96 CU die this time. I was not talking about a hard limit overall. If they make a massive die for UDNA 1, it could have 128 CUs for all we know. I was talking about the die size they're going with having 64 CUs max; it's a math thing. I heard they might have doubled the RT units per CU: before it was 1:1, and now it may be 2:1, which would explain the reported massive RT performance increase if the RT unit count goes from, say, 64 to 128.
Yes, and? You were talking about a W for AMD; that's not the case when you're 4% slower. Being the "budget king" is nothing special; they did this most of the time. Toyota also beats Mercedes on price-to-performance (though that analogy is flawed, because Toyota still sells a lot of cars, whereas AMD doesn't sell many GPUs compared to Nvidia).
Still a better deal at $150 cheaper, or even $250 cheaper if we account for the non-X 290. None of it mattered though, because people still bought Nvidia. I'm not arguing against that.
Tied when you compare reference vs. reference, yes, but the 980 Ti custom models were far ahead, so it was more like 10%, or 20% with an OC. The 980 Ti was simply better, especially if you didn't play at 4K; the Fury X had issues at lower resolutions due to having too many shaders and suboptimal DX11 drivers.
The Fury X was maxed out from the get-go, for sure. Yes, I remember the 980 Ti having good OC headroom, back when Nvidia still allowed vBIOS modding; they locked it down with the 10 series.
The Fury X and 980 Ti released at about the same time. Maybe you forgot that the Fury X was part of the R9 300 generation, and Maxwell, the GTX 900 series, was the competitor to that. Those were the GPUs for 2014/2015.
Unless Nvidia had inside knowledge of Fury X performance (the 980 Ti launched nearly a month before), I don't see how that's the case. Yes, series vs. series, but SKU vs. SKU, AMD had the advantage of launching later, and thus they likely adjusted their price to match the 980 Ti, for better or for worse.
 

AcE

Joined
Dec 3, 2024
Messages
116 (12.89/day)
I was talking about the die size they're going with having 64 CUs max; it's a math thing.
I didn't challenge that; I only challenged your notion that it would have fewer CUs than its predecessor, which will 100% not be the case. 60-64 is realistic, yes; I didn't say otherwise.
I heard they might have doubled the RT units per CU: before it was 1:1, and now it may be 2:1, which would explain the reported massive RT performance increase if the RT unit count goes from, say, 64 to 128.
It's a case of a Ray Accelerator vs. a real RT unit: the RA was only an additional part of the TMU, while the RT unit in RDNA 4 will probably be more like Nvidia's, its own unit, not shared with the TMUs, and way bigger. It's not about the amount; it's about size and capability. That's one RT core per double-unit, I guess, so around 32 RT cores max.
Unless Nvidia had inside knowledge of Fury X performance (the 980 Ti launched nearly a month before), I don't see how that's the case.
22 days later is still about the same time; I never said they launched in the exact same nanosecond. ;) And yes, they always have insider info; they always know, even months ahead, how fast the competition will be. The only thing they don't know is pricing; I've heard many times that pricing is always a last-second thing, whereas performance is the exact opposite.

And because AMD saw exactly how fast the 980 Ti was, they pushed the Fury X to 1,050 MHz, to the absolute limit; I think 1,000 MHz would have been its regular clock, and 250 W max rather than 275 W. Even like that it was underwhelming, and 4 GB of VRAM didn't help either. They tried to cushion this with "HBM is better" marketing, but nobody fell for it. The only good thing about HBM was that it didn't eat a lot of power in multi-monitor, idle, and video playback usage (things AMD had problems with back then, and partially still does today).
 
Joined
May 29, 2017
Messages
354 (0.13/day)
Location
Latvia
Processor AMD Ryzen™ 7 5700X
Motherboard ASRock B450M Pro4-F R2.0
Cooling Arctic Freezer A35
Memory Lexar Thor 32GB 3733Mhz CL16
Video Card(s) PURE AMD Radeon™ RX 7800 XT 16GB
Storage Lexar NM790 2TB + Lexar NS100 2TB
Display(s) HP X34 UltraWide IPS 165Hz
Case Zalman i3 Neo + Arctic P12
Audio Device(s) Airpulse A100 + Edifier T5
Power Supply Sharkoon Rebel P20 750W
Mouse Cooler Master MM730
Keyboard Krux Atax PRO Gateron Yellow
Software Windows 11 Pro
The stated performance seems too good to be true; the TDP is also stated as both 220 W and 265 W.

Let's hope the RX 8800 can match the RX 7900 XT in raster and the RTX 4070 Ti in RT performance for $499-549.

What's really bad is that the current-generation RTX 4000 series holds its value very well after two years. The worst price/performance-ratio generation ever released (also a consumer IQ test).
 

AcE

Joined
Dec 3, 2024
Messages
116 (12.89/day)
What's really bad is that the current-generation RTX 4000 series holds its value very well after two years. The worst price/performance-ratio generation ever released (also a consumer IQ test).
The 30 series was worse, but that was due to mining and scalping.
 