
New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

Joined
Oct 19, 2022
Messages
363 (0.42/day)
Location
Los Angeles, CA
Processor AMD Ryzen 7 9800X3D (+PBO 5.4GHz)
Motherboard MSI MPG X870E Carbon Wifi
Cooling ARCTIC Liquid Freezer II 280 A-RGB
Memory 2x32GB (64GB) G.Skill Trident Z Royal @ 6200MHz 1:1 (30-38-38-30)
Video Card(s) MSI GeForce RTX 4090 SUPRIM Liquid X
Storage Crucial T705 4TB (PCIe 5.0) w/ Heatsink + Samsung 990 PRO 2TB (PCIe 4.0) w/ Heatsink
Display(s) AORUS FO32U2P 4K QD-OLED 240Hz (DP 2.1 UHBR20 80Gbps)
Case CoolerMaster H500M (Mesh)
Audio Device(s) AKG N90Q w/ AudioQuest DragonFly Red (USB DAC)
Power Supply Seasonic Prime TX-1600 Noctua Edition (1600W 80Plus Titanium) ATX 3.1 & PCIe 5.1
Mouse Logitech G PRO X SUPERLIGHT
Keyboard Razer BlackWidow V3 Pro
Software Windows 10 64-bit
I agree with all your points, though 30% is on the very low end of the scale, and will (understandably) make a lot of people skip the gen :)
I agree with you 100%. If we remove MFG, the RTX 50 series doesn't provide any real improvement this generation. And apart from the 5090, which is 30-40% faster, the rest is 10-20% at best, which is ridiculous and very disappointing for a new generation. Nvidia didn't even try to make things better, and they also cheaped out by staying on a TSMC 4nm node; we were all expecting an N3P-class node for efficiency, but not even that... Imo people playing multiplayer games should definitely not upgrade. For single-player and mostly 3rd-person games (like The Last of Us, Tomb Raider, the Horizon series, God of War, etc.) it can be a different story, since those don't demand very low input lag. But real 4K@120fps will always be better than 4K@120fps via FG/MFG, for sure.
 
Joined
Feb 24, 2023
Messages
3,625 (4.91/day)
Location
Russian Wild West
System Name D.L.S.S. (Die Lekker Spoed Situasie)
Processor i5-12400F
Motherboard Gigabyte B760M DS3H
Cooling Laminar RM1
Memory 32 GB DDR4-3200
Video Card(s) RX 6700 XT (vandalised)
Storage Yes.
Display(s) MSi G2712
Case Matrexx 55 (slightly vandalised)
Audio Device(s) Yes.
Power Supply Thermaltake 1000 W
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
Benchmark Scores My PC can run Crysis. Do I really need more than that?
We will talk Monday lol. You said a 4090 is at least 10% faster overall, "full stop" lol.
And I got my confirmation.
[screenshot attachment]
 
Joined
Sep 17, 2014
Messages
23,428 (6.13/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
You called it 10%
At least. Seems deadly accurate to me. No surprises honestly, given the shader count; I'd even say it's a (very small) miracle the 5080 comes this close. Then again, it also needs a lot of juice to get there.

Disappointing but at least much better value than a 4090
Not sure; you're still missing a lot of bus width and VRAM, and it's still $1k for what is a heavily cut-down chip.
 

Hxx

Joined
Dec 5, 2013
Messages
350 (0.09/day)
At least. Seems deadly accurate to me. No surprises honestly, given the shader count; I'd even say it's a (very small) miracle the 5080 comes this close. Then again, it also needs a lot of juice to get there.


Not sure; you're still missing a lot of bus width and VRAM, and it's still $1k for what is a heavily cut-down chip.
Haven't read the whole review, but I thought the 5080 is more energy efficient than a 4090, no?
 
Joined
Sep 17, 2014
Messages
23,428 (6.13/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Haven't read the whole review, but I thought the 5080 is more energy efficient than a 4090, no?
About 10% better, probably because it only has to carry 16GB and a smaller bus. We saw something similar with the 4080 being the most efficient GPU in its stack: large enough not to need high clocks for good results (lower-range cards generally boost higher), but not so big that it wastes power on resources like extra VRAM.
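
As a rough sanity check on that figure, here's a minimal perf-per-watt sketch in Python, assuming the official 450 W / 360 W board power limits and the ~10% performance gap discussed in this thread (illustrative numbers, not review data):

```python
# Perf/W sketch: relative 4K performance divided by board power limit.
# rel_perf figures assume the ~10% 4090-vs-5080 gap quoted in this thread.
cards = {
    "RTX 4090": {"rel_perf": 1.00, "board_power_w": 450},
    "RTX 5080": {"rel_perf": 0.90, "board_power_w": 360},
}

base = cards["RTX 4090"]["rel_perf"] / cards["RTX 4090"]["board_power_w"]
for name, c in cards.items():
    eff = c["rel_perf"] / c["board_power_w"]
    print(f"{name}: {eff / base:.2f}x the 4090's perf per watt")
# -> RTX 5080: 1.12x, i.e. roughly the ~10% efficiency edge described above.
```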
 
Joined
Oct 19, 2022
Messages
363 (0.42/day)
Location
Los Angeles, CA
Processor AMD Ryzen 7 9800X3D (+PBO 5.4GHz)
Motherboard MSI MPG X870E Carbon Wifi
Cooling ARCTIC Liquid Freezer II 280 A-RGB
Memory 2x32GB (64GB) G.Skill Trident Z Royal @ 6200MHz 1:1 (30-38-38-30)
Video Card(s) MSI GeForce RTX 4090 SUPRIM Liquid X
Storage Crucial T705 4TB (PCIe 5.0) w/ Heatsink + Samsung 990 PRO 2TB (PCIe 4.0) w/ Heatsink
Display(s) AORUS FO32U2P 4K QD-OLED 240Hz (DP 2.1 UHBR20 80Gbps)
Case CoolerMaster H500M (Mesh)
Audio Device(s) AKG N90Q w/ AudioQuest DragonFly Red (USB DAC)
Power Supply Seasonic Prime TX-1600 Noctua Edition (1600W 80Plus Titanium) ATX 3.1 & PCIe 5.1
Mouse Logitech G PRO X SUPERLIGHT
Keyboard Razer BlackWidow V3 Pro
Software Windows 10 64-bit
The 5080 was never going to be faster than the 4090 with only ~5% more CUDA cores than the 4080 Super, guys... Unless they had modified the architecture for much better IPC, maybe 30%, it was never going to happen. I really wish Blackwell were a real step up from Lovelace, but apart from MFG we're not getting anything truly new as of now. Even the RT cores are not meaningfully more powerful at the same core count, the way they were when Ampere and Lovelace launched.

 
Joined
Mar 29, 2023
Messages
1,288 (1.82/day)
Processor Ryzen 7800x3d
Motherboard Asus B650e-F Strix
Cooling Corsair H150i Pro
Memory Gskill 32gb 6000 mhz cl30
Video Card(s) RTX 4090 Gaming OC
Storage Samsung 980 pro 2tb, Samsung 860 evo 500gb, Samsung 850 evo 1tb, Samsung 860 evo 4tb
Display(s) Acer XB321HK
Case Coolermaster Cosmos 2
Audio Device(s) Creative SB X-Fi 5.1 Pro + Logitech Z560
Power Supply Corsair AX1200i
Mouse Logitech G700s
Keyboard Logitech G710+
Software Win10 pro
The 5080 was never going to be faster than the 4090 with only ~5% more CUDA cores than the 4080 Super, guys... Unless they had modified the architecture for much better IPC, maybe 30%, it was never going to happen. I really wish Blackwell were a real step up from Lovelace, but apart from MFG we're not getting anything truly new as of now. Even the RT cores are not meaningfully more powerful at the same core count, the way they were when Ampere and Lovelace launched.


Tbf there was only 1 guy claiming the 5080 was gonna be faster than the 4090, and he obviously has no clue about this stuff.
 
Joined
Oct 19, 2022
Messages
363 (0.42/day)
Location
Los Angeles, CA
Processor AMD Ryzen 7 9800X3D (+PBO 5.4GHz)
Motherboard MSI MPG X870E Carbon Wifi
Cooling ARCTIC Liquid Freezer II 280 A-RGB
Memory 2x32GB (64GB) G.Skill Trident Z Royal @ 6200MHz 1:1 (30-38-38-30)
Video Card(s) MSI GeForce RTX 4090 SUPRIM Liquid X
Storage Crucial T705 4TB (PCIe 5.0) w/ Heatsink + Samsung 990 PRO 2TB (PCIe 4.0) w/ Heatsink
Display(s) AORUS FO32U2P 4K QD-OLED 240Hz (DP 2.1 UHBR20 80Gbps)
Case CoolerMaster H500M (Mesh)
Audio Device(s) AKG N90Q w/ AudioQuest DragonFly Red (USB DAC)
Power Supply Seasonic Prime TX-1600 Noctua Edition (1600W 80Plus Titanium) ATX 3.1 & PCIe 5.1
Mouse Logitech G PRO X SUPERLIGHT
Keyboard Razer BlackWidow V3 Pro
Software Windows 10 64-bit
Tbf there was only 1 guy claiming the 5080 was gonna be faster than the 4090, and he obviously has no clue about this stuff.
Yeah, we all wanted to believe that NVIDIA would make something like they did with Maxwell, but we all knew they're too greedy nowadays and that it would all be about AI anyway. The fact that they stayed on TSMC 4nm says a lot about how much they really wanted to improve raw performance & efficiency...
 
Joined
Feb 24, 2021
Messages
187 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master
Well, I expected the 5080 to be 5% faster than a 4090, similar to how the 4070 Ti was at 3090 level.
The 3090 was barely 10% faster than the 3080 and didn't deserve its name or price tag. The RTX 3090 used a large GA102 die at 628mm^2, but it was only that big because it was built on a cheap 8nm manufacturing process; if it had been built on N7 or N6, which AMD used for its competing RX 6000 series, the die would have been around 450-500mm^2, similar to GP102 (used for the GTX 1080 Ti and Titan X Pascal, the last time Nvidia used a cutting-edge node for a new GPU generation). And then Nvidia released the RTX 4090 with a 608mm^2 die on 4N, by far the largest consumer GPU die they have ever built on a cutting-edge node, which justifies its position in the higher "90" performance class that they had previously (before the 3090) only used for dual-die graphics cards.
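
As a rough cross-check on that die-size estimate, here's a back-of-the-envelope density scaling in Python, using Navi 21's published transistor count and area as an (imperfect) proxy for achievable N7 density in a gaming GPU; treat it as an illustration, not a verified figure:

```python
# Back-of-the-envelope: how big might GA102 have been on TSMC N7?
# Published figures: GA102 = 28.3 bn transistors / 628 mm^2 (Samsung 8nm),
# Navi 21 = 26.8 bn transistors / 520 mm^2 (TSMC N7).
ga102_transistors_bn = 28.3
navi21_transistors_bn = 26.8
navi21_area_mm2 = 520

n7_density = navi21_transistors_bn / navi21_area_mm2   # ~0.052 bn per mm^2
ga102_on_n7_mm2 = ga102_transistors_bn / n7_density
print(f"GA102 on N7: ~{ga102_on_n7_mm2:.0f} mm^2")     # ~549 mm^2
# Same ballpark as the 450-500 mm^2 guess above; the comparison is crude,
# since Navi 21's big Infinity Cache skews its average transistor density.
```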

At least in workstation tasks and games that had competent implementations of SLI, the GTX 780 was not faster than the GTX 690, and the GTX 680 was not faster than the GTX 590. Based on this history, there would be little reason to expect the 5080 to beat the 4090, at least not consistently.

Meanwhile, the RTX 3090 is effectively a misnamed Titan (i.e. barely faster than the 80 Ti of the same generation, but with extra VRAM for workstation tasks). The GTX 1070 Ti was about 5% faster than the Titan X. The RTX 3070 Ti was about 5% faster than the Titan RTX. The 4070 Ti being about 5% faster than the 3090 puts the 3090 in the same group as these Titans, not in the same group as either the 4090 or older dual-die 90-class Geforce cards.

That said, the 5080 is still worse than it probably should have been. Like the 5080 vs the 4080, the 2080 was built on a refined version of the same manufacturing process as the 1080, but it was a much larger die, added Ray Tracing and Tensor cores, and was about 30% faster on average, albeit also ~15% more expensive and generally regarded as bad value as a result (especially as the GTX 1080 Ti was basically the same speed and price, and had 11GB of VRAM). The 5080 is the same price as the 4080 Super, about the same die size, and barely 10% faster, so its value uplift is similar to the underwhelming RTX 2080's; but it's not as bad as the (IMO unfair, for the reasons above) comparison against the RTX 4080's uplift over the 3090 makes it look.
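
For what it's worth, here's that value comparison as a quick calculation, using the rough perf and price deltas quoted above (illustrative figures only):

```python
# Value uplift = relative performance gain / relative price change.
# Perf and price multipliers are the approximate figures from the post above.
gens = {
    "GTX 1080  -> RTX 2080": {"perf": 1.30, "price": 1.15},
    "RTX 4080S -> RTX 5080": {"perf": 1.10, "price": 1.00},
}
for name, g in gens.items():
    print(f"{name}: {g['perf'] / g['price']:.2f}x performance per dollar")
# -> ~1.13x vs ~1.10x: two similarly underwhelming generational value uplifts.
```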

who knew Nvidia would have the audacity to push the 50 series as a 4x frame-gen patch with faster memory.
It's built on a minor refinement of the same node, and Nvidia hasn't significantly changed their CUDA architecture in a decade. Nvidia's generational performance uplifts since Maxwell have mostly come from more advanced manufacturing processes, which allowed increases in core count and clock frequency, not from architectural improvements. While they have obviously made some changes to the architecture, the most significant have been adding tensor and RT cores, adding cache, supporting new types of VRAM, and improving encoding, rather than improving the design of the CUDA cores, ROPs, and TMUs, which are responsible for rasterisation performance.
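
You can see this in the published specs: theoretical FP32 throughput is just shaders × 2 (FMA) × clock, and the generational gaps line up with that product almost exactly. A minimal sketch using the official boost clocks (real cards boost higher, but the ratios are the point):

```python
# Theoretical FP32 TFLOPS = shader count * 2 ops (FMA) * boost clock (GHz).
# Shader counts and boost clocks are the official published specs.
gpus = {
    "RTX 4080 Super": (10240, 2.55),
    "RTX 5080":       (10752, 2.62),
    "RTX 4090":       (16384, 2.52),
}
for name, (shaders, ghz) in gpus.items():
    print(f"{name}: {shaders * 2 * ghz / 1000:.1f} TFLOPS")
# 5080 vs 4080 Super: ~1.08x on paper, close to the ~10% measured gap --
# consistent with the uplift being shaders x clock, not per-core (IPC) gains.
```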
 
Joined
Feb 24, 2023
Messages
3,625 (4.91/day)
Location
Russian Wild West
System Name D.L.S.S. (Die Lekker Spoed Situasie)
Processor i5-12400F
Motherboard Gigabyte B760M DS3H
Cooling Laminar RM1
Memory 32 GB DDR4-3200
Video Card(s) RX 6700 XT (vandalised)
Storage Yes.
Display(s) MSi G2712
Case Matrexx 55 (slightly vandalised)
Audio Device(s) Yes.
Power Supply Thermaltake 1000 W
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 11 / 10 / 8
Benchmark Scores My PC can run Crysis. Do I really need more than that?
RTX 3070 Ti was about 5% faster than the Titan RTX.
I'm not disagreeing with you, but that Titan is so heavily power-constrained it's not even funny. One could easily squeeze another 20 (!) % of performance out of that chip by overclocking it (which required replacing the cooling system). That puts the Titan ahead, although at this point it's significantly cheaper to get a somewhat faster 3080. I've seen overclocked 2080 Tis beating an overclocked 3070 Ti by 10+ % as well, for the same reason.
The 3090 was barely 10% faster than the 3080 and didn't deserve its name
Mostly thanks to the 3080 being slightly overtuned (the first xx80 ever to come with a 320-bit bus) and the double-layer VRAM hindering the 3090 die's power budget.
or price tag
It deserved all the freedom in the world for that. There were zero GPUs faster than it at the time, and in a free market, the best decides how much it costs, not the other way around. If, say, AMD had released some 1337-dollar 6999 XT (say, 120 CUs @ 2.3 GHz and 24 GB VRAM @ 18 GT/s) that beat the 3090 convincingly, then yes, 1500 USD would be a stretch. But that never happened.

The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be but it's still not totally stagnant.
 
Joined
Oct 5, 2024
Messages
213 (1.42/day)
Location
United States of America
I agree with you 100%. If we remove MFG, the RTX 50 series doesn't provide any real improvement this generation. And apart from the 5090, which is 30-40% faster, the rest is 10-20% at best, which is ridiculous and very disappointing for a new generation. Nvidia didn't even try to make things better, and they also cheaped out by staying on a TSMC 4nm node; we were all expecting an N3P-class node for efficiency, but not even that... Imo people playing multiplayer games should definitely not upgrade. For single-player and mostly 3rd-person games (like The Last of Us, Tomb Raider, the Horizon series, God of War, etc.) it can be a different story, since those don't demand very low input lag. But real 4K@120fps will always be better than 4K@120fps via FG/MFG, for sure.
I don't understand the benefit of FG/MFG here. If a game is slow enough that input lag is not a concern (Indiana Jones, etc), why does the framerate matter at all (beyond a certain threshold)? If a game requires low input lag (any FPS, multiplayer games, etc), FG/MFG is not good enough, just plain inferior.

Educate me on why high framerate matters in games that don't worry about input lag in the first place.
 
Joined
Sep 17, 2014
Messages
23,428 (6.13/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I don't understand the benefit of FG/MFG here. If a game is slow enough that input lag is not a concern (Indiana Jones, etc), why does the framerate matter at all (beyond a certain threshold)? If a game requires low input lag (any FPS, multiplayer games, etc), FG/MFG is not good enough, just plain inferior.

Educate me on why high framerate matters in games that don't worry about input lag in the first place.
Well, the input lag doesn't get notably worse, but the higher framerate does enable smoother images. If you were getting, and were fine with, 30 fps worth of latency, it's also fine if you can then get 60 fps. I'm not sure about the added benefit of 90-120 beyond that, but still, if you have a high-refresh display, why not.

But the vast majority will not understand it like that; they'll just think, haha, 'free' frames. The experience of playing something at 60 FPS with MFG that is in fact running at 15 is going to be a first for them, and then they'll learn.
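
To make the framerate-vs-latency trade-off concrete, here's a toy model in Python. The assumption that interpolation holds back roughly one extra base frame is mine for illustration, not a published NVIDIA figure:

```python
# Toy model: MFG multiplies displayed frames, but input latency stays tied
# to the base render rate (plus ~1 held frame for interpolation, assumed).
def mfg(base_fps: float, multiplier: int) -> tuple[float, float]:
    displayed_fps = base_fps * multiplier
    latency_ms = (1000 / base_fps) * 2  # current frame + held frame (assumed)
    return displayed_fps, latency_ms

for base in (15, 30, 60):
    fps, lat = mfg(base, 4)
    print(f"{base:>2} fps base -> {fps:.0f} fps displayed, ~{lat:.0f} ms latency")
# 15 fps base "looks like" 60 fps, yet still carries ~133 ms of input lag.
```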
 
Joined
Oct 5, 2024
Messages
213 (1.42/day)
Location
United States of America
Well, the input lag doesn't get notably worse, but the higher framerate does enable smoother images. If you were getting, and were fine with, 30 fps worth of latency, it's also fine if you can then get 60 fps. I'm not sure about the added benefit of 90-120 beyond that, but still, if you have a high-refresh display, why not.

But the vast majority will not understand it like that; they'll just think, haha, 'free' frames. The experience of playing something at 60 FPS with MFG that is in fact running at 15 is going to be a first for them, and then they'll learn.
I agree that 60 "FPS" in your example is fine if I am also fine with 30 FPS worth of latency. But that is just tolerating the situation; I would rather have 40 FPS worth of latency.

Or it doesn't matter for a particular game, and the 60 "FPS" feels identical to me to the 30 actual FPS. Either the floaty, laggy gameplay is the worst thing and I need more actual framerate, or it's perfectly unnoticeable and the framerate doesn't matter either; in both scenarios, the benefits of FG seem to disappear.

I can see the benefits of DLSS in image quality, faster FPS, etc., but frame generation just sounds like a solution in search of a problem.
 
Joined
Sep 17, 2014
Messages
23,428 (6.13/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I don't know, man; image quality is also increased quite a bit by having sufficient FPS, especially motion resolution. The problem used to be that earlier versions of DLSS had all kinds of issues when you moved the viewport; those issues are slowly fading away now. That really is the kind of progress we should be cheering for. The only remaining problem is... having it universally applicable, and not on Nvidia's say-so.

I have patience ;)
 
Joined
Feb 24, 2021
Messages
187 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master
Mostly thanks to the 3080 being slightly overtuned (the first xx80 ever to come with a 320-bit bus) and the double-layer VRAM hindering the 3090 die's power budget.
It's technically true that the 3080 is the only xx80 with a 320-bit bus, but the GTX 480, 580, and 780 were all 384-bit, wider than 320-bit, so the point doesn't really stand, IMO.

Aside from that, differences in memory buses are more a result of RAM technology and cache design, rather than directly indicating a GPU's performance tier.
For example, the RTX 3060 Ti had a 256-bit bus, but competed against the RX 6700 XT which had a 192-bit bus and extra cache. The RTX 4070 was the same or higher tier of the next generation, and similar to the 6700 XT, had a 192-bit bus with extra cache. The 320-bit RTX 3080 was slower and had much less VRAM capacity than the 256-bit RX 6900 XT.
Bus width is part of the comparison, for sure, but I don't think it makes sense to base judgements of a GPU on bus width alone, without also accounting for the cache or the type and capacity of VRAM connected to it.
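
A quick illustration of that point from the public specs: raw DRAM bandwidth is bus width × data rate ÷ 8, but the on-die cache next to it is the other half of the story (cache sizes below are the published figures as I recall them):

```python
# Raw memory bandwidth (GB/s) = bus width (bits) * data rate (Gbps) / 8.
# Large caches let narrow-bus cards punch above their raw bandwidth.
cards = [
    ("RTX 3060 Ti", 256, 14, "4 MB L2"),
    ("RX 6700 XT",  192, 16, "96 MB Infinity Cache"),
    ("RTX 4070",    192, 21, "36 MB L2"),
    ("RTX 3080",    320, 19, "5 MB L2"),
    ("RX 6900 XT",  256, 16, "128 MB Infinity Cache"),
]
for name, bus_bits, gbps, cache in cards:
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s + {cache}")
# The 192-bit RTX 4070 keeps pace with much wider-bus cards thanks to its
# 36 MB L2 -- bus width alone is a misleading performance-tier indicator.
```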

Either way, I can still concede the argument that the RTX 3080 was overtuned. But even if it had been based on the GA103 die (which is what Nvidia had allegedly originally intended, before realising that Samsung 8nm yields were worse than expected and Samsung supposedly gave them a better deal on GA102), the RTX 3090 would still have been only about 15% faster than the 3080.

Plus, the RTX 3090 Ti didn't have double-layer VRAM, still wasn't that much faster than the 3090, and had atrocious efficiency, while costing a ridiculous $2000.

It deserved all the freedom in the world for that. There were zero GPUs faster than it at the time, and in a free market, the best decides how much it costs.
I don't agree at all with the implication here that being the fastest GPU justifies charging arbitrarily high prices. The RTX 4090 had even less competition than the RTX 3090, and delivered a huge uplift over the RTX 4080 (which was itself significantly faster than the RTX 3090, and significantly more expensive than the 3080), despite the 4090 not being much more expensive than the 3090. The RTX 4090 actually justified its price compared to the 4080.

The RTX 3090 was just bad for everything except mining and AI, and it doesn't get anywhere near as much criticism as it deserves (I guess at least it was significantly cheaper than the Titan RTX? But 24GB of VRAM wasn't as revolutionary as it had been the generation before, and the Titan supported a few Quadro/Pro driver features which the 3090 didn't). The 6900 XT was 90% as fast as the RTX 3090 and more efficient, for 2/3 the price, while the RX 7900 XTX was only about 80% as fast as the 4090 and didn't have an efficiency advantage.

...
I agree with your point about the Titan RTX's power limit, but that's also applicable to most other Titan GPUs, most of which had the same TDP as their 80 Ti counterparts. A stock Titan X Pascal was often slower than a 1080 Ti, but had more cores and VRAM and could be significantly faster if overclocked with a good enough cooler.

The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be but it's still not totally stagnant.
I definitely agree with that.
I would love it if the RX 9070 XT is able to match the RTX 4080 Super, as some (possibly optimistic?) leaks have indicated. If it's <$600 and only 5-10% slower, maybe AMD actually will have something to show the 5080 its place?
It would still be a much more definitive showing if AMD had a 9080 XT which matches the 5080 at a lower price, and a 9090 XT which beats it while still being substantially cheaper than the 5090.
 