Friday, January 24th 2025

New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

A set of newly leaked benchmarks gives an early look at the performance of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the card was spotted on Geekbench in OpenCL and Vulkan benchmark tests, and based on these results it may struggle to claim a spot among the best graphics cards. The listing points to an MSI test system (model MS-7E62) pairing the RTX 5080 with AMD's Ryzen 7 9800X3D processor, which many consider one of the best CPUs for gaming, an MSI MPG 850 Edge TI Wi-Fi motherboard, and 32 GB of DDR5-6000 memory.

The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared to the RTX 4080, its direct predecessor, that works out to a 22% boost in Vulkan performance but only a modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also found that on the Blender Open Data platform the GPU posted a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 may not outperform the current-generation flagship, the RTX 4090. NVIDIA's new 80-class GPUs have historically beaten the previous generation's top card, but these early numbers suggest that trend may not continue with the RTX 5080.
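For context, the quoted gains can be back-calculated into implied RTX 4080 baseline scores. The sketch below is purely illustrative: the RTX 5080 figures come from the leak, while the RTX 4080 numbers are derived from the quoted percentages rather than taken from a separate Geekbench listing.

```python
# Derive the implied RTX 4080 baselines from the leaked RTX 5080 scores and quoted uplifts.
# Illustrative only: the 4080 figures are back-calculated, not independently measured.
rtx5080_scores = {"Vulkan": 261_836, "OpenCL": 256_138}
quoted_uplift = {"Vulkan": 0.22, "OpenCL": 0.067}

for api, score in rtx5080_scores.items():
    implied_4080 = score / (1 + quoted_uplift[api])
    print(f"{api}: RTX 5080 = {score:,}, implied RTX 4080 baseline ≈ {implied_4080:,.0f}")
# Vulkan: RTX 5080 = 261,836, implied RTX 4080 baseline ≈ 214,620
# OpenCL: RTX 5080 = 256,138, implied RTX 4080 baseline ≈ 240,054
```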
The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs) versus the 9,728 cores in the RTX 4080. It has 16 GB of GDDR7 memory on a 256-bit bus. NVIDIA says it can deliver 1,801 TOPS in AI performance through Tensor Cores and 171 TeraFLOPS of ray tracing performance using its RT Cores.
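As a rough sanity check of those specifications, the arithmetic below works out the cores per SM, peak memory bandwidth, and theoretical FP32 throughput. The 30 Gbps GDDR7 data rate and ~2.62 GHz boost clock are assumptions based on NVIDIA's published spec sheet, not figures from the leak itself.

```python
# Back-of-the-envelope math for the RTX 5080 specs quoted above.
# Assumptions (not from the leak): 30 Gbps GDDR7 and a ~2.62 GHz boost clock.
cuda_cores = 10_752
sms = 84
bus_width_bits = 256
gddr7_gbps = 30          # assumed per-pin data rate
boost_clock_ghz = 2.62   # assumed boost clock

cores_per_sm = cuda_cores // sms                        # 128 FP32 cores per SM
bandwidth_gbs = bus_width_bits * gddr7_gbps / 8         # 960 GB/s peak memory bandwidth
fp32_tflops = 2 * cuda_cores * boost_clock_ghz / 1000   # ~56.3 TFLOPS (2 FLOPs per core per clock)

print(cores_per_sm, bandwidth_gbs, round(fp32_tflops, 1))
```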

That said, it's important to note that these benchmark results have not been independently verified, so it is worth waiting for the review embargo to lift before drawing conclusions.
Sources: DigitalTrends, TruthPhoenixV

215 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

#201
Macro Device
Hxx: we will talk Monday lol. You said a 4090 is at least 10% faster overall - "full stop" lol
And I got my confirmation.
#202
Hxx
Macro Device: And I got my confirmation.
You called it, 10%. Disappointing but at least much better value than a 4090.
#203
Vayra86
Hxx: You called it, 10%
At least. Seems deadly accurate to me. No surprises honestly given the shader count; I'd even say it's a (very small) miracle the 5080 comes this close, then again, it also needs a lot of juice to get there.
Hxx: Disappointing but at least much better value than a 4090
Not sure; you're still missing a lot of bus width and VRAM, and it's still $1k for what is a heavily cut-down chip.
#204
Hxx
Vayra86: At least. Seems deadly accurate to me. No surprises honestly given the shader count; I'd even say it's a (very small) miracle the 5080 comes this close, then again, it also needs a lot of juice to get there.

Vayra86: Not sure; you're still missing a lot of bus width and VRAM, and it's still $1k for what is a heavily cut-down chip.
Haven't read the whole review, but I thought the 5080 is more energy efficient than a 4090, no?
#205
Vayra86
Hxx: Haven't read the whole review, but I thought the 5080 is more energy efficient than a 4090, no?
Probably 10% better, because it only has to carry 16 GB and a smaller bus. We saw something similar with the 4080 being the most efficient GPU in the stack: large enough not to need high clocks for good results (lower-range cards generally boost higher), but not so big that it wastes resources like extra VRAM that still needs power.
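As a rough illustration of how that efficiency gap can be estimated, here is a minimal performance-per-watt sketch. It assumes the 360 W and 450 W rated board powers of the RTX 5080 and RTX 4090 and the roughly 10% overall performance gap discussed in this thread; measured gaming power draw will differ from the rated figures.

```python
# Minimal perf/W sketch (assumptions: rated board power and a ~10% overall performance gap).
perf_5080, power_5080 = 1.00, 360   # normalised performance, rated board power (W)
perf_4090, power_4090 = 1.10, 450

eff_5080 = perf_5080 / power_5080
eff_4090 = perf_4090 / power_4090
print(f"RTX 5080 perf/W advantage: {(eff_5080 / eff_4090 - 1) * 100:.0f}%")  # ~14% on these inputs
```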
#206
x4it3n
The 5080 was never going to be faster than the 4090 with only 5% more CUDA cores, guys... Unless they modified the architecture to get much better IPC, like maybe 30%, it was never going to happen. I really wish Blackwell were a step up from Lovelace, but apart from MFG we're not getting anything really new as of now. Even the RT Cores are not truly more powerful at the same core count, like they were when Ampere or Lovelace released.

images.nvidia.com/aem-dam/Solutions/geforce/ada/news/rtx-40-series-graphics-cards-announcements/nvidia-ada-lovelace-architecture-processing-throughput-performance-efficiency.jpg
#207
Dragam1337
x4it3n: The 5080 was never going to be faster than the 4090 with only 5% more CUDA cores, guys... Unless they modified the architecture to get much better IPC, like maybe 30%, it was never going to happen. I really wish Blackwell were a step up from Lovelace, but apart from MFG we're not getting anything really new as of now. Even the RT Cores are not truly more powerful at the same core count, like they were when Ampere or Lovelace released.

images.nvidia.com/aem-dam/Solutions/geforce/ada/news/rtx-40-series-graphics-cards-announcements/nvidia-ada-lovelace-architecture-processing-throughput-performance-efficiency.jpg
Tbf there was only 1 guy claiming the 5080 was gonna be faster than the 4090, and he obviously has no clue about this stuff.
#208
x4it3n
Dragam1337: Tbf there was only 1 guy claiming the 5080 was gonna be faster than the 4090, and he obviously has no clue about this stuff.
Yeah, we all wanted to believe that NVIDIA would pull off something like they did with Maxwell, but we all knew they're too greedy nowadays and it would all be about AI anyway. The fact that they stayed on TSMC 4nm says a lot about how much they really wanted to improve raw performance & efficiency...
#209
Speedyblupi
N/A: Well, I expected the 5080 to be 5% faster than a 4090, similar to how the 4070 Ti was at 3090 level
The 3090 was barely 10% faster than the 3080 and didn't deserve its name or price tag. The RTX 3090 used a large GA102 die at 628mm^2, but it was only so big because it was based on a cheap 8nm manufacturing process; if it had been built on N7 or N6, which AMD used for its competing RX 6000-series, the RTX 3090 would have been around 450-500mm^2, similar to GP102 (used for the GTX 1080 Ti and Titan Pascal, which was the last time Nvidia used a cutting-edge node for a new GPU generation). And then Nvidia released the RTX 4090 with a 608mm^2 die on 4N, which is by far the largest consumer GPU die they have ever built on a cutting-edge node, and justifies its position in the higher "90" performance class which they had previously (before the 3090) only used for dual-GPU graphics cards.

At least in workstation tasks and games that had competent implementations of SLI, the GTX 780 was not faster than the GTX 690, and the GTX 680 was not faster than the GTX 590. Based on this history, there would be little reason to expect the 5080 to beat the 4090, at least not consistently.

Meanwhile, the RTX 3090 is effectively a misnamed Titan (i.e. barely faster than the 80 Ti of the same generation, but with extra VRAM for workstation tasks). The GTX 1070 Ti was about 5% faster than the Titan X. The RTX 3070 Ti was about 5% faster than the Titan RTX. The 4070 Ti being about 5% faster than the 3090 puts the 3090 in the same group as these Titans, not in the same group as either the 4090 or the older dual-GPU 90-class GeForce cards.

That said, the 5080 is still worse than it probably should have been. Like the 5080 vs the 4080, the 2080 was built on a refined version of the same manufacturing process as the 1080, but it was a much larger die, added Ray Tracing and Tensor cores, and was about 30% faster on average, albeit also ~15% more expensive and generally regarded as bad value as a result (especially as the GTX 1080 Ti was basically the same speed and price, and had 11GB VRAM). The 5080 is the same price as the 4080 Super, about the same die size, and barely 10% faster, so its value uplift is similar to what the underwhelming RTX 2080 delivered, but it's not as bad as the (IMO unfair, for the reasons above) comparison based on the performance uplift of the RTX 4080 over the 3090 makes it look.
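To make that "value uplift" comparison concrete, here is a quick perf-per-dollar sketch using the figures quoted in this post; the ~30%/~15% and ~10% numbers are the poster's estimates rather than measured data.

```python
# Perf-per-dollar sketch using the rough generational figures quoted above.
def value_uplift(perf_gain: float, price_increase: float) -> float:
    """Relative perf/$ improvement, given performance gain and price increase as fractions."""
    return (1 + perf_gain) / (1 + price_increase) - 1

print(f"RTX 2080 vs GTX 1080:       ~{value_uplift(0.30, 0.15) * 100:.0f}% better perf/$")  # ~13%
print(f"RTX 5080 vs RTX 4080 Super: ~{value_uplift(0.10, 0.00) * 100:.0f}% better perf/$")  # ~10%
```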
N/A: Who knew Nvidia would have the audacity to push the 50 series as a 4x frame gen patch with faster memory.
It's built on a minor refinement of the same node, and Nvidia hasn't significantly changed their CUDA architecture in a decade. Nvidia's generational performance uplifts since Maxwell have mostly come from more advanced manufacturing processes, which allowed increases in core count and clock frequency, not from architectural improvements. While they have obviously made some changes to the architecture, the most significant ones have been to add Tensor and RT cores, add cache, support new types of VRAM, and improve encoding, rather than to improve the design of the CUDA cores, ROPs, and TMUs, which are responsible for rasterisation performance.
#210
Macro Device
Speedyblupi: The RTX 3070 Ti was about 5% faster than the Titan RTX.
I'm not disagreeing with you, but that Titan is so heavily power constrained it's not even funny. One could easily squeeze another 20 (!) % performance from that chip by overclocking it (which required replacing the cooling system). That puts the Titan ahead, although it's significantly cheaper to get a somewhat faster 3080 at this point. I have seen overclocked 2080 Tis beating an overclocked 3070 Ti by 10+ % as well, for the same reason.
Speedyblupi: The 3090 was barely 10% faster than the 3080 and didn't deserve its name
Mostly thanks to the 3080 being slightly overtuned (the first xx80 ever to come with a 320-bit bus) and the double-layer VRAM hindering the 3090 die's power budget.
Speedyblupi: or price tag
It deserved all the freedom in the world for that. There were zero GPUs faster than it at the time, and in a free market the best product decides how much it costs, not the other way around. If, say, AMD had released some 1337-dollar 6999 XT (say, 120 CUs @ 2.3 GHz and 24 GB VRAM @ 18 GT/s) that beat the 3090 convincingly, then yes, 1500 USD would be a stretch. But that never happened.

The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be but it's still not totally stagnant.
#211
TechBuyingHavoc
x4it3n: I agree with you 100%. If we remove MFG, the RTX 50s will not provide any real improvements this generation. Except for the 5090, which is 30-40% faster, the rest will be 10-20% at best, which is ridiculous and very disappointing for a new generation. Nvidia didn't even try to make things better, and they also cheaped out by using a 4nm TSMC node; we were all expecting an N3P-like node for efficiency, but not even that... Imo people playing multiplayer games should definitely not upgrade. For single-player games (mostly 3rd-person titles like The Last of Us, Tomb Raider, the Horizon series, God of War, etc.) it can be a different story, since they do not require very low input lag. But real 4K@120fps will always be better than 4K@120fps with FG/MFG, for sure.
I don't understand the benefit of FG/MFG here. If a game is slow enough that input lag is not a concern (Indiana Jones, etc), why does the framerate matter at all (beyond a certain threshold)? If a game requires low input lag (any FPS, multiplayer games, etc), FG/MFG is not good enough, just plain inferior.

Educate me on why high framerate matters in games that don't worry about input lag in the first place.
#212
Vayra86
TechBuyingHavoc: I don't understand the benefit of FG/MFG here. If a game is slow enough that input lag is not a concern (Indiana Jones, etc), why does the framerate matter at all (beyond a certain threshold)? If a game requires low input lag (any FPS, multiplayer games, etc), FG/MFG is not good enough, just plain inferior.

Educate me on why high framerate matters in games that don't worry about input lag in the first place.
Well, the input lag doesn't get notably worse, but the higher framerate does enable smoother images. If you were getting, and are fine with, 30 fps worth of latency, it's also fine if you can then get 60 fps. Not sure about the added benefits of 90-120 then, but still, if you have a high refresh display, why not.

But the vast majority will not understand it like that, they'll just think haha, 'free' frames. The experience of playing something at 60 FPS with MFG that is in fact running at 15 is going to be a first for them, and then they'll learn.
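A small sketch of that trade-off, with deliberately simplified numbers: the displayed framerate scales with the MFG factor, while responsiveness still tracks the rendered framerate (real frame pacing and the extra latency frame generation adds are more complicated than this).

```python
# Simplified model of multi-frame generation: more displayed frames, same rendered frames.
def with_mfg(render_fps: float, factor: int) -> tuple[float, float]:
    """Return (displayed fps, approx. ms per rendered frame) for a given MFG factor."""
    displayed_fps = render_fps * factor   # extra frames are generated, not rendered
    latency_ms = 1000 / render_fps        # responsiveness still follows the rendered rate
    return displayed_fps, latency_ms

for base in (15, 30, 60):
    fps, lat = with_mfg(base, 4)          # 4x MFG as advertised for the RTX 50 series
    print(f"{base} fps rendered -> {fps:.0f} fps displayed, ~{lat:.0f} ms per rendered frame")
```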
#213
TechBuyingHavoc
Vayra86: Well, the input lag doesn't get notably worse, but the higher framerate does enable smoother images. If you were getting, and are fine with, 30 fps worth of latency, it's also fine if you can then get 60 fps. Not sure about the added benefits of 90-120 then, but still, if you have a high refresh display, why not.

But the vast majority will not understand it like that, they'll just think haha, 'free' frames. The experience of playing something at 60 FPS with MFG that is in fact running at 15 is going to be a first for them, and then they'll learn.
I agree that 60 "FPS" in your example is fine if I am also fine with 30 FPS worth of latency. But that is just tolerating the situation; I would rather have 40 FPS worth of latency.

Or it doesn't matter for a particular game, and the 60 "FPS" feels identical to me to the 30 actual FPS. Either the floaty, laggy gameplay is the worst and I need more actual framerate, or it is perfectly unnoticeable and the framerate doesn't matter either; in both scenarios the benefits of FG seem to disappear.

I can see the benefits of DLSS in image quality, faster FPS, etc., but frame generation just sounds like a solution in search of a problem.
#214
Vayra86
I don't know man, image quality is also increased quite a bit by having sufficient FPS. Especially motion resolution. The problem with that then was that earlier versions of DLSS had all kinds of issues when you moved the viewport. Those issues are slowly fading away now. That really is the kind of progress we should be cheering at. The only remaining problem then is... having it universally applicable and not on Nvidia's say so.

I have patience ;)
#215
Speedyblupi
Macro Device: Mostly thanks to the 3080 being slightly overtuned (the first xx80 ever to come with a 320-bit bus) and the double-layer VRAM hindering the 3090 die's power budget.
It's technically true that the 3080 is the only xx80 with a 320-bit bus, but the GTX 480, 580, and 780 were all 384-bit, wider than 320-bit, so the point doesn't really stand, IMO.

Aside from that, differences in memory buses are more a result of RAM technology and cache design, rather than directly indicating a GPU's performance tier.
For example, the RTX 3060 Ti had a 256-bit bus, but competed against the RX 6700 XT which had a 192-bit bus and extra cache. The RTX 4070 was the same or higher tier of the next generation, and similar to the 6700 XT, had a 192-bit bus with extra cache. The 320-bit RTX 3080 was slower and had much less VRAM capacity than the 256-bit RX 6900 XT.
Bus width is part of the comparison, for sure, but I don't think it makes sense to base judgements of a GPU on bus width alone, without also accounting for the cache or the type and capacity of VRAM connected to it.
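As a quick illustration of why bus width alone is misleading, the sketch below works out peak bandwidth for the cards mentioned above; the per-pin data rates are the commonly published figures for each card and are assumptions here rather than numbers from this thread.

```python
# Peak memory bandwidth from bus width and per-pin data rate (published figures assumed).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(320, 19))   # RTX 3080, GDDR6X  -> 760 GB/s
print(bandwidth_gbs(256, 16))   # RX 6900 XT, GDDR6 -> 512 GB/s (plus 128 MB Infinity Cache)
print(bandwidth_gbs(192, 21))   # RTX 4070, GDDR6X  -> 504 GB/s (plus a much larger L2 cache)
```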

Either way, I can still concede the argument that the RTX 3080 was overtuned. But even if it had been based on the GA103 die (which is what Nvidia had allegedly originally intended, before realising that Samsung 8nm yields were worse than expected, and Samsung supposedly gave them a better deal on GA102), the RTX 3090 would still have been only about 15% faster than the 3080.

Plus, the RTX 3090 Ti didn't have double-layer VRAM, and still wasn't that much faster than the 3090, and had atrocious efficiency, while costing a ridiculous $2000.
Macro Device: It deserved all the freedom in the world for that. There were zero GPUs faster than it at the time, and in a free market the best product decides how much it costs.
I don't agree at all with the implication here that being the fastest GPU justifies charging arbitrarily high prices. The RTX 4090 had even less competition than the RTX 3090, and delivered a huge uplift over the RTX 4080 (which was itself significantly faster than the RTX 3090, and significantly more expensive than the 3080), despite the 4090 not being much more expensive than the 3090. The RTX 4090 actually justified its price compared to the 4080.

The RTX 3090 was just bad except for mining and AI, and it doesn't get anywhere near as much criticism as it deserves (I guess at least it was significantly cheaper than the Titan RTX? But 24GB VRAM wasn't as revolutionary as it was the generation before, and the Titan supported a few Quadro/Pro driver features which the 3090 didn't). The 6900 XT was 90% as fast as the RTX 3090 and more efficient, for 2/3 the price, while the RX 7900 XTX was only about 80% as fast as the 4090 and didn't have an efficiency advantage.

...
I agree with your point about the Titan RTX's power limit, but that's also applicable to most other Titan GPUs, most of which had the same TDP as their 80 Ti counterparts. A stock Titan X Pascal was often slower than a 1080 Ti, but had more cores and VRAM and could be significantly faster if overclocked with a good enough cooler.
Macro Device: The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be but it's still not totally stagnant.
I definitely agree with that.
I would love it if the RX 9070 XT is able to match the RTX 4080 Super, as some (possibly optimistic?) leaks have indicated. If it's <$600 and only 5-10% slower, maybe AMD actually will have something to show the 5080 its place?
It would still be a much more definitive showing if AMD had a 9080 XT which matches the 5080 at a lower price, and a 9090 XT which beats it while still being substantially cheaper than the 5090.