Friday, January 24th 2025

New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
A set of newly leaked benchmarks offers an early look at the performance of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the card was spotted on Geekbench in OpenCL and Vulkan compute tests, and the numbers suggest it may struggle to rank among the best graphics cards. The tested card was an MSI-branded RTX 5080; the benchmark system, identified as MSI model MS-7E62, ran AMD's Ryzen 7 9800X3D processor, which many consider one of the best CPUs for gaming, on an MSI MPG 850 Edge TI Wi-Fi motherboard with 32 GB of DDR5-6000 memory.
The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared with the RTX 4080, its direct predecessor, that works out to roughly a 22% uplift in Vulkan and a more modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also found the GPU on the Blender Open Data platform with a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 may not outperform the outgoing flagship RTX 4090. Historically, NVIDIA's 80-class GPUs have beaten the 90-class cards of the previous generation, but these early numbers suggest that trend may not continue with the RTX 5080.

The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs), up from 9,728 cores in the RTX 4080. It carries 16 GB of GDDR7 memory on a 256-bit bus, and NVIDIA rates it at 1,801 AI TOPS from its Tensor Cores and 171 TeraFLOPS of ray tracing performance from its RT Cores.
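For context, the percentage figures above fall out of simple ratio arithmetic; a quick sketch in Python (the RTX 4080 baselines here are back-derived from the leaked scores and the quoted uplifts, not separately sourced figures):

```python
# Sanity check on the quoted generational deltas. The RTX 4080 baselines
# are implied by the leaked scores and the quoted percentages, not
# independently measured results.
rtx_5080 = {"Vulkan": 261_836, "OpenCL": 256_138, "Blender median": 9_063.77}
uplift   = {"Vulkan": 0.22,    "OpenCL": 0.067,   "Blender median": 0.094}

for test, score in rtx_5080.items():
    baseline = score / (1 + uplift[test])
    print(f"{test}: {score:,.0f} vs implied RTX 4080 {baseline:,.0f} "
          f"(+{uplift[test] * 100:.1f}%)")
```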
That said, these benchmark results have not been independently verified, so it is worth waiting for the review embargo to lift before drawing firm conclusions.
Sources:
DigitalTrends, TruthPhoenixV
215 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
[Image: NVIDIA Ada Lovelace architecture processing throughput, performance, and efficiency chart; images.nvidia.com/aem-dam/Solutions/geforce/ada/news/rtx-40-series-graphics-cards-announcements/nvidia-ada-lovelace-architecture-processing-throughput-performance-efficiency.jpg]
At least in workstation tasks and games that had competent implementations of SLI, the GTX 780 was not faster than the GTX 690, and the GTX 680 was not faster than the GTX 590. Based on this history, there would be little reason to expect the 5080 to beat the 4090, at least not consistently.
Meanwhile, the RTX 3090 is effectively a misnamed Titan (i.e. barely faster than the 80 Ti of the same generation, but with extra VRAM for workstation tasks). The GTX 1070 Ti was about 5% faster than the Titan X. The RTX 3070 Ti was about 5% faster than the Titan RTX. The 4070 Ti being about 5% faster than the 3090 puts the 3090 in the same group as these Titans, not in the same group as either the 4090 or older dual-die 90-class Geforce cards.
That said, the 5080 is still worse than it probably should have been. Like the 5080 versus the 4080, the 2080 was built on a refined version of the same manufacturing process as the 1080, yet it was a much larger die, added ray tracing and Tensor cores, and was about 30% faster on average, albeit also ~15% more expensive and generally regarded as bad value as a result (especially as the GTX 1080 Ti was basically the same speed and price, and had 11 GB VRAM).

The 5080 is the same price as the 4080 Super, about the same die size, and barely 10% faster, so its value uplift is similar to the underwhelming RTX 2080's. But it's not as bad as the (IMO unfair, for the reasons above) comparison against the RTX 4080's uplift over the 3090 makes it look.

It's built on a minor refinement of the same node, and Nvidia hasn't significantly changed their CUDA architecture for a decade. Nvidia's generational performance uplifts since Maxwell have mostly come from more advanced manufacturing processes, which allowed increases in core count and clock frequency, not from architectural improvements. While they have obviously made some changes to the architecture, the most significant have been adding Tensor and RT cores, adding cache, supporting new types of VRAM, and improving encoding, rather than improving the design of the CUDA cores, ROPs, and TMUs that are responsible for rasterisation performance.
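A rough illustration of that point: peak FP32 throughput scales as 2 FLOPs per core per clock (one FMA), so most of the 5080's on-paper gain over the 4080 is already explained by core count and frequency alone. A sketch, using NVIDIA's published boost clocks (treat the exact figures as approximate):

```python
# Peak FP32 throughput = 2 FLOPs/core/clock (FMA) * cores * boost clock.
# Boost clocks are NVIDIA's published figures; sustained clocks will vary.
def fp32_tflops(cores: int, boost_ghz: float) -> float:
    return 2 * cores * boost_ghz / 1_000

rtx_4080 = fp32_tflops(9_728, 2.51)   # ~48.8 TFLOPS
rtx_5080 = fp32_tflops(10_752, 2.62)  # ~56.3 TFLOPS
print(f"RTX 4080: {rtx_4080:.1f} TFLOPS, RTX 5080: {rtx_5080:.1f} TFLOPS "
      f"(+{(rtx_5080 / rtx_4080 - 1) * 100:.0f}% on paper)")
```

That ~15% theoretical gap, mostly from extra cores and a slightly higher clock on a near-identical node, lines up with the modest benchmark uplift.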
The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be but it's still not totally stagnant.
Educate me on why high framerate matters in games that don't worry about input lag in the first place.
But the vast majority will not understand it like that; they'll just think, haha, 'free' frames. The experience of playing something at 60 FPS with MFG that is in fact running at 15 is going to be a first for them, and then they'll learn.
Or it doesn't matter for a particular game, and the 60 "FPS" feels identical to me to the 30 actual FPS. Either floaty, laggy gameplay is the worst and I need more actual framerate, or it is perfectly unnoticeable and the framerate doesn't matter either; in both scenarios the benefits of FG seem to disappear.
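To put rough numbers on that intuition, here's a toy model (simplified: it ignores the extra queuing latency frame generation itself adds):

```python
# Toy model: input is sampled once per *rendered* frame, so generated
# frames smooth motion but leave input cadence tied to the base framerate.
def cadence(base_fps: float, gen_factor: int = 1) -> tuple[float, float]:
    displayed_fps = base_fps * gen_factor
    input_interval_ms = 1_000 / base_fps
    return displayed_fps, input_interval_ms

for base, factor in [(60, 1), (15, 4)]:
    displayed, interval = cadence(base, factor)
    print(f"{base} rendered FPS x{factor} FG -> {displayed:.0f} displayed "
          f"FPS, input sampled every {interval:.1f} ms")
```

Both cases display "60 FPS", but one responds to input every 16.7 ms and the other every 66.7 ms, which is exactly the floaty feel being described.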
I can see the benefits of DLSS in image quality, faster FPS, etc but frame generation just sounds like a solution in search of a problem.
I have patience ;)
Aside from that, differences in memory buses are more a result of RAM technology and cache design than a direct indication of a GPU's performance tier.
For example, the RTX 3060 Ti had a 256-bit bus, but competed against the RX 6700 XT which had a 192-bit bus and extra cache. The RTX 4070 was the same or higher tier of the next generation, and similar to the 6700 XT, had a 192-bit bus with extra cache. The 320-bit RTX 3080 was slower and had much less VRAM capacity than the 256-bit RX 6900 XT.
Bus width is part of the comparison, for sure, but I don't think it makes sense to base judgements of a GPU on bus width alone, without also accounting for the cache or the type and capacity of VRAM connected to it.
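To make that concrete: raw bandwidth is bus width times effective data rate divided by 8, and a large on-die cache changes how much of that bandwidth you actually need. A rough sketch using the cards above, with data rates and cache sizes from the reference specs (rounded):

```python
# Raw VRAM bandwidth = bus width (bits) / 8 * effective data rate (GT/s).
# Large on-die caches cut how often the GPU touches VRAM at all, which is
# why a narrower bus plus cache can keep up with a wider bus without one.
cards = [
    # (name, bus width in bits, data rate in GT/s, on-die cache in MB)
    ("RTX 3060 Ti", 256, 14, 4),    # small L2
    ("RX 6700 XT",  192, 16, 96),   # Infinity Cache
    ("RTX 4070",    192, 21, 36),   # enlarged L2
]
for name, bus, rate, cache in cards:
    bandwidth = bus / 8 * rate
    print(f"{name}: {bus}-bit @ {rate} GT/s = {bandwidth:.0f} GB/s, "
          f"{cache} MB cache")
```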
Either way, I can still concede the argument that the RTX 3080 was overtuned. But even if it had been based on the GA103 die (which is what Nvidia had allegedly originally intended, before realising that Samsung 8nm yields were worse than expected, and Samsung supposedly gave them a better deal on GA102), the RTX 3090 would still have been only about 15% faster than the 3080.
Plus, the RTX 3090 Ti didn't have double-layer VRAM, still wasn't that much faster than the 3090, had atrocious efficiency, and cost a ridiculous $2,000. I don't agree at all with the implication here that being the fastest GPU justifies charging arbitrarily high prices. The RTX 4090 had even less competition than the RTX 3090 and delivered a huge uplift over the RTX 4080 (which was itself significantly faster than the RTX 3090, and significantly more expensive than the 3080), despite the 4090 not being much more expensive than the 3090. The RTX 4090 actually justified its price compared to the 4080.
The RTX 3090 was just bad except for mining and AI, and it doesn't get anywhere near as much criticism as it deserves (I guess at least it was significantly cheaper than the Titan RTX? But 24 GB of VRAM wasn't as revolutionary as it had been the generation before, and the Titan supported a few Quadro/Pro driver features that the 3090 didn't). The 6900 XT was 90% as fast as the RTX 3090 and more efficient, for two-thirds the price, while the RX 7900 XTX was only about 80% as fast as the 4090 and didn't have an efficiency advantage.
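Putting those value claims into numbers, using the rough performance ratios above and launch MSRPs (USD, from memory, so treat as approximate):

```python
# Relative value check: performance ratios are the rough figures quoted
# above, normalised to the NVIDIA flagship of each generation = 1.0.
pairs = [
    # (card, relative perf, MSRP, flagship, flagship MSRP)
    ("RX 6900 XT",  0.90, 999, "RTX 3090", 1_499),
    ("RX 7900 XTX", 0.80, 999, "RTX 4090", 1_599),
]
for card, perf, price, flagship, flagship_price in pairs:
    value = (perf / price) / (1.0 / flagship_price)
    print(f"{card}: {value:.2f}x the perf-per-dollar of the {flagship}")
```

By that yardstick the 6900 XT was the stronger value play (~1.35x) than the 7900 XTX (~1.28x), even before counting the efficiency gap.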
...
I agree with your point about the Titan RTX's power limit, and it also applies to most other Titan GPUs, most of which had the same TDP as their 80 Ti counterparts. A stock Titan X Pascal was often slower than a 1080 Ti, but it had more cores and VRAM and could be significantly faster if overclocked with a good enough cooler.
I would love it if the RX 9070 XT is able to match the RTX 4080 Super, as some (possibly optimistic?) leaks have indicated. If it's <$600 and only 5-10% slower than the 5080, maybe AMD actually will have something to show the 5080 its place?
It would still be a much more definitive showing if AMD had a 9080 XT which matches the 5080 at a lower price, and a 9090 XT which beats it while still being substantially cheaper than the 5090.