Friday, January 24th 2025
New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
A set of newly leaked benchmarks offers an early look at the performance of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the card was spotted on Geekbench in OpenCL and Vulkan benchmark tests, and the numbers suggest it may fall short of flagship expectations. The test system, an MSI machine identified as model MS-7E62, paired the RTX 5080 with AMD's Ryzen 7 9800X3D, widely regarded as one of the best gaming CPUs, on an MSI MPG 850 Edge TI Wi-Fi motherboard with 32 GB of DDR5-6000 memory.
The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared to its predecessor, the RTX 4080, that works out to a 22% uplift in Vulkan and a more modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also found the GPU on the Blender Open Data platform with a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 may not outperform the current-generation flagship RTX 4090. Historically, NVIDIA's 80-class GPUs have beaten the previous generation's 90-class cards, but these early numbers suggest that trend might not continue with the RTX 5080.

The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs) versus the 9,728 cores of the RTX 4080. It carries 16 GB of GDDR7 memory on a 256-bit bus, and NVIDIA rates it at 1,801 TOPS of AI performance through its Tensor Cores and 171 TeraFLOPS of ray tracing performance through its RT Cores.
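As a sanity check on those deltas, here is a minimal sketch in Python. The RTX 5080 scores are the leaked figures, while the RTX 4080 baselines are back-computed from the stated 22% and 6.7% gains rather than independently sourced, so treat them as illustrative only:

```python
# Back-of-the-envelope check of the leaked Geekbench deltas.
# RTX 5080 scores come from the leak; the RTX 4080 baselines are
# implied by the stated gains, not independently sourced numbers.
rtx5080 = {"Vulkan": 261_836, "OpenCL": 256_138}
stated_gain = {"Vulkan": 0.22, "OpenCL": 0.067}

for api, score in rtx5080.items():
    baseline = score / (1 + stated_gain[api])  # implied RTX 4080 score
    print(f"{api}: RTX 5080 = {score:,}, implied RTX 4080 = {baseline:,.0f} "
          f"(+{(score / baseline - 1) * 100:.1f}%)")
```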
That said, these benchmark results have not been independently verified, so it is best to wait for the review embargo to lift before drawing conclusions.
Sources:
DigitalTrends, TruthPhoenixV
198 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
Maybe some people fell for it, but I didn't believe it when he said that at CES. :p
Tick - the RTX 2000 series was good
Tock - the RTX 3000 series was not so good
Tick - the RTX 4000 series is so much better
Tock - the RTX 5000 series... well, at least they are trying
I've been in the computer segment for a lot of years (since around 1995 at least) and I can say for sure that most of the time
the performance gain from each generation is more or less 25%; maybe a very few gens have improved a bit more than that, but nehh...
I still remember my first MSI FX5200 and my ASUS ATI 9600 XT; now, with more than 600 W for the GPU alone, well... things do not look very good hehehe
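(As a rough illustration of what a ~25%-per-generation cadence compounds to - the 25% figure is the poster's estimate above, not a measured value:)

```python
# Compounding an assumed ~25% gain per generation (illustrative only).
gain_per_gen = 1.25
for gens in range(1, 6):
    print(f"after {gens} generation(s): {gain_per_gen ** gens:.2f}x the baseline")
```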
Second mental exercise for you - what's high res? And if it's 4K, then why are there gains at 1440p? Double-digit gains, I might add. But you seem convinced this test shouldn't be done, so I dunno, curious to get your thoughts here. I wonder what displays TPU is using - 4K is probably 240 Hz, but what about 1440p/1080p? What's the max nowadays in the consumer space - 500+ Hz? I wonder how big the delta would be at lower res / high refresh between the cards, especially in those pro titles.
30-series was on inferior tech (Samsung 8nm vs TSMC 7nm), but it was in fact a generational gain in performance over both Pascal and Turing. The only things that really sucked with the 30-series were the availability, due to a multitude of factors (ethereum mining, pandemic, scalping, etc.), and the power draw / transient spikes.
40-series: pricing sucked, but it was certainly a huge improvement in efficiency over the 30-series, and the performance-per-watt improvement was crazy.
You seem about as sharp as a wooden spoon, so let me spell it out for you: at low res you are bottlenecked by CPU performance, and at high res you are bottlenecked by GPU performance. Therefore there is zero point in benchmarking CPUs at high res or GPUs at low res. The performance increase you see at high res with GPUs will also be there at lower res once you get a faster CPU, and vice versa for CPUs.
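(A minimal sketch of that argument, treating frame time as simply the slower of the CPU's and GPU's per-frame cost - every millisecond value below is invented purely for illustration:)

```python
# Toy frame-pacing model: a frame can't finish faster than the slower
# of the two stages. All per-frame costs are made up for illustration.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Effective frame rate when the slower of CPU/GPU gates each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_MS = 4.0  # hypothetical per-frame CPU cost, roughly resolution-independent

# hypothetical per-frame GPU cost: (resolution, old GPU, GPU twice as fast)
for res, gpu_old, gpu_new in [("1080p", 3.0, 1.5), ("4K", 12.0, 6.0)]:
    print(f"{res}: {fps(CPU_MS, gpu_old):.0f} fps -> {fps(CPU_MS, gpu_new):.0f} fps")
```

With the hypothetical 4 ms CPU cost, the twice-as-fast GPU shows zero gain at 1080p (both runs are CPU-bound at 250 FPS) but the full 2x at 4K (83 -> 167 FPS).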
www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/12.html
Please don't make me explain to you what a CPU bottleneck is, don't do it bro.
TPU notes in the conclusion: "you could run the card at 1440p [...] the only reason why you would want to do that is if you really want the lowest latency with the highest FPS"
OK this confirms you really have zero clue whatsoever. Do explain what a CPU bottleneck is, this might be fun...
A few games aren't CPU-bottlenecked, which means the "test suite" average gets a small increase at low res, but the fact is that the vast majority of games are CPU-bottlenecked at low res, making it utterly pointless to evaluate high-end GPU performance from low-res benchmarks.
www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/10.html
But sure, continue to argue that the 5080 will be faster than the 4090... shows your amazing level of knowledge...
I'm not here to insult you - but perhaps you have a few things to learn.
Let's look a bit longer at CS2 and do some math. 3090 vs 5090 this time?
1080p: 726 / 403 = 1.80, i.e. the 5090 delivers 180% of the 3090's performance
1440p: 578 / 289 = 2.00, i.e. 200%
4K: 347 / 152 = 2.28, i.e. 228%
Neither of these GPUs struggles in this game in terms of resources; they both produce immense FPS.
At the lower resolutions though, EVEN at 1440p and 4K, there is a CPU impact on the 5090, because its lead at 4K is leaps and bounds bigger than at 1080p (48 percentage points: 228% vs 180%!). I bet at 8K you would see an even bigger gap, moving even more load onto the GPU and removing the CPU further as a limiting factor.
You see, a CPU bottleneck isn't just 'CPU too slow'... the CPU loses a fraction of a second on every frame, and when frames are produced at such high frequencies, every millisecond matters and shows up as lost GPU performance. In heavier titles, with lower FPS, this effect is less pronounced because the average time to produce a frame is higher, leaving a lot more leeway for the CPU to prepare data for each frame.
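(Putting the quoted CS2 numbers in per-frame terms - using only the FPS figures cited above, the conversion being just 1000/FPS:)

```python
# Convert the quoted CS2 averages into per-frame times to see how
# the fixed per-frame CPU cost compresses the gap at lower resolutions.
quoted_fps = {  # (RTX 3090, RTX 5090), as cited above
    "1080p": (403, 726),
    "1440p": (289, 578),
    "4K":    (152, 347),
}
for res, (fps_3090, fps_5090) in quoted_fps.items():
    ratio = fps_5090 / fps_3090
    print(f"{res}: {ratio:.2f}x scaling, "
          f"{1000 / fps_3090:.2f} ms -> {1000 / fps_5090:.2f} ms per frame")
```

The relative gap widens from 1.80x to 2.28x as resolution rises and the CPU's fixed per-frame cost stops being the limiter.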
I don't know if I agree with your statement - not because it's wrong, it's not - but because it applies to every single generation. You will always have this scenario of the top-dog GPU potentially being bottlenecked by future CPU releases more so than current ones, so I'm not sure it makes sense in what we are trying to debate. The fact of the matter is there are games that can be played at lower-than-4K resolutions that can benefit from a 5090 in the right conditions, and that's that - that's my point.
We can argue value, low gains, etc. Now explain that to snowflake Dragam; hopefully he gets it.
At the same time I call things as I see them. What should I have said...
And yeah, sure you can use a 5090 to play at 1440p. Two or three years down the line that GPU will probably struggle at that res, too :) I don't really subscribe to the idea that there is a '4K card'. There's just performance that ages. OTOH, this does not make it true that a 5080 will match a 4090 just because the numbers get close at some lower resolution, which I think was the point others were trying to make ;)
I agree with the gen-over-gen improvements though; it used to be between 30 and 50% each generation, which is exactly where the 5090 is now (at least at 4K, which is what this GPU is supposed to be - a 4K GPU). Sure, we've seen extremely impressive generations, like the 8800 GTX being 2x the 7900 GTX, and even the 4090 being 70-80% faster than the 3090 and up to 2x in RT/PT, but those are exceptions and should not be considered normal, especially with the death of Moore's Law!