Friday, January 24th 2025

New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

A set of newly leaked benchmarks has revealed the performance capabilities of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the card was spotted on Geekbench in OpenCL and Vulkan benchmark tests, and based on those results it may not rank among the best graphics cards. The card tested was an MSI-branded RTX 5080 in a system identified as model MS-7E62. The setup paired it with AMD's Ryzen 7 9800X3D processor, widely considered one of the best gaming CPUs, an MSI MPG 850 Edge TI Wi-Fi motherboard, and 32 GB of DDR5-6000 memory.

The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared to the RTX 4080, its direct predecessor, that works out to a 22% boost in Vulkan performance but only a modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also spotted the GPU on the Blender Open Data platform with a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 might not outperform the current top-tier RTX 4090. In the past, NVIDIA's 80-class GPUs have beaten the previous generation's 90-class cards (the RTX 4080, for instance, outpaced the RTX 3090), but these early numbers suggest that trend might not continue with the RTX 5080.
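For anyone who wants to sanity-check those uplift figures, here is a minimal Python sketch; the RTX 4080 baseline scores below are back-calculated from the percentages quoted above, not taken from the leak itself, so treat them as approximations.

```python
# Back-calculate the implied RTX 4080 baselines from the leaked RTX 5080 scores
# and the uplift percentages quoted above. Purely illustrative arithmetic.

def implied_baseline(new_score: float, uplift_pct: float) -> float:
    """Return the baseline score implied by a new score and a relative uplift."""
    return new_score / (1 + uplift_pct / 100)

rtx5080_scores = {"Vulkan": 261_836, "OpenCL": 256_138}
uplift_vs_4080 = {"Vulkan": 22.0, "OpenCL": 6.7}   # percent, as quoted above

for api, score in rtx5080_scores.items():
    base = implied_baseline(score, uplift_vs_4080[api])
    print(f"{api}: RTX 5080 scored {score:,}; implied RTX 4080 baseline ~{base:,.0f}")
```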
The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs) versus the 9,728 cores in the RTX 4080. It has 16 GB of GDDR7 memory on a 256-bit bus. NVIDIA says it can deliver 1,801 TOPS in AI performance through Tensor Cores and 171 TeraFLOPS of ray tracing performance using its RT Cores.
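As a quick consistency check on those core counts, here is a back-of-the-envelope sketch; the assumption that the RTX 4080 uses the same cores-per-SM layout as the RTX 5080 is mine, not something stated in the leak.

```python
# Cores-per-SM check using only the figures quoted above.
rtx5080_cores, rtx5080_sms = 10_752, 84
cores_per_sm = rtx5080_cores // rtx5080_sms
print(f"CUDA cores per SM: {cores_per_sm}")                            # 128

# Assuming the RTX 4080 shares the 128-cores-per-SM layout (an assumption),
# its 9,728 cores imply:
rtx4080_cores = 9_728
print(f"Implied RTX 4080 SM count: {rtx4080_cores // cores_per_sm}")   # 76

# Generation-over-generation growth in shader hardware:
print(f"CUDA core increase: {rtx5080_cores / rtx4080_cores - 1:.1%}")  # ~10.5%
```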

That said, it's important to note that these benchmark results have not been independently verified, so it's best to wait for the review embargo to lift before drawing conclusions.
Sources: DigitalTrends, TruthPhoenixV

198 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

#176
x4it3n
Dragam1337The driver overhead IS defacto a cpu bottleneck...
The driver overhead causes the "CPU bottleneck," but it's not that the CPU itself isn't powerful or fast enough and is therefore the bottleneck. It's Nvidia's fault if their architectures & drivers don't scale well at low resolutions. AMD doesn't have that issue.
watzupkenI would think that the RTX 5080 will struggle to produce any meaningful performance uplift in games when compared to the RTX 4080 Super. Consider the fact that the RTX 5090 requires 30% bump in hardware specs, i.e. CUDA cores and memory bandwidth, including a 30ish % power increase,to obtain close to 30% average rasterization performance gain, the difference between the RTX 5080 and 4080 is actually a lot smaller. So without multi frame generation, this is more like a RTX 4080 Ti.
Yup, definitely. Nvidia cheaped out a lot this generation. At those prices and that level of performance, we should have had a TSMC 3 nm node at least!
Posted on Reply
#177
toooooot
vekspecSo with my Suprim X 4080 that’ll only be only 9% better taking into account higher AIB than stock 4080? Wow, RTX 50 not looking good, hope this is just an outlier and not a trend :p
This is our life now
Posted on Reply
#178
turbogear
Obviously Mr. Leather Jacket wanted to fool us by saying that the 5070 is as fast as my 4090. :wtf:
Maybe some people fell for it, but I didn't believe it when he said that at CES. :p
Posted on Reply
#179
Sah7d
It's a kind of Tik-Tok:

Tik - the RTX 2000 series was good
Tok - the RTX 3000 series was not so good
Tik - the RTX 4000 series is so much better
Tok - the RTX 5000 series, well, at least they are trying

I've been in the computer segment a lot of years (since around 1995 at least) and I can say for sure that most of the time the performance gain from each generation is more or less 25%; maybe a very few gens have improved a bit more than that, but nehh...

I still remember my first MSI FX5200 and my ASUS ATI 9600 XT; now, with more than 600 W just for the GPU, well... things do not look very good hehehe
Posted on Reply
#180
Dragam1337
x4it3nThe driver overhead causes the "CPU bottleneck" but it's not like the CPU itself is not powerful/fast enough, therefore is the bottleneck. It's Nvidia's fault if their architectures & drivers are not scaling well at low resolutions. AMD do not have that issue.


Yup definitely. Nvidia cheaped out a lot this generation. At those prices & performance we should have had a TSMC 3nm node at least!
AMD doesn't have that issue because they have dedicated hardware to handle much of it, whereas Nvidia does it all in software. Unless Nvidia changes that (they won't), they will always have more driver overhead than AMD.
Hxxbro i think you are confused or you're trolling me one or the other. Or let me know if you are just looking at outliers I'm looking again at "relative performance". Makes sense so far? Also do you understand what the word "parity" means? Heres the link im using but if you are looking at some other review then let me know.

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/34.html

5090 is 10% faster at 1080p than a 4090 not EQUAL Okay moving on
5090 is 17% faster at 1440p than a 4090 not EQUAL. Okay moving on
5090 is 26% faster at 4k than a 4090 not EQUAL. Thats all the resolutions they tested. Hope you're following so far

If you saying that say 10% at 1080p is parity well 1) that's wrong because thats not what parity means 2) realistically its a meaningless comparison between different resolutions its just another data point. most gamers play at native resolutions high refresh. just because this card scales better at 4k doesn't mean anyone playing at 1440p or below will or should not buy it.

Now explain without using big boy words what exactly are you trying to convince me of?
Dude still hasn't understood why comparing below 4K is pointless...
Sah7dIs a kind of Tik-Tok

Tik - series RTX 2000 were good
Tok - series RTX 3000 were not so good
Tik - series RTX 4000 are so much better
Tok - series RTX 5000 well, at least they are trying

I´ve been on the computer segment a lot of years (around 1995 at least) and I can say from sure that most of the time
the performance from each generation is more or less 25%, maybe a very few gens have improve a bit more than that but nehh...

I still remember my first MSI FX5200 and my ASUS ATI 9600 XT, now with more than 600W only for the GPU well... things do not look very good hehehe
Lol what... the 2000 series was good, and the 3000 series not so good? Tell me you don't know what the hell you are talking about, without telling me you don't know what the hell you are talking about...
Posted on Reply
#181
Vayra86
Sah7dIs a kind of Tik-Tok

Tik - series RTX 2000 were good
Tok - series RTX 3000 were not so good
Tik - series RTX 4000 are so much better
Tok - series RTX 5000 well, at least they are trying

I´ve been on the computer segment a lot of years (around 1995 at least) and I can say from sure that most of the time
the performance from each generation is more or less 25%, maybe a very few gens have improve a bit more than that but nehh...

I still remember my first MSI FX5200 and my ASUS ATI 9600 XT, now with more than 600W only for the GPU well... things do not look very good hehehe
The 2000 series was a pile of nothing, with RT you couldn't properly use and an overpriced 2080 & 2080 Ti... What's good about that? Perhaps the only saving grace was GTX 1080 performance at $350 with the 2060. Minus 2 GB, so a planned-obsolescence card nonetheless.
Posted on Reply
#182
Hxx
Dragam1337Dude still haven't understood why comparing below 4k is pointless...
Ah duh, I forgot no one owns 1440p/1080p high-refresh displays nowadays, my bad. In fact, if I were you I'd just email TPU to exclude those benches from their review; I mean, you seem convinced, so why should they waste their time?
Posted on Reply
#183
Dragam1337
HxxAh duh I forgot no one owns 1440p high refresh displays nowadays. If I were you I’d just email TPU to exclude those benches from their review I mean you seem convinced why should they waste their time ?
GPUs should be tested at high res, just like CPUs should be tested at low res - can your immense intellect come to the conclusion as to why that might be?
Posted on Reply
#184
Hxx
Dragam1337Gpus should be tested at high res, just like cpus should be tested at low res - can your immense intellect come to the conclusion why that might be ?
Well, I appreciate the compliment, thanks!! What you think "should" be done doesn't make it a reality, pretty cool concept right?

Second mental exercise for you - what's high res? And if it's 4K, then why are there gains at 1440p? Double-digit gains, I might add, but you seem convinced this test shouldn't be done, so I dunno, curious to get your thoughts here.
Vayra86Keep in mind that even at 4K, TPUs bench suite holds some older games and they just run into walls that aren't GPU walls. That's why those numbers are to be taken with a grain of salt, and its also part of the reason why you see higher gaps in tests elsewhere - smaller games suite, of more recent titles. I just look at shaders right now, and there's no way I'm seeing the 5080 bridge a near 6k shader gap with clocks.

On TPU's testing, we will see the 5090 extend its lead as the bench suite gets newer titles over time.
I wonder what displays TPU is using; 4K is probably 240 Hz, but what about 1440p/1080p? What's the max nowadays in the consumer space - 500+ Hz? I wonder how big the delta would be at lower res, high refresh, between the cards, especially in those pro titles.
Posted on Reply
#185
Sir Beregond
Sah7dIs a kind of Tik-Tok

Tik - series RTX 2000 were good
Tok - series RTX 3000 were not so good
Tik - series RTX 4000 are so much better
Tok - series RTX 5000 well, at least they are trying

I´ve been on the computer segment a lot of years (around 1995 at least) and I can say from sure that most of the time
the performance from each generation is more or less 25%, maybe a very few gens have improve a bit more than that but nehh...

I still remember my first MSI FX5200 and my ASUS ATI 9600 XT, now with more than 600W only for the GPU well... things do not look very good hehehe
Lol... RTX 20-series was good? I don't think so. It had marginal uplift vs Pascal, giant chips due to barely any gain in transistor density on what was only a half-node step from 16 nm, and it cost more with the addition of Tensor and RT cores. If anything, the RTX 50-series looks like a repeat of the 20-series.

The 30-series was on inferior tech (Samsung 8 nm vs TSMC 7 nm), but it was in fact a generational gain in performance over both Pascal and Turing. The only things that really sucked with the 30-series were the availability, due to a multitude of factors (Ethereum mining, pandemic, scalping, etc.), and the power draw / transient spikes.

The 40-series' pricing sucked, but it was certainly a huge improvement in efficiency over the 30-series, and the performance-per-watt gain was crazy.
Posted on Reply
#186
Dragam1337
Sir BeregondLol...RTX 20-series was good? I don't think so. It had marginal uplift vs Pascal, giant chips due to no gain in transistor density going to a half node of 16nm, and cost more with the addition of Tensor and RT cores. If anything, RTX 50-series looks like a repeat of 20-series.

30-series was on inferior tech (Samsung 8nm vs TSMC 7nm), but it was in fact a generational gain in performance over both Pascal and Turing. The only things that really sucked with the 30-series was the availability due to a multitude of factors (ethereum mining, pandemic, scalping, etc.), and the power draw / transient spikes.

40-series, pricing sucked, but it was certainly a huge improvement in efficiency over 30-series and the performance per watt improvement was crazy.
Indeed, the 50 series looks exactly like the 20 series, which was also a SKIP gen.
HxxWell I appreciate the compliment Thanks !! What you think “should” be done doesn’t make it a reality , pretty cool concept right ?

Second mental exercise for you - what’s high res ? And if it’s 4k then why are there gains at 1440p? Double digit gains I might add but you seem convinced this test shouldn’t be done so I dunno curious to get your thoughts here


I wonder what displays TPU is using 4k is probably 240hz but what about 1440p/1080p ? What’s the max nowadays in the consumer space - 500+ hz? I wonder how big the delta would be at lower res high refresh between the cards especially on those pro titles
Lol /SWOOOSH

You seem about as sharp as a wooden spoon, so let me spell it out for you: at low res you are bottlenecked by CPU performance, and at high res you are bottlenecked by GPU performance. Therefore there is zero point in benchmarking CPUs at high res and GPUs at low res. The performance increase you see at high res with GPUs will also be there at lower res once you get a faster CPU. And vice versa with CPUs.
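A toy frame-time model makes that point concrete; all numbers below are hypothetical, chosen only to illustrate the max(CPU, GPU) relationship, not measured data.

```python
# Toy model: each frame needs CPU prep time (roughly resolution-independent)
# and GPU render time (grows with pixel count); delivered FPS is capped by
# whichever is slower. All timings are made-up illustrative values.

CPU_MS = 3.0  # hypothetical CPU time per frame, identical at every resolution

def fps(gpu_ms: float, cpu_ms: float = CPU_MS) -> float:
    return 1000 / max(gpu_ms, cpu_ms)

gpu_frame_ms = {                 # hypothetical GPU frame times
    "1080p": {"slower GPU": 2.5, "faster GPU": 1.5},
    "4K":    {"slower GPU": 10.0, "faster GPU": 6.0},
}

for res, cards in gpu_frame_ms.items():
    slow, fast = fps(cards["slower GPU"]), fps(cards["faster GPU"])
    print(f"{res}: {slow:.0f} fps vs {fast:.0f} fps ({fast / slow - 1:+.0%} gap)")
# At 1080p both cards land on the same CPU-limited FPS; at 4K the faster GPU pulls ahead.
```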
Posted on Reply
#187
Hxx
Dragam1337at low res you are bottlenecked by cpu performance
Nope lol at least not across the test suite used by TPU … for example

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/12.html

Please don't make me explain to you what a CPU bottleneck is, don't do it bro.

TPU notes in the conclusion: "you could run the card at 1440p [...] the only reason why you would want to do that is if you really want the lowest latency with the highest FPS"
Posted on Reply
#188
Vayra86
HxxI wonder what displays TPU is using 4k is probably 240hz but what about 1440p/1080p ? What’s the max nowadays in the consumer space - 500+ hz? I wonder how big the delta would be at lower res high refresh between the cards especially on those pro titles
It's completely irrelevant? Benchmarks run at uncapped FPS... The delta is right there, in each review. Are you for real? Do you really think the monitor hard-caps the framerate?
HxxNope lol at least not across the test suite used by TPU … for example

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/12.html

Please don’t make me explain to you what cpu bottleneck is don’t do it bro.

TPU note in the conclusion :” you could run the card at 1440p[…] the only reason why you would want to do that is if you really want the lowest latency with the highest FPS”
Wha....??
OK this confirms you really have zero clue whatsoever. Do explain what a CPU bottleneck is, this might be fun...
Posted on Reply
#189
Dragam1337
HxxNope lol at least not across the test suite used by TPU … for example

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/12.html

Man it would help if you understood sarcasm or that writing caps doesn’t make it any better lol or if you at least read the review wouldn’t it . Too bad you didn’t you’re just wasting our time

TPU note :” you could run the card at 1440p[…] the only reason why you would want to do that is if you really want the lowest latency with the highest FPS”
Funny, cause it seems that you didn't actually read it yourself - at the very least, you haven't understood anything.

A few games aren't CPU bottlenecked, which means the "test suite" average gets a small increase at low res, but the fact is that the vast majority of games are CPU bottlenecked at low res, making it utterly pointless to evaluate high-end GPU performance from low-res benchmarks.

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/10.html

But sure, continue to argue that the 5080 will be faster than the 4090... shows your amazing level of knowledge...
Posted on Reply
#190
Hxx
Vayra86Wha....??
OK this confirms you really have zero clue whatsoever. Do explain what a CPU bottleneck is, this might be fun.
Sure, I'll entertain. Let's look at Counter-Strike. 1440p benches show 578 fps; at 4K, benches show 347 fps. If the 9800X3D used here bottlenecked this card, I would expect to see the same fps, or very close to 347 fps, at 1440p. How did I do?
Posted on Reply
#191
Dragam1337
HxxSure I’ll entertain . Let’s look at counter strike . 1440p benches show 578fps . At 4k benches show 347 fps .if the 9800x3d used here bottlenecked this card I would expect to see the same fps or very close to 347fps at 1440p. How did I do ?
Do you have someone you can call to help you tie your shoelaces?
Posted on Reply
#192
Hxx
Dragam1337Do you have someone you can call to help you tie your shoelaces ?
All me, bro. Better than you wearing Crocs with socks lol
Dragam1337Funny, cause it seems that you didn't actually read it yourself - at the very least, you haven't understood anything.

A few games aren't cpu bottlenecked, which means the "test suite" gets a small increase at low res, but fact is that the vast majority of games are cpu bottlenecked at low res, making it utterly pointless to evaluate highend gpu performance from low res benchmarks.

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/10.html

But sure, continue to argue that the 5080 will be faster than 4090... shows your amazing level of
lol bro, move on, it's over; yes, bottlenecks, GPUs and CPUs are discussed here, you're good
Posted on Reply
#193
Vayra86
HxxSure I’ll entertain . Let’s look at counter strike . 1440p benches show 578fps . At 4k benches show 347 fps .if the 9800x3d used here bottlenecked this card I would expect to see the same fps or very close to 347fps at 1440p. How did I do ?
The 9800X3D could still bottleneck this GPU just fine at both resolutions even if the FPS is different. The only issue is, you can barely test it, because there's hardly a faster CPU out there. But the fact that these GPUs (4090, 5090) end up closer together at lower resolutions indicates they are bottlenecked by something in the pipeline, and you're not seeing the full grunt the faster GPU has on offer.

I'm not here to insult you - but perhaps you have a few things to learn.

Let's look a bit longer at CS2 and do some math. 3090 vs 5090 this time?

1080p: 726 / 403 = 1.80, i.e. the 5090 delivers 180% of the 3090's performance
1440p: 578 / 289 = 2.00, i.e. 200%
4K: 347 / 152 = 2.28, i.e. 228%

Neither of these GPUs struggles with this game in terms of resources; both produce immense FPS.
At the lower resolutions though, even at 1440p and 4K, there is a CPU impact on the 5090, because its lead is leaps and bounds bigger (48 percentage points!) at 4K than at 1080p. I bet at 8K you would see an even bigger gap, moving even more load onto the GPU and removing the CPU further as a limiting factor.

You see, a CPU bottleneck isn't just 'CPU too slow'... the CPU loses a fraction of a second on every frame, and when frames are produced at such high frequencies, every millisecond matters and shows up as lost GPU performance. In heavier titles, with lower FPS, this effect is less pronounced, because now you've got a generally higher average time to produce a frame; a lot more leeway for the CPU to prepare data for said frame.
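The same figures, expressed as frame times, show where those milliseconds go (a minimal sketch; the fps values are the ones quoted above).

```python
# CS2 figures quoted above (RTX 5090 vs RTX 3090), converted to frame times.
cs2_fps = {
    "1080p": (726, 403),
    "1440p": (578, 289),
    "4K":    (347, 152),
}

for res, (fps_5090, fps_3090) in cs2_fps.items():
    ratio = fps_5090 / fps_3090
    ms_5090, ms_3090 = 1000 / fps_5090, 1000 / fps_3090
    print(f"{res}: 5090 lead {ratio:.2f}x ({ratio - 1:+.0%}); "
          f"frame times {ms_5090:.2f} ms vs {ms_3090:.2f} ms")
# The lead grows from 1.80x at 1080p to 2.28x at 4K: the shorter the frame time,
# the larger the share eaten by the CPU's per-frame work, holding the faster GPU back.
```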

Posted on Reply
#194
Dragam1337
HxxAll me bro . Better than you using crocks with socks lol



lol bro move on it’s over yes bottlenecks gpus and cpus are discussed here you’re good
I suppose I couldn't expect a more intelligent answer from someone claiming that the 5080 is faster than the 4090.
Vayra86The 9800X3D could still bottleneck this GPU fine on both resolutions even if the FPS is different. The only issue is, you can barely test it because there's hardly a faster CPU out there. But, the fact that these GPUs (4090, 5090) end up closer together on lower resolutions, indicates they are bottlenecked by something in the pipeline and you're not seeing the full grunt the GPU has on offer.

I'm not here to insult you - but perhaps you have a few things to learn.
"a few things to learn." that's putting it mildly.
Posted on Reply
#195
Hxx
Vayra86The 9800X3D could still bottleneck this GPU fine on both resolutions even if the FPS is different. The only issue is, you can barely test it because there's hardly a faster CPU out there. But, the fact that these GPUs (4090, 5090) end up closer together on lower resolutions, indicates they are bottlenecked by something in the pipeline and you're not seeing the full grunt the faster GPU has on offer.

I'm not here to insult you - but perhaps you have a few things to learn.
But you are insulting me; you said I have no clue. That's an insult, FYI. But whatever, unlike the snowflake Dragam above me, I don't get up in arms about it lmao.

I don't know if I agree with your statement, not because it's wrong (it's not) but because it applies to every single generation. You will always have this scenario of the top-dog GPU potentially being bottlenecked by future CPU releases more so than current releases, so I'm not sure if this makes sense in what we are trying to debate. The fact of the matter is there are games that can be played at lower-than-4K resolutions that can benefit from a 5090 in the right conditions, and that's that, that's my point.
We can argue value, low gains, etc. Now explain that to snowflake Dragam, hopefully he gets it.
Posted on Reply
#196
Vayra86
HxxBut you are insulting me you said I have no clue . That’s an insult FYI. But whatever unlike the snowflake Dragram above me I don’t get up and arms about it lmao.
I don’t know if I agree with your statement not because it’s wrong but because it applies to every single generation . You will always have this scenario of top dog gpu potentially being bottlenecks by future cpu releases so I’m not sure if this makes sense in what we are trying to debate . The fact of the matter is there are games that can be played at lower than 4k resolutions that can benefit from a 5090 in the right conditions and that’s that that’s my points . Now explain that to snowflake Dagram hopefully he gets it
Look at the above example on CS2. Honestly trying to bring some insight to this otherwise pointless back and forth on 'who's right'. That's not my game, I like to educate.

At the same time I call things as I see them. What should I have said...

And yeah, sure you can use a 5090 to play at 1440p. Two or three years down the line that GPU will probably struggle at that res, too :) I don't really subscribe to the idea that there is a '4K card'. There's just performance that ages. OTOH, this does not make it true that a 5080 will match a 4090 just because the numbers get close at some lower resolution, which I think was the point others were trying to make ;)
Posted on Reply
#197
x4it3n
Sah7dTik - series RTX 2000 were good
Tok - series RTX 3000 were not so good
Tik - series RTX 4000 are so much better
Tok - series RTX 5000 well, at least they are trying

I´ve been on the computer segment a lot of years (around 1995 at least) and I can say from sure that most of the time
the performance from each generation is more or less 25%, maybe a very few gens have improve a bit more than that but nehh...
The RTX 20s were not that good compared to the GTX 10s back then... they just had DLSS, which is a pretty cool feature today, but thankfully the GTX 10s can use AMD's FSR to help them too. RT cores were cool, but the RT performance was pretty terrible... Also, the real problem of the RTX 20s was their pricing: the 1080 Ti was $700 whereas the 2080 Ti was $1,000 (and even $1,200 later on), when the 2080 Ti was only 30-40% faster for about 40% more money, and DLSS was really bad too.

I agree with gen-over-gen improvements though; it used to be between 30 and 50% each generation, which is exactly where the 5090 is now (at least at 4K, which is what this GPU is supposed to be, a 4K GPU). Sure, we've seen extremely impressive generations, like the 8800 GTX being 2x the 7900 GTX, and even the 4090 being 70-80% faster than the 3090 and up to 2x in RT/PT, but yeah, those are exceptions and should not be considered normal, especially with the death of Moore's Law!
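For what it's worth, turning those launch prices into perf-per-dollar makes the value argument concrete; a rough sketch, where the 35% uplift midpoint is an assumption taken from the 30-40% range quoted above.

```python
# Launch-price value comparison using the figures quoted above.
gtx1080ti_price = 700
rtx2080ti_price = 1000
uplift = 0.35                    # assumed midpoint of the quoted 30-40% gain

relative_perf = 1 + uplift
relative_price = rtx2080ti_price / gtx1080ti_price    # ~1.43x the money
value_change = relative_perf / relative_price - 1

print(f"2080 Ti vs 1080 Ti: {relative_perf:.2f}x perf for {relative_price:.2f}x "
      f"the price -> {value_change:+.1%} perf per dollar")
# About 5-6% worse perf per dollar at launch, before the later $1,200 pricing.
```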
Posted on Reply
#198
Dragam1337
x4it3nThe RTX 20s were not that good compared to the GTX 10s back then... they just had DLSS which is a pretty cool feature today, but thankfully GTX 10s can use AMD's FSR to help them too. RT cores were cool but the RT performance was pretty terrible... Also the real problem of the RTX 20s was their pricing, the 1080 Ti was $700 whereas the 2080 Ti was $1000 (and even $1200 later on) when the 2080 Ti was only 30-40% but for 40% more money and DLSS was really bad too.

I agree with gen over gen improvements though, it used to be between 30 and 50% each generation, which is exactly where the 5090 is now (at least at 4K which is what this GPU is supposed to be, at 4K GPU). Sure we've seen generations extremely impressive like 8800 GTX being 2x the 7900 GTX and even the 4090 being 70-80% faster than the 3090 and up to 2x in RT/PT but yeah, those are exceptions and should not be considered normal, mostly with the death of Moore's Law!
I agree with all your points, though 30% is on the very low end of the scale, and will (understandably) make a lot of people skip the gen :)
Posted on Reply