Friday, January 24th 2025

New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

A set of newly leaked benchmarks has revealed the performance of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the GPU was spotted on Geekbench in OpenCL and Vulkan benchmark tests, and based on this performance it might not rank among the best graphics cards. The tested card was an MSI-branded RTX 5080 in a system identified as model MS-7E62. The setup paired it with AMD's Ryzen 7 9800X3D processor, which many consider one of the best CPUs for gaming, an MSI MPG 850 Edge Ti Wi-Fi motherboard, and 32 GB of DDR5-6000 memory.

The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared to its predecessor, the RTX 4080, that is a 22% boost in Vulkan performance but only a modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also found that on the Blender Open Data platform the GPU achieved a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 might not outperform the current-generation flagship RTX 4090. NVIDIA's 80-class GPUs have historically beaten the previous generation's 90-class cards, but these early numbers suggest the trend may not continue with the RTX 5080.
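As a rough sanity check, the RTX 4080 baselines implied by those percentages can be back-computed from the 5080's scores. A minimal sketch; the baselines below are derived from the quoted uplifts, not read from an actual Geekbench listing:

```python
# Back-computing the RTX 4080 baselines implied by the quoted uplifts.
# These baselines are derived from the percentages in the text, not
# read from an actual Geekbench listing.
def implied_baseline(score: int, uplift_pct: float) -> float:
    """Baseline score that would yield the given percentage uplift."""
    return score / (1 + uplift_pct / 100)

rtx5080 = {"Vulkan": 261_836, "OpenCL": 256_138}
uplift_pct = {"Vulkan": 22.0, "OpenCL": 6.7}

for api, score in rtx5080.items():
    base = implied_baseline(score, uplift_pct[api])
    print(f"{api}: RTX 5080 = {score:,}, implied RTX 4080 ≈ {base:,.0f}")
# Vulkan: implied RTX 4080 ≈ 214,620
# OpenCL: implied RTX 4080 ≈ 240,054
```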
The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs), up from 9,728 cores in the RTX 4080. It carries 16 GB of GDDR7 memory on a 256-bit bus. NVIDIA claims 1,801 TOPS of AI performance from its Tensor Cores and 171 TeraFLOPS of ray tracing performance from its RT Cores.
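A quick sketch of what those specs work out to per SM and, under an assumed memory data rate, in bandwidth; the 30 Gbps GDDR7 figure is an assumption, widely reported but not stated in this article:

```python
# What the quoted specs work out to. The per-pin data rate is an
# assumption (30 Gbps GDDR7 is widely reported for the RTX 5080 but
# is not stated in the article).
cuda_cores = 10_752
sms = 84
print(f"CUDA cores per SM: {cuda_cores / sms:.0f}")   # 128, same as Ada

bus_width_bits = 256
data_rate_gbps = 30                                   # assumed
bandwidth_gbs = data_rate_gbps * bus_width_bits / 8   # Gb/s -> GB/s
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 960 GB/s
```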

That said, it is important to note that these benchmark results have not been independently verified, so we should wait for the review embargo to lift before drawing conclusions.
Sources: DigitalTrends, TruthPhoenixV

197 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

#151
3valatzy
Reviewers have to start testing these cards at resolutions higher than 2160p. 5K and 6K are needed, but also two 4K screens side by side - 7680 x 2160.
Posted on Reply
#152
Krit
Bomby569you replied to a post about the *080 in a topic about the 5080, with a general comment about all nvidia cards, to my comment on the 80-class card, and then you pretend like I mentioned all nvidia cards, ask a silly question, and follow with whatever you want to talk about despite replying to something else. It's a true work of art of brain malfunction.
Just buy your RTX 5080 and stay quiet, as most "smart" people do. The RTX 4080 12GB was a joke, but the RTX 4080 16GB for $1200 was still extremely bad value (the worst **80-series graphics card ever released; even the RTX 2080 was better price/performance-wise). Nvidia is straight up laughing in consumers' faces with improvements of ~15-20%, yet you keep asking me strange questions.
Posted on Reply
#153
ilyon
Can't be: JHH assured us the RTX 5070 obliterates the RTX 4090.

YOU are FAKE NEWS.
Posted on Reply
#154
Dragam1337
HxxOh duh, cause no one owns a 9800X3D and a high-refresh 1440p/1080p display. Who is in denial again?
Talk about not understanding what is being said.
Posted on Reply
#155
Neo_Morpheus
For the ones blaming AMD for not competing, hence the higher prices, I saw this from someone on X:

Posted on Reply
#156
Bomby569
AMD was never a solution, on price or performance.
Only crazy people who buy a new card every year care about year-on-year performance; it seems like a stupid point to make. It's a nice upgrade with bad pricing if you're stuck on a 1000, 2000, or 3000 series card.
Posted on Reply
#157
Prima.Vera
This is GTX 9800 vs GTX 8800 all over again.

Another 2 years wasted...
Posted on Reply
#158
Zach_01
HisDivineOrderSo the 5080 really is just a 4080 Super Duper with Updated Framegen and no price increase.

Nvidia swears they can't add more raster because it's too hard, but not adding any raster improvement per watt is pretty silly, because it lays bare the whole con. They're updating the AI hardware, not the graphics engine, and they're desperate to convince everyone that they had to do it. They couldn't be expected to improve raster meaningfully. It was just impossible.

They could. They just didn't, and they didn't because they're an AI company now and not a graphics company.
It is indeed hard to add raster improvements, but not impossible. You need a better node in the first place (5 nm >> 4 nm), an improved architecture for the "primary" compute units, more compute units, and higher clocks.

The thing is that most of the crowd is forgetting or neglecting the fact that nVidia uses this very same CUDA/Tensor core architecture for its professional line of GPUs and compute systems for AI and machine learning applications. Those don't need (or care about) more raster gaming performance.
They desperately need higher AI/ML capabilities.

It's a unified architecture, and they found a way to make its AI/ML capabilities useful for good old gaming through DLSS and frame generation.

Gamers still think that nVidia is all about gaming… but the clouds are thick and pink.
Soon AMD will (try to) follow with UDNA.

The big money is in industry, not in $1000~2000 gaming GPUs. You don't get to be a $3.4 trillion company (2nd on the planet, after Apple) by selling GPUs to gamers.
It's a joke to even think about it.
Posted on Reply
#159
3valatzy
Prima.VeraThis is GTX 9800 vs GTX 8800 all over again.

Another 2 years wasted...
This time it's more about the death of Moore's Law, which means progress will slow to a crawl.
I am sure that in the process of backporting from TSMC 3 nm to the old 5 nm process, the parts lost quite a few shaders and quite a bit of performance.
Zach_01Gamers still think that nVidia is all about gaming… but the clouds are thick and pink.
Soon AMD will (try to) follow with UDNA.

The big money is in industry, not in $1000~2000 gaming GPUs. You don't get to be a $3.4 trillion company (2nd on the planet, after Apple) by selling GPUs to gamers.
It's a joke to even think about it.
If AMD and Nvidia follow this (stupid) logic, it means the whole universe revolves around "the industry", and everything else - all other chips (CPUs, consoles, phones, refrigerators, washing machines, etc.) - must die.

:kookoo:
Posted on Reply
#160
JustBenching
Neo_MorpheusFor the ones blaming AMD for not competing, hence the higher prices, I saw this from someone on X:

You don't have any examples of people doing the exact opposite? Like yourself? Bud, please.
Posted on Reply
#161
Dragam1337
Prima.VeraThis is GTX 9800 vs GTX 8800 all over again.

Another 2 years wasted...
Or the 480 to 580, more recently. It just means that people on mid-tier GPUs will have to wait at least 5 years between upgrades to get a worthwhile one.

But I do feel that nvidia is trying to force people into buying more expensive GPUs, now that they can't get any meaningful performance uplift within the same tier on a new generation.
Posted on Reply
#162
N/A
Prima.VeraThis is GTX 9800 vs GTX 8800 all over again.

Another 2 years wasted...
The 8800 GTX was a November 2006 484 mm² chip. The 9800 GTX of March 2008 shrank it to 324 mm² at 1.64x the density, but what came next, in June, was the GTX 280: a big 576 mm² chip with a 512-bit bus.
This would imply that in 2027 we get a 5080-like chip shrunk to ~290 mm² with a 192-bit bus as the 6070 Ti.
But what about the 5090? That's here now; you don't have to wait 2 years. And there is not going to be a massive shrink.
We're probably looking at a 24,576-CUDA-core, 576 mm² 6090 that is already maxed out, so that's 4 years wasted until they get a more advanced node and the 1.6x shrink.
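For what it's worth, the 1.64x density figure roughly checks out. A quick sketch using commonly cited die sizes and transistor counts; the transistor counts are added here for the math, not taken from the comment above:

```python
# Rough check of the "1.64x density" figure for the G80 -> G92 shrink.
# Die sizes are from the comment above; the transistor counts (681M for
# G80, 754M for G92) are commonly cited figures added here for the math.
g80 = {"area_mm2": 484, "mtransistors": 681}   # 8800 GTX, 90 nm
g92 = {"area_mm2": 324, "mtransistors": 754}   # 9800 GTX, 65 nm

def density(chip: dict) -> float:
    """Millions of transistors per mm²."""
    return chip["mtransistors"] / chip["area_mm2"]

print(f"G80: {density(g80):.2f} M/mm², G92: {density(g92):.2f} M/mm²")
print(f"Density gain: {density(g92) / density(g80):.2f}x")  # ~1.65x
```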
Posted on Reply
#163
x4it3n
RJIn conversations with friends, I refer to the 5080 as 5075.
I would have said 4085, but yeah :p
Posted on Reply
#164
Vayra86
HxxOh duh, cause no one owns a 9800X3D and a high-refresh 1440p/1080p display. Who is in denial again?
What? It's not just CPU performance holding these GPUs back; it's an engine limitation more often than not. Mate... get real. You're trying to spin a weird guess your way; it ain't happening.

Now, take a long look at the near-performance parity of the 4090 versus the 5090 at 1080p, versus the 30-40% gap at 4K.

Again: time to take a breather... clearly you haven't got a clue what you're looking at.
Posted on Reply
#165
x4it3n
Vayra86It includes 1080p and 1440p, where these cards get heavily bottlenecked. Stop living in denial.
They do get bottlenecked, but it's not always a CPU bottleneck. Nvidia is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.
Posted on Reply
#166
Vayra86
x4it3nThey do get bottlenecked, but it's not always a CPU bottleneck. Nvidia is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.
Regardless, you're not getting or seeing the full power of a 4090 or 5090 at these resolutions, which is the initial point that seems hard to grasp for some ;)
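To make that point concrete, here is a toy model of the bottleneck argument; every number in it is invented purely for illustration:

```python
# Toy model of the bottleneck argument: observed FPS is capped by
# whichever of CPU and GPU runs out first. All numbers are made up
# purely for illustration.
def observed_fps(cpu_fps_cap: float, gpu_fps: float) -> float:
    """Frame rate is capped by the slower of the two stages."""
    return min(cpu_fps_cap, gpu_fps)

CPU_CAP = 240.0  # hypothetical CPU/engine limit, roughly resolution-independent

# hypothetical GPU-only throughput (fps) for each card at each resolution
gpu_only = {
    "1080p": {"4090": 450.0, "5090": 580.0},
    "4K":    {"4090": 110.0, "5090": 145.0},
}

for res, cards in gpu_only.items():
    a = observed_fps(CPU_CAP, cards["4090"])
    b = observed_fps(CPU_CAP, cards["5090"])
    print(f"{res}: 4090={a:.0f} fps, 5090={b:.0f} fps, gap={(b / a - 1):.0%}")
# 1080p: both hit the 240 fps cap, so the gap collapses to 0%
# 4K: the GPU is the limiter, so the full ~32% gap appears
```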
Posted on Reply
#167
Hxx
Vayra86What? It's not just CPU performance holding these GPUs back; it's an engine limitation more often than not. Mate... get real. You're trying to spin a weird guess your way; it ain't happening.

Now, take a long look at the near-performance parity of the 4090 versus the 5090 at 1080p, versus the 30-40% gap at 4K.

Again: time to take a breather... clearly you haven't got a clue what you're looking at.
bro I think you are confused or you're trolling me, one or the other. Or let me know if you are just looking at outliers; I'm looking at "relative performance" again. Makes sense so far? Also, do you understand what the word "parity" means? Here's the link I'm using, but if you are looking at some other review then let me know.

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/34.html

5090 is 10% faster at 1080p than a 4090, not EQUAL. Okay, moving on.
5090 is 17% faster at 1440p than a 4090, not EQUAL. Okay, moving on.
5090 is 26% faster at 4K than a 4090, not EQUAL. That's all the resolutions they tested. Hope you're following so far.

If you're saying that 10% at 1080p is parity, well: 1) that's wrong, because that's not what parity means; 2) realistically it's a meaningless comparison between different resolutions, just another data point. Most gamers play at native resolution, high refresh. Just because this card scales better at 4K doesn't mean anyone playing at 1440p or below shouldn't buy it.

Now explain, without using big-boy words, what exactly you are trying to convince me of.
Posted on Reply
#168
x4it3n
Neo_MorpheusFor the ones blaming AMD for not competing, hence the higher prices, I saw this from someone on X:

Unfortunately, a lot of people are like that. I have both AMD (laptop) and Nvidia (desktop), but as long as AMD doesn't beat Nvidia on price and performance, I think most people will not buy AMD. Maybe if Nvidia became lazy like Intel and AMD came up with a secret "weapon" like their X3D CPUs, but I don't see that happening anytime soon.
Vayra86Regardless, you're not getting or seeing the full power of a 4090 or 5090 at these resolutions, which is the initial point that seems hard to grasp for some ;)
Whoever buys those cards for 1080p or 1440p is an idiot, yeah. They're definitely 4K GPUs.
Hxx5090 is 26% faster at 4K than a 4090, not EQUAL. That's all the resolutions they tested. Hope you're following so far.
To be fair, most reviewers online find an average of +32% at 4K, with very demanding RT/PT games getting closer to 40%.
Posted on Reply
#169
Vayra86
Hxxbro I think you are confused or you're trolling me, one or the other. Or let me know if you are just looking at outliers; I'm looking at "relative performance" again. Makes sense so far? Also, do you understand what the word "parity" means?

www.techpowerup.com/review/nvidia-geforce-rtx-5090-founders-edition/34.html

5090 is 10% faster at 1080p than a 4090, not EQUAL. Okay, moving on.
5090 is 17% faster at 1440p than a 4090, not EQUAL. Okay, moving on.
5090 is 26% faster at 4K than a 4090, not EQUAL. That's all the resolutions they tested. Hope you're following so far.

If you're saying that 10% at 1080p is parity, well: 1) that's wrong; 2) realistically it's a meaningless comparison between different resolutions, just another data point. Most people play at native resolution, high refresh. Just because this card scales better at 4K doesn't mean no one playing at 1440p will or should buy it.

Now explain, without using big-boy words, what exactly you are trying to convince me of.
*Near parity, which I think applies if we're talking about 10-17%, no?

The original statement was, I believe, that the 5080 will be as fast as a 4090. Then we started pulling in weird numbers from TPU's reviews to make that point, and I'm saying you're wrong.
Posted on Reply
#170
Zach_01
3valatzyThis time it's more about the death of Moore's Law, which means progress will slow to a crawl.
I am sure that in the process of backporting from TSMC 3 nm to the old 5 nm process, the parts lost quite a few shaders and quite a bit of performance.

If AMD and Nvidia follow this (stupid) logic, it means the whole universe revolves around "the industry", and everything else - all other chips (CPUs, consoles, phones, refrigerators, washing machines, etc.) - must die.

:kookoo:
Very nice examples... you bring apples, and not even oranges, into the comparison - except desktop CPUs, which are already industry (server) oriented. Talking about AMD, of course.
Do you think we live under a rock?

Gaming/multimedia consoles and all kinds of devices, and you pit them against a chip designer - a chip designer with a top-tier premium architecture for AI/ML at the dawn of the AI era. Yeah... super successful example.
You compare things as different as night and day. Your choices couldn't be poorer. I don't expect you to find anything else, though...

Keep defending nVidia by distorting the facts.
AMD is already on that train since acquiring Xilinx, and will be on GPUs (again) with UDNA.

We are talking about chip designers here, and you are bringing devices into the conversation.

Can't say about the universe, but on this little Earth almost everything revolves around profit, power, and control, and for an AI chip designer the profits are in industry and professional markets.
You can try to bring Apple, the No. 1 company, into this, but it will still be night and day.
Let's all think about how many gamers need a gaming GPU versus how many people need a phone, a wearable, or a PC in general.
I own one GPU but four Apple devices, and I also subscribe to services.

nVidia's market cap, revenue, and earnings grew 5~10x over the last couple of years because they sell 5~10x more GPUs to gamers... (my turn... :kookoo: )
Good one! Very good laughs...

Does Apple grow more steadily over the years because they sell more devices to users year by year? Nope... they found another way. Feel free to search for it...
Posted on Reply
#171
Hxx
Vayra86*Near parity, which I think applies if we're talking about 10-17%, no?

The original statement was, I believe, that the 5080 will be as fast as a 4090. Then we started pulling in weird numbers from TPU's reviews to make that point, and I'm saying you're wrong.
Not how I would describe 10-17%, but I am now with you, bro!! 100%

Yes, I pulled out my crystal ball and wrote exactly that. I should have added that I meant it in terms of "relative performance".
Basically, on Monday or whenever TPU posts their review, we will look here >>>> www.techpowerup.com/gpu-specs/geforce-rtx-5080.c4217
And I would expect to see the relative performance chart showing the 5080 and 4090 at parity. TBD
Posted on Reply
#172
Vayra86
HxxNot how I would describe 10-17%, but I am now with you, bro!! 100%

Yes, I pulled out my crystal ball and wrote exactly that. I should have added that I meant it in terms of "relative performance".
Basically, on Monday or whenever TPU posts their review, we will look here >>>> www.techpowerup.com/gpu-specs/geforce-rtx-5080.c4217
And I would expect to see the relative performance chart showing the 5080 and 4090 at parity. TBD
Keep in mind that even at 4K, TPU's bench suite holds some older games that just run into walls which aren't GPU walls. That's why those numbers are to be taken with a grain of salt, and it's also part of the reason you see bigger gaps in tests elsewhere: smaller game suites with more recent titles. Just looking at shader counts right now, there's no way the 5080 bridges a nearly 6k-shader gap with clocks.

In TPU's testing, we will see the 5090 extend its lead as the bench suite gets newer titles over time.
Posted on Reply
#173
Dragam1337
x4it3nThey do get bottlenecked, but it's not always a CPU bottleneck. Nvidia is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.
Driver overhead IS de facto a CPU bottleneck...
Posted on Reply
#174
vekspec
So with my Suprim X 4080, that'll be only about 9% better, taking into account the higher AIB clocks versus a stock 4080? Wow, RTX 50 is not looking good; hope this is just an outlier and not a trend :p
Posted on Reply
#175
watzupken
I would think the RTX 5080 will struggle to produce any meaningful performance uplift over the RTX 4080 Super in games. Considering that the RTX 5090 needs a ~30% bump in hardware specs (CUDA cores and memory bandwidth), plus a ~30% power increase, to obtain close to a 30% average rasterization gain, and that the spec difference between the RTX 5080 and 4080 is a lot smaller, without multi frame generation this is more like an RTX 4080 Ti.
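A rough sketch of that scaling argument, assuming raster performance tracks the spec bump roughly linearly; core counts are public specs (the 4080 Super figure is added here), and clock and bandwidth effects are ignored:

```python
# Core-count deltas between the 5080 and the 4080 cards. If raster
# gains track the spec bump roughly linearly (as the commenter argues
# from the 5090 vs 4090 data), this is the ballpark uplift to expect
# before clock and bandwidth effects. The 4080 Super count is added
# here from public specs; it is not in the comment above.
cores = {"RTX 4080": 9_728, "RTX 4080 Super": 10_240, "RTX 5080": 10_752}

for base in ("RTX 4080", "RTX 4080 Super"):
    gain = cores["RTX 5080"] / cores[base] - 1
    print(f"RTX 5080 vs {base}: {gain:+.1%} CUDA cores")
# RTX 5080 vs RTX 4080:       +10.5%
# RTX 5080 vs RTX 4080 Super:  +5.0%
```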
Posted on Reply