Friday, January 24th 2025
New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
A set of newly leaked benchmarks has revealed the performance capabilities of NVIDIA's upcoming RTX 5080 GPU. Scheduled to launch alongside the RTX 5090 on January 30, the GPU was spotted on Geekbench in OpenCL and Vulkan benchmark tests—and based on the results, it may not earn a spot among the best graphics cards. The tested device was an MSI-branded RTX 5080, running in a system identified as MSI model MS-7E62. The setup paired AMD's Ryzen 7 9800X3D processor, which many consider one of the best CPUs for gaming, with an MSI MPG 850 Edge TI Wi-Fi motherboard and 32 GB of DDR5-6000 memory.
The benchmark results show the RTX 5080 scoring 261,836 points in Vulkan and 256,138 points in OpenCL. Compared to its direct predecessor, the RTX 4080, that is a 22% boost in Vulkan performance but only a modest 6.7% gain in OpenCL. Reddit user TruthPhoenixV also found that on the Blender Open Data platform the GPU posted a median score of 9,063.77, which is 9.4% higher than the RTX 4080 and 8.2% better than the RTX 4080 Super. Even with these improvements, the RTX 5080 might not outperform the current top-tier RTX 4090. Historically, NVIDIA's 80-class GPUs have beaten the 90-class GPUs of the previous generation, but these early numbers suggest that trend may not continue with the RTX 5080.

The RTX 5080 uses NVIDIA's latest Blackwell architecture, with 10,752 CUDA cores spread across 84 Streaming Multiprocessors (SMs), versus 9,728 cores in the RTX 4080. It carries 16 GB of GDDR7 memory on a 256-bit bus. NVIDIA says it can deliver 1,801 TOPS of AI performance through its Tensor Cores and 171 TeraFLOPS of ray tracing performance through its RT Cores.
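As a sanity check on those percentages, here is a minimal sketch that works backwards from the leaked scores to the implied RTX 4080 baselines; the baselines are derived from the quoted uplifts, not independently measured:

```python
# Leaked Geekbench scores for the RTX 5080, from the listing above.
scores_5080 = {"Vulkan": 261_836, "OpenCL": 256_138}
# Reported uplift over the RTX 4080 in each API.
uplift = {"Vulkan": 0.22, "OpenCL": 0.067}

for api, score in scores_5080.items():
    implied_4080 = score / (1 + uplift[api])
    print(f"{api}: RTX 5080 = {score:,}, implied RTX 4080 baseline = {implied_4080:,.0f}")
# Vulkan: implied 4080 baseline ~214,620; OpenCL: ~240,054
```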
That said, it's important to note that these benchmark results have not been verified, so we should wait for the review embargo to lift before drawing conclusions.
Sources:
DigitalTrends, TruthPhoenixV
176 Comments on New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090
NVIDIA GeForce RTX 4090 Specs | TechPowerUp GPU Database
The reason the 4090 scales so poorly is a memory bandwidth bottleneck. The 4090 has 60% more cores than the 4080 SUPER but only 37% more bandwidth, so not all of its cores can be kept fed. Memory bandwidth doesn't scale like CUDA cores or core clocks either, so you would need more than 60% more bandwidth (vs. the 4080 SUPER) to fill them all! Hence the 5090 having a 512-bit bus and almost 1.8 TB/s of bandwidth. But knowing that the 5090 has 33% more cores than the 4090, it is probably memory starved too lol
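A quick back-of-the-envelope check of those ratios, as a minimal sketch; the spec figures are the commonly published ones (e.g., from TechPowerUp's GPU database linked above), not something taken from this thread:

```python
# (CUDA cores, memory bandwidth in GB/s) for each card; approximate published specs.
cards = {
    "RTX 4080 SUPER": (10240, 736),
    "RTX 4090":       (16384, 1008),
    "RTX 5090":       (21760, 1792),
}

def compare(base: str, new: str) -> None:
    """Print how many more cores and how much more bandwidth `new` has vs `base`."""
    (c0, b0), (c1, b1) = cards[base], cards[new]
    print(f"{new} vs {base}: +{(c1 / c0 - 1) * 100:.0f}% cores, "
          f"+{(b1 / b0 - 1) * 100:.0f}% bandwidth")

compare("RTX 4080 SUPER", "RTX 4090")  # +60% cores, +37% bandwidth
compare("RTX 4090", "RTX 5090")        # +33% cores, +78% bandwidth
```

So the 5090 at least adds bandwidth faster than it adds cores, unlike the 4090 did.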
There are no pure graphics card companies anymore, and there probably never will be again. So raster gets brushed aside, AI features get pushed to the forefront, and Mark Cerny is just saying what's left to say to try and keep people from realizing what happened.
Or are you suggesting that had the AI boom not happened Nvidia would have given up on the graphics market? Or that the AI boom just so happened to perfectly coincide with GPU raster suddenly being incredibly difficult to meaningfully improve? What a weird coincidence, that.
1. They knew this series was crap, which is why all of the BS marketing speak.
2. There aren't a lot of them being made right now... a LIMITED INITIAL RUN to keep prices up with what they have out.
The other reason is, once again, MARKET SEGMENTATION.
They'll crank out the SUPA and the Ti later. Not a lot, but just enough to keep market prices from falling hard.
Cya in 2027, I guess.
I know...'boo this man'. But fair is fair.
All it really has to do is beat the maximum potential of N48 while itself running stock...which I'm sure nVIDIA will make sure it does, by .00000001% (only minor exaggeration) on avg.
They know that, you know that. We all know that. That's how they do it. Also, gotta make it so 9216sp/18GB/192-bit is an upgrade on 3nm and piss all the then-5080 owners off.
It's the same old song and dance.
Buy the nVIDIA card, or buy the cheaper AMD card and overclock it to the speed of the nVIDIA card, more or less. AMD even knows this, and will probably adjust their cards' capabilities accordingly if possible, I would imagine.
I understand some people have different points of view, but to me the answer has always been obvious.
I'll take the $500 16GB "4060 Ti", the $600-700 "4080" with 20GB, and the faster-than-a-4080 but couple-hundred-dollars-less 24GB card.
Yeah, you call them overclocked 7800XT/7900XT/7900XTX...whatever.
Or you can buy a good-but-expensive nVIDIA 90-class card that lasts 2+ cycles, at 2x margin and 2x what most anyone would pay, and/or overclock a similar-performing, more-spendy nVIDIA card for no tangible resolution/fps gain.
Props if you can justify the former.
I have no doubt this gen will be similar, and in short order we prolly won't even be having the DLSS/FSR argument (for the most part).
It's still just about RT. I'll still argue it doesn't matter until the next gen of cards (3nm), when it coalesces.
If you want in early to pay for mostly nVIDIA-paid showpieces most people don't actually play, more power to you. You keep running through that one 2077 spot, gamer. Enjoy your 'bar' bench(mark).
BTW, does nVIDIA demand that for review samples or do people subject themselves to that willingly? If they need help, blink twice next time you talk about it. Maybe we can send The Fixer.
Ok, or maybe just get demands amended.
A 9070 OC probably won't beat a stock 4070 Ti Super/7900XT, but it will be damn close and a hell of a lot cheaper.
A 9070 XT OC will probably not quite hit a stock 5070 Ti, but damn close and a hell of a lot cheaper.
If there is a 9070 XT(X/I), it will probably not OC to beat a stock 5080, but damn close and a hell of a lot cheaper.
Give or take; "damn close" meaning anywhere from actual parity to insignificantly faster. At least, that's my theory of how nVIDIA is planning for it to go down. Whether AMD can pull it off, IDK, but that's certainly the hope.
IOW, replacements for the 7800XT/7900GRE, 7900XT, and 7900XTX, but with less RAM in the latter cases (and less absolute [OC] performance) to facilitate a cheaper price.
AMD got it right the first time, but cheaper is good. I still think the 7800/7900 series will be good value wherever they settle (given the extra ram, their own OC potential, and if given FSR4 support).
I don't think anything is going to keep me from holding out for the next gen and whatever is ~96-112 ROPs.
Pair something like 6×1920sp = 11520sp with 256-bit/40Gbps and 24GB of RAM; that'd be choice: a 4090 for conceivably half the price.
Nvidia might make something slightly better, but I bet the cut-down model is similar to that. Plus, knowing them, they'll cut it down to 224-bit and 22GB or something.
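For what it's worth, here's the bandwidth math behind that hypothetical config as a minimal sketch; the card itself is pure speculation from the post above, and the formula is just bus width × per-pin data rate / 8:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Hypothetical next-gen config from the post above: 256-bit at 40 Gbps.
print(bandwidth_gbs(256, 40))   # 1280.0 GB/s, well above the 4090's ~1008 GB/s
# The speculated cut-down 224-bit variant at the same data rate:
print(bandwidth_gbs(224, 40))   # 1120.0 GB/s
```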
I truly do think >10752 shaders on 3nm will be good-nuff for a long time.
Unless someone wants to sell me a 4090 for that price earlier, but I don't think that's going to happen.
I like the potential AMD options this go-round (in theory), and wouldn't fault anyone for buying them (especially if you upgrade every gen or think 16GB is enough for your setup for a while)...
...but these RTX 50s kinda look like a joke when you think about what's coming in a ~year, or the comparative value wrt raster/vram.
I still think nVIDIA only makes sense if you have to have this-gen additive RT and are willing to pay for it.
In both price AND higher-end playable performance (compared to w/o). Some people might be; this guy ain't.
I get it, "you can wait forever" (which is a straw-man argument bc it's really about when to jump on to get the best value/longevity)...but next gen we'll probably see sane-priced 4090 performance and...
...>16GB vram. Even >16GB on 192-bit cards (if they exist) with 5080 perf. The fact the 5080 doesn't have 24GB (even if next-gen's same-bracket card has slightly higher raster performance) is borderline criminal.
Naw. I'll wait. For high-end performance at what high-end cards USED to cost...or what I'm *personally* willing to spend for a decent 4+ year card.
If I'm going to spend that much, it had better be at least at parity with the PS6 in every possible way. While that system might have ~32GB of combined memory, I doubt it will use more than 24GB for the GPU.
16? Prolly. 18? Idk.
To me this all adds up to "wait"; for 18GB 192-bit 5080-performance at the least.
For price/VRAM alone, if not stabilized RT performance (which, even at 9216sp, could be higher than the 5080's potential, given clock differences/capabilities). I don't know if many people realize that.
Protest? What exactly? The fact that you have ants in your pants?
Me personally, maximum CUDA and VRAM, which makes the 5090 the obvious jump from a 3090 for a 2.5x rendering uplift per card. But we'll see; there are still plenty of compute optimizations coming in the next 4-6 months, so it may be even nicer. For gaming, surely nobody needs a 5090.
The biggest thing this gen was the new DLSS; performance mode old vs. new is a huge difference. I can legitimately game on performance mode now; it doesn't look totally crap like before. Not saying it looks great, but I think it's better than playing at 1440p native on a 1440p monitor. The 50 series is a disappointment; AMD isn't.
I haven't tried the new Transformer model, but personally I only like Quality DLSS even at 4K, or 1440p UW DLAA. Hoping the new version makes Balanced at 4K usable for me.
See, a key factor in business is making the other side think they've won...and you can imagine how AIBs may have thought that when they got better bins/allocation after the initial FE releases.
Can't you see Huang telling AIBs in the beginning, "This is a novelty, we're not going to compete against you..." ... *under breath* ... "right now." Fast forward to today.
They were originally sold as just that: a novelty. The more you spin, the more you margin. Novelty wears off; undercut your "partners"/competition bc you can. Shrewd, but it makes sense. Literal cents.
Isn't it also fascinating that it's not only the cheapest, but that clearly a metric shitload of design effort was put in to make sure it's a two-slot card their "partners" couldn't compete with in size (right now)?
I laugh when people still haven't figured out how Huang does business. Not talking about you, I just mean in general. This is the guy that cleverly used his CES speech to get cheaper memory pricing.
AMD likely would've said "We don't use Micron ram because it's literal comparative shit. They can't even make 20gbps GDDR6 and needed pam4 to make it which is fucking bananas."
nVIDIA be like "Samsung/Hynix be cheaper, don't care if it's not shit...(actually we do, but we'll literally use/promote Micron if you don't)." BOLO for 36Gbps RAM on next-gen nVIDIA cards while AMD has 40Gbps...
You think that larger L2 is there for a design reason? Yeah, it is. The design reason is that they can buy cheaper RAM from Micron that doesn't perform as well, by roughly the difference in cache versus AMD.
(Sorry that Hynix sale sheet loads so slow, they usually keep that shit pretty well-hidden now-a-days, so that's the best I can do.)
The man is a literal genius, he truly is. There are so many things that go on that people don't notice wrt their choices, things people think are coincidence, happenstance, or "slips of the tongue".
EVERYTHING he/they does/do is planned wayyy in advance. Sometimes years, with plans to use as leverage or to acclimate the market to what's financially positive for them. Just like RT, etc.
I could go on for literal pages about how many things he does which almost nobody appears to notice. Sometimes people complain about them, but he's taught people to think it's a 'joke'. Genius.
FWIW, the value in the 1080Ti/2080Ti was on the used market, because of the cost principle you see. The prices dropped SUBSTANTIALLY pretty quickly, and they were the best deal for a good, long while.
For instance, the 2080Ti looked bad vs. the 3080, bc that's how nVIDIA do, but given that 8nm was shit and 12(16)nm was not, and the overclocking difference was massive...the real performance difference was negligible.
Pretty well-kept secret, I think, but dare to compare (even look at W1z's OC scores for both cards). IMHO it did lead to the idea of the only really good value in nVIDIA cards:
1. Buy the 90-class new (or the late-releasing Ti, back in the day) on a new node...absurdly overpriced, but specced above the rez/fps cutoffs at maxed settings in most games vs. their counterparts.
2. Buy the 90/Ti used when the next 80 comes out, for cheap, bc it looks bad in reviews. That 90 will often still perform at the same tier as that 80, and better than the new AMD competition, for less.
3. This excludes the 3000 series bc it's kinda shit. Not trying to be mean to those owners... It's just that Samsung's node sucked and the scaling doesn't line up with anything before or since.
That won't hold true this gen, but it will for 3nm vs. the 4090. That said, the 4090 will have lasted its original owners two generations instead of one. In total those cards could last a very long time, simply bc of the node/ram.
Huang is right about "the more you buy, the more you save" in the sense that the upper Ti/90 cards truly are set up to last an extra gen, hence the upcharge.
When you own a market, this happens. They can expect and/or seed and/or pay devs to target whatever they want to separate the performance tiers of their cards, and they do. Sometimes by 1fps.
There's a reason why the 2080ti is still on the review graph and still averages 1440p60 according to W1z's suite. They want to hammer that point home about the 'highest-end longevity' to excuse price.
The 5090 is wonky simply bc it's limited by power; it looks very back-ported from 3nm. Weird card. Cool to see the best performance possible on 4nm, though (factoring in the higher clocks for yields).
Don't get me wrong, I'd never buy one new...but I see the appeal for those that want the best or see it as a true investment on a new node. 1080Ti took forever to die, same for 2080Ti.
4090 probably the same. It's not by accident. Neither is the fact they know you won't upgrade for another generation, hence the added margin of that lost sale factored into price.
It's stuff like this that makes me recommend AMD.
What nVIDIA does wrt marketing/planning truly does work. But it's dirty, bc so much of it is limited/false innovation to spur the largest margins...It's unnatural how the products are separated.
(and obsoleted to the connoisseur...outside a '90'.)
I loved recommending the 2080Ti to people when it was a steal, but I don't know if that'll happen with the 4090.
6800XTs are cheap and damn good for what they are (if you overclock them they're VERY similar to a 7800XT OC; good-nuff for many; essentially a stock 4070Ti with 16GB, in raster).
Also, the vanilla 9070 will probably vicariously make those same cards even cheaper before long, so that's probably the new value basement. Not bad for ~/<$400!
;) You can stop thinking. Just apply logic and count the shaders. It's not rocket science. They simply keep the 4090 relevant, and by doing so sell the illusion that the card holds its value, implying the x90s are very cost-effective. Remember... the more you buy, the more you save. This has rung true for the high end, which should be read as "more". The midrange is a huge clusterfuck now, complete stagnation; it's just a replacement for anything that broke.
Considering there is no real difference between Ada and Blackwell besides cool PowerPoint slides selling half-truths, the GPU stack is populated adequately with $1k, $1.5k, and $2k cards.
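Taking "count the shaders" literally for a second, as a minimal sketch; the 4090 figure is the published spec, the 5080 figure is from the article above:

```python
# CUDA core counts: RTX 5080 from the article above, RTX 4090 from published specs.
rtx_5080_cores = 10752
rtx_4090_cores = 16384

deficit = (rtx_4090_cores / rtx_5080_cores - 1) * 100
print(f"The 4090 has {deficit:.0f}% more shaders than the 5080.")  # ~52% more
# Absent a huge per-core uplift, the 5080 can't close that gap on shader count alone.
```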
ASML’s new machines are too expensive: TSMC - Taipei Times