Thursday, September 26th 2024

NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

Thanks to the renowned NVIDIA hardware leaker kopite7kimi on X, we are getting information about the final versions of NVIDIA's first upcoming wave of GeForce RTX 50 series "Blackwell" graphics cards. The two leaked GPUs are the GeForce RTX 5090 and RTX 5080, which now feature a more significant gap between xx80 and xx90 SKUs. For starters, we have the highest-end GeForce RTX 5090. NVIDIA has decided to use the GB202-300-A1 die and enable 21,760 FP32 CUDA cores on this top-end model. Accompanying the massive 170 SM GPU configuration, the RTX 5090 has 32 GB of GDDR7 memory on a 512-bit bus, with each GDDR7 die running at 28 Gbps. That works out to 1,792 GB/s of memory bandwidth. All of this fits within a 600 W TGP.

When it comes to the GeForce RTX 5080, NVIDIA has decided to further separate its xx80 and xx90 SKUs. The RTX 5080 has 10,752 FP32 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. With GDDR7 running at 28 Gbps, the memory bandwidth is also halved, at 896 GB/s. This SKU uses a GB203-400-A1 die, which is designed to run within a 400 W TGP power envelope. For reference, the RTX 4090 has 68% more CUDA cores than the RTX 4080, while the rumored RTX 5090 has around 102% more CUDA cores than the rumored RTX 5080, meaning NVIDIA is separating its top SKUs even further. We are curious to see at what price points NVIDIA places its upcoming GPUs, so we can compare the generational uplift as well as the widened gap between the xx80 and xx90 models.
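For readers who want to sanity-check the leaked figures, here is a minimal Python sketch of the arithmetic behind the bandwidth and core-count comparisons above; the specs themselves are still rumors, so the numbers are only as good as the leak:

```python
# Sanity-check of the rumored RTX 5090 / RTX 5080 figures (not confirmed by NVIDIA).

def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

rtx_5090 = {"cuda_cores": 21_760, "bus_bits": 512, "gddr7_gbps": 28.0}
rtx_5080 = {"cuda_cores": 10_752, "bus_bits": 256, "gddr7_gbps": 28.0}

print(memory_bandwidth_gb_s(rtx_5090["bus_bits"], rtx_5090["gddr7_gbps"]))  # 1792.0 GB/s
print(memory_bandwidth_gb_s(rtx_5080["bus_bits"], rtx_5080["gddr7_gbps"]))  # 896.0 GB/s

# SKU gap: extra CUDA cores of the xx90 over the xx80
gap = rtx_5090["cuda_cores"] / rtx_5080["cuda_cores"] - 1
print(f"{gap:.0%}")  # ~102%, versus ~68% for the RTX 4090 over the RTX 4080
```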
Sources: kopite7kimi (RTX 5090), kopite7kimi (RTX 5080)

185 Comments on NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

#151
x4it3n
GodrillaA 2-slot all-in-one hybrid design would be atypical for a vanilla flagship; it has never been done before. Although that would be the most likely design, unless they somehow were able to take advantage of the new active silicon cooling, but it's doubtful they would take such a gamble with a halo flagship.

I think they will sooner or later but yeah we'll see! They avoided it until now so I'm curious what they're going to come up with!
Posted on Reply
#152
64K
RuruDid any Titan after the GK110 based ones have any special FP64 performance? Nope. ;)

They've just been glamorized halo-tier cards with an (almost) full die, full memory bandwidth, and a larger VRAM amount. That's why the x90 is the Titan these days, just branded for gamers.


You were faster; looks like we said the same things.
True. All of the Titans after the first one were really just gaming cards. I checked the GeForce site on every one of them as they were released, and they were all marketed there as the greatest gaming card ever in the various marketing blurbs. The only one I can't recall seeing advertised on the GeForce site as a gaming card was the Titan V, but it was an unusual GPU spec-wise and the first single-GPU card priced at $3,000. The Kepler Titan Z was also priced at $3,000, but it was a dual-GPU card. The Titan Z, the GTX 690, and the R9 295X2 marked the end of the SLI/CrossFire era.

The Titans had some appeal to prosumers because of the amount of VRAM. As you said the 3090 and 4090 just filled the slot that the Titan once held. You can think of the 3090 and 3090 Ti as something like the Kepler Titan and the Titan Black.
Posted on Reply
#153
x4it3n
LycanwolfenHonestly, at this point I hope this is a make-or-break point for Nvidia graphics cards. In my point of view, Nvidia should sell off its video card dept. Maybe then we might see some massive improvements. Looking at the specs so far, it's just another machine-learning upscaler with DLSS. I do not want an upscaler; I want a video card that can run native 4K or 8K, no upscaler or anything! Nvidia used to make graphics cards like that. Had a catchy logo too: Nvidia, the way it's meant to be played. GTX vs RTX, well, now it's all about the ray tracing. Funny, I remember games having ray tracing without needing a video card for it. I have looked at some ray-tracing conversions, and quite frankly the games look worse than the originals. Why the hell would I want reflections off everything, the walls, the floors? If you fire a weapon in a coal mine with the lights off, guess what, there are no reflections; black objects absorb light, they don't reflect it.
In a way I wish they would go all-in on raw performance (rasterization) with 4K and 8K gaming as their target, but Ray Tracing can look amazing when done well! Metro Exodus Enhanced Edition is a perfect example; the game was rebuilt from the ground up with RT and looks amazing! Path Tracing is the future but is too demanding right now, hence the need for DLSS and Frame Generation...
Posted on Reply
#154
igormp
LycanwolfenIn my point of view, Nvidia should sell off its video card dept.
Why would they do that? The hardware that goes in your video card is the exact same that ends up as inference accelerators on the cloud, or in workstations for professionals.

They are just making products with the above consumers in mind first, and then giving gamers something "good enough" at a high markup that most folks will pay anyway.

Most people here can complain and say they won't buy it, but Nvidia's revenue increase for their gaming sector tells us that the actual market acts the other way around.
Posted on Reply
#155
x4it3n
igormpWhy would they do that? The hardware that goes in your video card is the exact same that ends up as inference accelerators on the cloud, or in workstations for professionals.

They are just making products with the above consumers in mind first, and then giving gamers something "good enough" at a high markup that most folks will pay anyway.

Most people here can complain and say they won't buy it, but Nvidia's revenue increase for their gaming sector tells us that the actual market acts the other way around.
Agree! People don't have to buy anything if they don't want to. Nobody is forcing them to buy!

There will always be rich and poor people, but people have the choice to vote with their wallets. And to be fair, if people had never bought GPUs at crazy high prices during the pandemic, we would not be in this situation...

Also, chip manufacturing and component prices keep rising, so those companies won't stop charging more anytime soon.
Posted on Reply
#156
Knight47
So when are they coming out? I was told I should wait a few months for the 5070 instead of getting a 4070 Ti Super for €843 now. Can I get a 5070 that beats the Ti Super around Xmas for €800?
Posted on Reply
#157
Krit
Decent prices will only come in spring.
Posted on Reply
#158
Knight47
KritDecent prices will only come in spring.
For the 5070 too, or will they wait half a year to release it like they did with the 4070? If so, I would need to wait at least a year for it. The 4070 Ti Super is close to its all-time-low price, down from €950 to €843 on alza.de
Posted on Reply
#159
Krit
Knight47For the 5070 too, or will they wait half a year to release it like they did with the 4070? If so, I would need to wait at least a year for it. The 4070 Ti Super is close to its all-time-low price, down from €950 to €843 on alza.de
The RTX 5090 and RTX 5080 will come first; as for the midrange RTX 5070, it's hard to say when it will come out. You will need to wait at least 8 months from now to get a deal.
Posted on Reply
#160
boomheadshot8
I hope the die is large, since that would improve cooling.
But I'm sure it won't be.
Posted on Reply
#161
Knight47
KritThe RTX 5090 and RTX 5080 will come first; as for the midrange RTX 5070, it's hard to say when it will come out. You will need to wait at least 8 months from now to get a deal.
I ain't got time to wait almost a year when my new 2K 200 Hz monitor is on the way.
Posted on Reply
#162
Krit
Knight47I ain't got time to wait almost a year when my new 2K 200 Hz monitor is on the way.
Look at a used RTX 4070 Ti or RX 7900 XT. It's rare to find one, but maybe you'll get lucky. Buying a new GPU at the end of its life, right before the new ones come out, is not a great idea.
Posted on Reply
#163
N/A
KritLook at a used RTX 4070 Ti or RX 7900 XT. It's rare to find one, but maybe you'll get lucky. Buying a new GPU at the end of its life, right before the new ones come out, is not a great idea.
You won't get a good deal on that joke of a 12 GB card; plus, there's the 4070 Super, new, that does the same.
Knight47So when are they coming out? I was told I should wait a few months for the 5070 instead of getting a 4070 Ti Super for €843 now. Can I get a 5070 that beats the Ti Super around Xmas for €800?
Are you sure about the 5070? It could be just 12 GB. If you must buy now, I still suggest getting a 4080 Super for €999 and enjoying the full 64 MB L2$ instead.
Although the 5080 might cost the same and have all the new DLSS/FG and RT goodies that would work better, plus a revised, more efficient thread engine.
Posted on Reply
#164
64K
Well,
Knight47I aint got time to wait almost a year when my new 2k 200Hz monitor is on the way.
The rumor is the 5090 and 5080 will at least be officially announced at CES in early January. They may get released then too but no one knows right now. My guess is the 5070 and 5060 will be a few months after that but that's just another guess.

There is also the possibility that the Blackwells will not be in enough supply to meet demand, which leads to retailer price gouging as always. My plan is to wait about a year from now to pick up a 5090 after the dust settles, so it was worth it to me to pick up a 4070 Super to hold me over until then. My 2070 Super, like yours, wasn't really cutting it at 1440p anymore, and I made matters worse by deciding to give 4K a try, which is great btw. All I have to do is put off playing any games that are GPU intensive for about a year.

If you don't want to wait on the 5070 you don't have to. You can pick up a 4070 Super and sell it later but as you know that decision will cost you a few hundred dollars.
Posted on Reply
#165
pk67
GodrillaAlthough that would be the most likely design, unless they somehow were able to take advantage of the new active silicon cooling, but it's doubtful they would take such a gamble with a halo flagship.

Interesting video, but it is innovative cooling for medium-power mobile devices, not for a power-hungry desktop GPU. 5 W of heat removed per 1 W of supplied power is not an impressive ratio. Scaling it up to a 400 W GPU, it would need 80 W just to power this kind of cooler alone.
Posted on Reply
#166
x4it3n
64KWell,


The rumor is the 5090 and 5080 will at least be officially announced at CES in early January. They may get released then too but no one knows right now. My guess is the 5070 and 5060 will be a few months after that but that's just another guess.

There is also the possibility that the Blackwells will not be in enough supply to meet demand, which leads to retailer price gouging as always. My plan is to wait about a year from now to pick up a 5090 after the dust settles, so it was worth it to me to pick up a 4070 Super to hold me over until then. My 2070 Super, like yours, wasn't really cutting it at 1440p anymore, and I made matters worse by deciding to give 4K a try, which is great btw. All I have to do is put off playing any games that are GPU intensive for about a year.

If you don't want to wait on the 5070 you don't have to. You can pick up a 4070 Super and sell it later but as you know that decision will cost you a few hundred dollars.
If it's like Lovelace, then the 5090 is not going to drop in price, more like the opposite... Also, the 5090 might sell for $2,000, so I hope you're saving already.
Posted on Reply
#167
mxthunder
Probably going to sit this generation out unless they happen to be really great cards. Even then, I'd probably only buy one 5090 or 5080 and waterfall-upgrade all my rigs. All the PCs in my house have enough GPU power for now, except my 4K setups.
Posted on Reply
#168
Dimitriman
Nobody but the most avid overclockers will buy the 5090, not because it will 100% cost >= $2,500, but because it will break the weight, size, and power-draw limits of most systems. A 600 W GPU is something meant for data centers, not home computers.
64KGood lord. Stop with the hysteria. There will be no huge uprising against Nvidia. The days when PC gaming will only be affordable to the Warren Buffetts of the world are not coming either. PC gaming has gotten more expensive even factoring in inflation, but it's not time for panic. We've weathered several mining crazes and a pandemic that bred scalpers causing absurd prices, and we will survive the AI craze as well. Nvidia isn't going to price themselves out of the gaming market. Huang is not a damn fool. We are talking billions and billions of dollars for them. Yes, the AI market is many times bigger, but it's not the entire market. AMD will still be making entry-level and midrange GPUs, and there's even a possibility that Intel will survive in the dGPU market. Software will continue to improve. You don't have to buy a 5080 or 5090 to still have a great experience with PC gaming, unless you are one of the 4% gaming at 4K, but the other 96% will be fine.

Hell, even the argument that the dGPU prices are driving the masses to consoles is questionable. From what I've heard the PS5 Pro is $700 and the next gen consoles may even be more. Gaming is getting more expensive.
But Nvidia's 5080- and 5090-class GPUs enjoy a monopoly segment, therefore they can price them 50% higher than the previous gen and still not be out of the market. They obviously know the price elasticity of these products, and knowing Jensen's leadership style very well as we all do, he will absolutely price these two at the very limit of that curve. $2,500 for the 5090 and $1,500 for the 5080 sound very "reasonable" to me in a monopolistic scenario.
Posted on Reply
#169
igormp
DimitrimanNobody but the most avid overclockers will buy the 5090, not because it will 100% cost >= $2,500, but because it will break the weight, size, and power-draw limits of most systems. A 600 W GPU is something meant for data centers, not home computers.
With 32 GB of GDDR7 it's going to be a beast for machine learning, while being way cheaper than the enterprise offerings.
Just power limit it to a reasonable value (~300 W or so) and you're golden. Rumors still point to it being a dual-slot GPU, idk how, but it'd be nice nonetheless.

Reminder that the 4090 also supports 600 W configs, but most people don't run it that way. Saying that the 5090 will be a 600 W GPU doesn't mean that it'll draw 600 W all the time.
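In case it's useful, here's a minimal, hedged sketch of that kind of power limiting done programmatically through NVML (roughly what `nvidia-smi -pl 300` does from a shell). It assumes the `pynvml` package and admin/root privileges, and the 300 W target is just the value suggested above, not an official figure:

```python
# Sketch: cap the first GPU's power limit via NVML (assumes the pynvml package;
# requires admin/root privileges; 300 W is only the value suggested above).
import pynvml

TARGET_WATTS = 300

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    # NVML works in milliwatts; clamp the request to the board's allowed range.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    target_mw = max(min_mw, min(TARGET_WATTS * 1000, max_mw))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    new_limit = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    print(f"Power limit now {new_limit / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```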
Posted on Reply
#170
pk67
It should be a good card for demanding RT games or modeling, but with a very narrow window for machine learning. For the latter we will soon see much more energy-efficient alternatives for deep neural networks, or even radically different alternatives like Kolmogorov-Arnold Networks.
But for a year or two it would be the best available off-the-shelf solution, depending on scalper demand.

edit
@igormp
edit2
In a 5-6 year timeframe, all soldered-memory GPUs will be entry-level class (focused on games) or obsolete imho.
Large, modular, fiber-connected memory banks will come to the mainstream at the edge for machine-learning tasks; that's my prediction.
So a 5090 with 32 or 48 GB doesn't excite me.
Posted on Reply
#171
igormp
pk67For the latter we will soon see much more energy-efficient alternatives for deep neural networks
Doubt. There are many other alternatives in place, but the availability and the software stack for those is a pain.
Even ROCm is still a pain to use on AMD GPUs.
Nvidia will likely still reign over that area for the next 5 years without issues.
pk67Kolmogorov-Arnold Networks
While the idea is nice, KANs are mathematically equivalent to your good old MLPs, and can also be run on GPUs.
I get your idea with "radically different alternatives", but your example is not one of those haha
Posted on Reply
#172
pk67
igormpWhile the idea is nice, KANs are mathematically equivalent to your good old MLPs, and can also be run on GPUs.
I get your idea with "radically different alternatives", but your example is not one of those haha
Take my "radically different " as Look up Tables are radically different hardware blocks than Matrix Multipliers.
I dont have on my mind of outer space solutions.:p
edit
semiengineering.com/hardware-acceleration-approach-for-kan-via-algorithm-hardware-co-design/
^ link to KAN implementation article ^
igormpNvidia will likely still reign over that area for the next 5 years without issues.
You are definitely overestimating the AI bubble's timeframe imho.
Yes, they could reign, but with quite new, much more energy-efficient architectures, or they will die like the dinos did 65 million years ago.
Posted on Reply
#173
igormp
pk67Take my "radically different " as Look up Tables are radically different hardware blocks than Matrix Multipliers.
I dont have on my mind of outer space solutions.:p
edit
semiengineering.com/hardware-acceleration-approach-for-kan-via-algorithm-hardware-co-design/
^ link to KAN implementation article ^
FPGAs have been used for many different designs already (so basically your idea of LUTs), but are not the most efficient parts for the current needs for most stuff.

As I said before, KANs and MLP are mathematically equivalent, so you can implement KANs in ways that are more fitting to GPUs (as in matmuls) as well, while still maintaining the same capabilities of the original idea:
arxiv.org/abs/2406.02075v2
arxiv.org/abs/2408.11200
pk67You are definitely overestimating the AI bubble's timeframe imho.
Yes, they could reign, but with quite new, much more energy-efficient architectures, or they will die like the dinos did 65 million years ago.
I could say the same for your ideas about decoupled memory, but I believe neither of us have a crystal ball, right?
Posted on Reply
#174
pk67
igormpI could say the same for your ideas about decoupled memory, but I believe neither of us have a crystal ball, right?
You don't need a crystal ball to see the crazy pace at which changes are happening. Yes, maybe I'm wrong by a few years, but that doesn't change the final result.
Let me explain my point of view in detail.
If you need, let's say, 500 GB or 1 TB to run an advanced LLM on your hardware, you don't want it soldered to a toy like the 5090, which will have a very limited lifespan given the pace of change in the IC industry alone.
Decoupled memory is a one-time expense, but its lifespan is two or three times as long as the lifespan of a typical GPU.
If you don't believe me, just check how many GPU generations GDDR5 or GDDR6 stayed in use.
So I assume the same will be valid for decoupled memories too: they will fit many GPU generations, 3 or even 4 of them.
So if the optical interface is not prohibitively expensive, it will fairly soon replace soldered memory in AI-oriented advanced hardware.
Entry-level accelerators would still have a relatively small amount of soldered, wired memory.
Posted on Reply
#175
igormp
pk67If you need, let's say, 500 GB or 1 TB to run an advanced LLM on your hardware, you don't want it soldered to a toy like the 5090, which will have a very limited lifespan given the pace of change in the IC industry alone.
You don't use toy hardware for such requirements tho. No one is trying to fine tune the actual large models in their basements, that's why the large H100 deployments are a thing.

3090s are still plenty in use (heck, I have 2 myself), and A100s are still widely used 4 years after their launch.
pk67Decoupled memory is a one-time expense, but its lifespan is two or three times as long as the lifespan of a typical GPU.
There's no decoupled solution that provides the same bandwidth that soldered memory does, which is of utmost importance for something like LLMs, which are really bandwidth-bound.
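To put rough numbers on "bandwidth-bound": in single-stream LLM decoding, every generated token has to stream essentially all of the weights from VRAM, so memory bandwidth divided by model size gives a hard ceiling on tokens per second. A back-of-envelope sketch with assumed figures (the 16 GB weight size and the rumored 5090 bandwidth are illustrative, not measurements):

```python
# Rough ceiling on single-batch LLM decode speed: bandwidth / bytes of weights
# read per token (ignores KV-cache traffic and compute). Assumed numbers only.
def decode_tokens_per_second_ceiling(bandwidth_gb_s: float, weights_gb: float) -> float:
    return bandwidth_gb_s / weights_gb

WEIGHTS_GB = 16  # e.g. a ~30B-parameter model quantized to roughly 4 bits per weight

print(decode_tokens_per_second_ceiling(1792, WEIGHTS_GB))  # rumored 5090-class: ~112 tok/s
print(decode_tokens_per_second_ceiling(936, WEIGHTS_GB))   # 3090-class: ~58 tok/s
```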
pk67So if the optical interface is not prohibitively expensive, it will fairly soon replace soldered memory in AI-oriented advanced hardware.
Mind providing any lead on such an offering? Current interconnects are the major bottlenecks in all clustered systems. Just saying "optical interface" doesn't mean much, since the current solutions are at least one order of magnitude behind our soldered interfaces.
pk67Entry-level accelerators would still have a relatively small amount of soldered, wired memory.
Something like a 5090 would fit in this. It's considered an entry-level accelerator for all intents and purposes. The term "gpu-poor" is a good example of that.

I can see the point of your idea, but it is not something that will take place at all within the next 5 years, and it may take 10 years or more to become feasible. One pretty clear example of that is PCIe, with the current version 5.0 still being a major bottleneck, version 6.0 only coming to market next year, and 7.0 having its spec finished but still way behind the likes of NVLink (PCIe 7.0 bandwidth will be somewhere between NVLink 2.0~3.0, which were the Volta/Ampere links).
I believe NVLink is the fastest in-node interconnect in use in the market at the moment, and even it is still a bottleneck compared to the actual GPU memory.
Posted on Reply