Thursday, September 26th 2024

NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

Thanks to the renowned NVIDIA hardware leaker kopite7kimi on X, we are getting information about the final versions of NVIDIA's first upcoming wave of GeForce RTX 50 series "Blackwell" graphics cards. The two leaked GPUs are the GeForce RTX 5090 and RTX 5080, which now show a more significant gap between the xx80 and xx90 SKUs. For starters, we have the highest-end GeForce RTX 5090. NVIDIA has decided to use the GB202-300-A1 die and enable 21,760 FP32 CUDA cores on this top-end model. Accompanying the massive 170 SM GPU configuration, the RTX 5090 carries 32 GB of GDDR7 memory on a 512-bit bus, with each GDDR7 die running at 28 Gbps. This translates to 1,792 GB/s of memory bandwidth. All of this is confined to a 600 W TGP.
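As a quick sanity check on the leaked figures, peak GDDR bandwidth follows directly from bus width and per-pin data rate. This is a back-of-envelope sketch of the standard formula, ignoring real-world efficiency:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8
def gddr_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# Leaked RTX 5090 configuration: 512-bit bus, 28 Gbps GDDR7
print(gddr_bandwidth_gbps(512, 28))  # -> 1792.0
```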

When it comes to the GeForce RTX 5080, NVIDIA has decided to further separate its xx80 and xx90 SKUs. The RTX 5080 has 10,752 FP32 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. With GDDR7 running at 28 Gbps, the memory bandwidth is also halved to 896 GB/s. This SKU uses a GB203-400-A1 die, which is designed to run within a 400 W TGP power envelope. For reference, the RTX 4090 has 68% more CUDA cores than the RTX 4080. The rumored RTX 5090 has around 102% more CUDA cores than the rumored RTX 5080, which means that NVIDIA is separating its top SKUs even more. We are curious to see at what price points NVIDIA places its upcoming GPUs, so that we can compare generational updates as well as the widened gap between the xx80 and xx90 models.
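The widened-gap claim can be reproduced directly from the core counts (the Ada figures are the known shipping specs; the Blackwell ones are the leaked numbers above):

```python
# How many percent more CUDA cores the bigger SKU has than the smaller one
def core_gap_percent(upper: int, lower: int) -> float:
    return (upper / lower - 1) * 100

# Ada: RTX 4090 (16,384 cores) vs RTX 4080 (9,728 cores)
print(round(core_gap_percent(16384, 9728)))   # -> 68
# Blackwell (leaked): RTX 5090 (21,760) vs RTX 5080 (10,752)
print(round(core_gap_percent(21760, 10752)))  # -> 102
```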
Sources: kopite7kimi (RTX 5090), kopite7kimi (RTX 5080)

185 Comments on NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

#176
pk67
igormpI can see the point of your idea, but it is not something that will take place at all within the next 5 years, and may take 10 years or more to become feasible. One pretty clear example of that is PCIe, with the current version 5.0 being a major bottleneck still, version 6.0 only coming to market next year, and 7.0 having its spec finished, but still way behind the likes of NVLink (PCIe 7.0 bandwidth will be somewhere between NVLink 2.0~3.0, which were Volta/Ampere links).
I believe NVLink is the fastest in-node interconnect in use in the market at the moment, and even it is still a bottleneck compared to the actual GPU memory.
I see I still have to clear one thing up.
When I say soldered memory, I mean memory soldered to the PCB (and wired by PCB traces), not die-to-die soldering, direct bonding, or any form of advanced packaging.
I think we are a bit closer to agreement now.
When I say decoupled memory with an optical interface, I mean (affordable) dynamic memory, not static memory.
Low-latency static memory, or even HBM, is quite a different category because of its (high) cost per bit.

I'm sure that within a 5-year timeframe, decoupled memory will be competitive with GDDR7 soldered to a PCB (GDDR7 as chiplets is quite a different story).
But of course I can be wrong, and we may have to wait a few more years for these fundamental changes in the market.
But even if I'm wrong, it still has only a minor impact on the validity of my conclusion - on that fundamentally changed market, today's 5090 with its soldered GDDR7 RAM will look like a toy. That is my point.
#177
igormp
pk67But even if I'm wrong, it still has only a minor impact on the validity of my conclusion - on that fundamentally changed market, today's 5090 with its soldered GDDR7 RAM will look like a toy. That is my point.
By then a 5090 will (hopefully) look like a toy whether or not your idea comes to be, given enough technological advancement.

If a 5090 is still competitive with the status quo 5+ years from now, something went wrong along the way.
#178
pk67
igormpBy then a 5090 will (hopefully) look like a toy whether or not your idea comes to be, given enough technological advancement.

If a 5090 is still competitive with the status quo 5+ years from now, something went wrong along the way.
Keep in mind that Jensen and his marketing department are telling us otherwise. They are trying to convince mainstream users (and their investors as well) that because Moore's Law is dead, progress must slow down substantially and everything they offer us must be extraordinarily expensive.
But that is a totally false picture.
A similar picture was painted not so long ago in the space industry - access to orbit must be expensive. But Musk showed us otherwise.

edit
There are more factors than pure Moore's Law keeping progress at a fast pace now, like the arms race, US-China rivalry, etc.
So governments are trying to stimulate their high-tech sectors to support their expansion plans and the pace of progress as well.
Marketing departments try to fool us in every possible way, but we should be aware that what looks like a bargain today won't be one after a year or two. We should be more careful about how we spend our money, because future bargains are coming (even if mainstream media outlets are mostly silent about them) - like decoupled memory - so we should be a bit more patient.
#179
Hankieroseman
Somebody needs to make a card to run Samsung's LS57CG952... MONITOR @ 7680x2160, 240 Hz and DP 2.1. No?
#180
x4it3n
pk67Keep in mind that Jensen and his marketing department are telling us otherwise. They are trying to convince mainstream users (and their investors as well) that because Moore's Law is dead, progress must slow down substantially and everything they offer us must be extraordinarily expensive.
But that is a totally false picture.
A similar picture was painted not so long ago in the space industry - access to orbit must be expensive. But Musk showed us otherwise.

edit
There are more factors than pure Moore's Law keeping progress at a fast pace now, like the arms race, US-China rivalry, etc.
So governments are trying to stimulate their high-tech sectors to support their expansion plans and the pace of progress as well.
Marketing departments try to fool us in every possible way, but we should be aware that what looks like a bargain today won't be one after a year or two. We should be more careful about how we spend our money, because future bargains are coming (even if mainstream media outlets are mostly silent about them) - like decoupled memory - so we should be a bit more patient.
Yeah, Nvidia are definitely amazing at marketing...same as Apple! They make people believe whatever they say!
I have a 4090 because I play at 4K, but when I see how it already struggles with next-gen games at 4K, I don't even want to know how badly it will age! Ray tracing and especially path tracing are making games too hard to run, and developers barely optimize their games anymore, so we have to use DLSS and Frame Generation to get decent performance! What a joke...
Sure, I enjoy being able to play Cyberpunk 2077, Alan Wake 2, Black Myth: Wukong, etc. with path tracing, but without DLSS and FG those games run at around 25 fps at native 4K lol.
So even if the 5090 were able to double performance vs the 4090, it would still be below 60 fps... meaning we will need to wait for the 6090 to do that, and by then games will be a lot more demanding... it's a never-ending story lol.
HankierosemanSomebody needs to make a card to run Samsung's LS57CG952... MONITOR @ 7680x2160, 240 Hz and DP 2.1. No?
8K@240Hz? Even DP 2.1 80 Gbps with DSC won't be enough... We'll probably have to wait for DP 3.0 for that lol.
But 8K@120Hz should be doable with a DP 2.1 80 Gbps port w/ DSC, since it can do 4K@240Hz, aka 8K@60Hz, without DSC. You'll have to wait for the RTX 5090 and its DP 2.1 port though.
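A rough way to see why DSC enters the picture: uncompressed video bandwidth is just pixels x refresh rate x bits per pixel. The sketch below assumes 10-bit color (30 bpp) and ignores blanking/protocol overhead; 77.37 Gbit/s is the commonly cited usable payload of an 80 Gbps DP 2.1 UHBR20 link:

```python
# Raw uncompressed video bandwidth in Gbit/s (no blanking/overhead)
def raw_gbit_per_s(width: int, height: int, hz: int, bpp: int = 30) -> float:
    return width * height * hz * bpp / 1e9

DP21_UHBR20_PAYLOAD = 77.37  # Gbit/s usable payload of an 80 Gbps DP 2.1 link

# Samsung's 7680x2160 panel at 240 Hz vs 120 Hz
print(raw_gbit_per_s(7680, 2160, 240))  # ~119.4 -> exceeds the link, DSC required
print(raw_gbit_per_s(7680, 2160, 120))  # ~59.7  -> within the raw payload
```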
igormpYou don't use toy hardware for such requirements tho. No one is trying to fine tune the actual large models in their basements, that's why the large H100 deployments are a thing.

3090s are still plenty in use (heck, I have 2 myself), and A100s are still widely used 4 years after their launch.

There's no decoupled solution that provides the same bandwidth that soldered memory does, which is of utmost importance for something like LLMs, which are really bandwidth-bound.


Mind providing any lead on such kind of offering? Current interconnects are the major bottlenecks in all clustered systems. Just saying "optical interface" doesn't mean much, since the current solutions are at least one order of magnitude behind our soldered interfaces.


Something like a 5090 would fit in this. It's considered an entry level accelerator for all purposes. The term "gpu-poor" is a good example of that.

I can see the point of your idea, but it is not something that will take place at all within the next 5 years, and may take 10 years or more to become feasible. One pretty clear example of that is PCIe, with the current version 5.0 being a major bottleneck still, version 6.0 only coming to market next year, and 7.0 having its spec finished, but still way behind the likes of NVLink (PCIe 7.0 bandwidth will be somewhere between NVLink 2.0~3.0, which were Volta/Ampere links).
I believe NVLink is the fastest in-node interconnect in use in the market at the moment, and even it is still a bottleneck compared to the actual GPU memory.
For professionals, yeah, NVLink is a blessing compared to PCI Express, but for gamers even PCIe 3.0 is not fully saturated yet... so PCIe 6.0 and 7.0 will be more useful for SSDs than GPUs.
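To put the interconnect-vs-memory point in numbers, here are commonly cited approximate peak bandwidths (rounded public figures; treat them as order-of-magnitude context, not exact specs):

```python
# Approximate peak bandwidths in GB/s, rounded; totals vary by source
approx_bandwidth_gbps = {
    "PCIe 3.0 x16": 16,
    "PCIe 4.0 x16": 32,
    "PCIe 5.0 x16": 64,
    "NVLink (A100, total)": 600,
    "NVLink (H100, total)": 900,
    "RTX 3090 GDDR6X (on-card)": 936,
    "H100 SXM HBM3 (on-card)": 3350,
}

# Even the fastest interconnect trails on-card memory bandwidth
for link, bw in sorted(approx_bandwidth_gbps.items(), key=lambda kv: kv[1]):
    print(f"{link:28s} ~{bw:>5d} GB/s")
```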
#181
Lycanwolfen
My guess: vacuum cleaner fans like the GeForce FX 5800's, with a 600 to 800 watt peak power. Enough to heat your entire home for the winter.
#182
arni-gx
Today, it's hard to believe that NVIDIA still wants to release the RTX 5080 with only 16 GB of VRAM. I think 16 GB is much more appropriate for an RTX 5070, not an RTX 5080; the RTX 5080 should come with at least 20 GB.
#183
vacsati
Seems like the 5090 will be a real monster. I don't remember the last time a top card came with a 512-bit memory bus.