Thursday, September 5th 2024
NVIDIA GeForce RTX 5090 and RTX 5080 Reach Final Stages This Month, Chinese "D" Variant Arrives for Both SKUs
NVIDIA is on the brink of finalizing its next-generation "Blackwell" graphics cards, the GeForce RTX 5090 and RTX 5080. Sources cited by BenchLife indicate that NVIDIA is targeting September to finalize the design specifications of both models. This timeline hints at a possible unveiling at CES 2025, with a market release shortly after. The RTX 5090 is rumored to carry a staggering 550 W TGP, a roughly 22% increase over the 450 W of its predecessor, the RTX 4090. Meanwhile, the RTX 5080 is expected to draw 350 W, a more modest 9.3% bump over the 320 W of the current RTX 4080. Interestingly, NVIDIA appears to be developing "D" variants of both cards, likely tailored for the Chinese market to comply with US export regulations.
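For context, the quoted percentages follow directly from the current cards' official TGPs (450 W for the RTX 4090, 320 W for the RTX 4080); a quick sanity check of the rumored figures:

```python
def pct_increase(old_w: float, new_w: float) -> float:
    """Percentage increase going from old_w to new_w."""
    return (new_w - old_w) / old_w * 100

# Current official TGPs vs. rumored next-gen TGPs
print(f"RTX 5090: {pct_increase(450, 550):.1f}% over the RTX 4090's 450 W")
print(f"RTX 5080: {pct_increase(320, 350):.1f}% over the RTX 4080's 320 W")
```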
Regarding raw power, the RTX 5090 is speculated to feature 24,576 CUDA cores paired with GDDR7 memory on a 512-bit bus. The RTX 5080, while less mighty, is still expected to pack a punch with 10,752 CUDA cores and GDDR7 on a 256-bit bus. As NVIDIA prepares to launch these powerhouses, rumors suggest the RTX 4090D may be discontinued by December 2024, paving the way for its successor. We are curious to see how power consumption is handled and whether these cards remain efficient within the higher power envelope. Some rumors indicate that the RTX 5090 could reach 600 W at its peak, and the RTX 5080 400 W; however, that is just a rumor for now. As always, until NVIDIA makes an official announcement, these details should be taken with a grain of salt.
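Since GDDR7 per-pin speeds have not been confirmed for either card, peak memory bandwidth can only be sketched. Assuming, purely for illustration, a 28 Gbps data rate (an assumption, not a spec), the rumored bus widths would work out as follows:

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

ASSUMED_GDDR7_RATE = 28.0  # Gbps per pin -- illustrative assumption only

print(peak_bandwidth_gbs(512, ASSUMED_GDDR7_RATE))  # RTX 5090 (512-bit): 1792.0 GB/s
print(peak_bandwidth_gbs(256, ASSUMED_GDDR7_RATE))  # RTX 5080 (256-bit): 896.0 GB/s
```

A faster per-pin rate would scale both figures linearly; the point is simply that a 512-bit bus doubles bandwidth over a 256-bit one at the same memory speed.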
Sources:
BenchLife, via Wccftech
88 Comments
I run my 4090 power capped but I'm under no illusion that it's something most customers will do.
I doubt the 5090 will be much different, maybe 450 W.
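For anyone curious, on Linux a cap like that can be set with nvidia-smi; a sketch (the allowed wattage range depends on the card's VBIOS, and the setting requires root):

```shell
# Show the supported power-limit range for GPU 0
nvidia-smi -i 0 -q -d POWER

# Cap board power to 450 W (does not persist across reboots)
sudo nvidia-smi -i 0 -pl 450
```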
The RTX 5090 will be able to reach those 120 FPS or 144 FPS targets even more easily than the RTX 4090, so it will be a beast in terms of both performance and efficiency. The theoretical 600 W peak power consumption, already present on some RTX 4090 custom designs (not mine... the MSI Suprim X has a 520 W limit), is of very little practical relevance.
Approx 50% faster than the (more expensive) 3090 Ti.
That's while using 100 W less than the 3090 Ti BTW.
I don't expect quite as big a jump from the 4090 to the 5090, since it's not getting a major node improvement, but there's still a lot to gain from architecture. I also get tired of the "600 W" talk when stock gaming draw is closer to half that, and of the 'minuscule differences you can't notice' criticism of halo products.
The 4090-4080 gap is a different story: the 4080 Super uses the full AD103 die, for example, but the 4090 isn't even close to a full AD102.
But you're right about locking max frame rate to max Hz of your monitor, makes for much lower input lag anyway.
In any case, I can concede that the newer cards are going to be more 'efficient' in terms of joules per unit of work, but per the argument above, it shouldn't be the consumer's responsibility to closely manage what actually affects their living expenses and indoor comfort: raw power draw/heat output. The further GPU manufacturers push the power ceiling on their products in this slow, creeping fashion, the worse it gets. That's still a bad thing.
In an ideal world a new arch + new node would mean a significant (if merely 'generational') bump in performance on the same power budget at stock card-for-card, not a HUGE jump in performance for a slightly less huge jump in power draw. And that would be the selling point. This isn't datacenter, with industrial chillers and 240V/100A wall outlets. This is a small indoor room on a tiny sliver of a 3-ton unit's capacity and a 120V outlet that pops if you edge above 15A total per room. The ceiling is far lower.
I remember when people would make fun of the 480 and call it a space heater/George Foreman grill. That thing drew 250 W max. The 4070 Super sits 30 W below that on tech a decade newer.
I'm very much for UV, but that's because I find it interesting how low you can push the silicon before it starts to drag its feet and because I live in a place that is very hot for half the year. I shouldn't be hearing complaints from my Michigander friend about how his 5080 makes him sweat in March. I shouldn't be telling him 'kick rocks, make your computer slower'.
So would people actually buy a 5090 that delivered similar performance to the 4090 at, say, half the power? No; IMO they want to see headlines reading "40% stronger" and will ignore power consumption. AMD is getting hammered for basically doing this with Zen 5.
I've been full AMD since Polaris, but I may have to switch back.
If I get into the new Warhammer or something similar, I feel it's going to become more evident.