Thursday, May 9th 2024
NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W
According to Benchlife.info insiders, NVIDIA is reportedly testing designs with various Total Graphics Power (TGP) targets, ranging from 250 W to 600 W, for its upcoming GeForce RTX 50 series "Blackwell" graphics cards. The designs under test span a 250 W configuration aimed at mainstream users up to a 600 W configuration tailored for enthusiast-level performance. The 250 W cooling system is expected to prioritize compactness and power efficiency, making it an appealing choice for gamers seeking a balance between capability and energy conservation. This design could prove particularly attractive for those building small form-factor rigs, or for AIBs looking to offer smaller coolers. At the other end of the spectrum, the 600 W cooling solution covers the highest TGP of the stack, which may exist only for testing purposes. Other SKUs with different power configurations fall in between.
We previously witnessed NVIDIA testing a 900 W version of the Ada Lovelace AD102 GPU, which never saw the light of day, so we should take this testing phase with a grain of salt. Engineering silicon is often the first batch made to enable software and firmware development, while the final silicon is more efficient, optimized to use less power, and aligned with regular TGP structures. The current highest-end SKU, the GeForce RTX 4090, uses a 450 W TGP. So, treat this phase with some reservation as we wait for more information to come out.
Source: Benchlife.info
84 Comments on NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W
We all have to wait to see what Nvidia decides to productize for the RTX 50 series. But it most certainly won't be every single configuration they have tested.
Nice card render. So they will all have HBM?
The benefits of HBM are better harnessed in the datacenter so I assume almost all of the HBM parts will go into AI accelerators for enterprise customers. Typical consumer workloads like gaming won't exploit the full HBM performance envelope. I don't see that changing anytime soon. If a videogame is going to sell well, it needs to run decently on a wide variety of hardware including more modestly specced systems like mid-range notebook GPUs and consoles.
From a business standpoint, there is very little logic in putting HBM in some consumer cards and regular GDDR memory in others. The more sensible solution has been used before: put better specced GDDR in the high end cards and less expensive memory in the entry-level and mid-level cards.
A 250 W Blackwell card with 2x the performance of my 6750 XT would be compelling. I doubt it would come at anything resembling a reasonable price, given how far ahead of AMD they are.
Toasters, microwave ovens, hair dryers, and more. And you've probably had them for years, maybe even decades.
The device in question needs to be carefully designed and properly used. It's the same whether it's a graphics card or a stand mixer.
That said, 600 W is probably nearing the practical limit for a consumer-grade GPU. Combined with other PC components, display, peripherals, etc., that uses up a lot of what's available on a standard household circuit (15 A @ 120 V here in the USA). And it's not really wise to push the wiring and breaker of a standard household circuit to the limit for long periods of time.
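For a rough sense of the headroom, here's a back-of-the-envelope sketch. The 80% continuous-load derating is a common electrical guideline, and the non-GPU system draw and PSU efficiency figures are assumptions for illustration, not measured numbers:

```python
# Back-of-the-envelope check of a 600 W GPU against a 15 A / 120 V circuit.
CIRCUIT_VOLTS = 120          # standard US household circuit
CIRCUIT_AMPS = 15
DERATE = 0.80                # common continuous-load guideline

circuit_watts = CIRCUIT_VOLTS * CIRCUIT_AMPS   # 1800 W peak
continuous_watts = circuit_watts * DERATE      # 1440 W sustained budget

GPU_WATTS = 600              # rumored top TGP
OTHER_SYSTEM_WATTS = 300     # assumed: CPU, board, drives, fans
PSU_EFFICIENCY = 0.90        # assumed: roughly 80 Plus Gold at this load

wall_draw = (GPU_WATTS + OTHER_SYSTEM_WATTS) / PSU_EFFICIENCY
print(f"Estimated wall draw: {wall_draw:.0f} W of {continuous_watts:.0f} W budget")
# -> roughly 1000 W of a 1440 W budget, before the display and
#    peripherals sharing the same circuit are counted.
```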
Remember: these new cards will once again be built on a 5 nm-class process, just like the RTX 4000 series. It will be a slightly optimized process over "4N," but the gains from that optimization will be minimal. Any and all performance gains will have to come from the new Blackwell architecture and, if the rumors are true, from a wider 512-bit memory interface.
In fact, pure desperation to achieve performance gains is probably the sole reason the RTX 5090 will receive a 512-bit bus. If the RTX 5000 series were built on 3 nm, we would probably get the same 384-bit bus to save on cost and power requirements.
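The bandwidth math shows why a wider bus is such a blunt but effective lever. A quick sketch, using the RTX 4090's known 384-bit / 21 Gbps GDDR6X as the baseline and an assumed (not confirmed) 28 Gbps GDDR7 pin speed for the rumored part:

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, pin_gbps: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_bits / 8 * pin_gbps

rtx_4090 = bandwidth_gbs(384, 21.0)  # GDDR6X, known spec: ~1008 GB/s
rumored  = bandwidth_gbs(512, 28.0)  # assumed GDDR7 pin speed, not confirmed

print(f"RTX 4090: {rtx_4090:.0f} GB/s, rumored 512-bit part: {rumored:.0f} GB/s")
print(f"Uplift: {rumored / rtx_4090 - 1:.0%}")  # ~78% more bandwidth
```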
We can thank AI for all of that. H2/2024 and early 2025 will be quite a disappointment. It looks like both nVidia and AMD have dedicated all of their 3 nm capacity solely to AI/datacenter. I would even go so far as to bet that they have made a non-public arrangement behind the curtains about leaving their consumer stuff on 5 nm. Zen 5 will also be another 5 nm CPU, and RDNA 4 is also bound to be produced on 5 nm.
What we will be getting in the consumer space in the next few months and into next year will most likely be pretty damn incremental and iterative. In a bit of a twist of irony, the real next big thing could actually come from Intel in the form of Panther Lake in mid or H2/2025 (if Intel 18A is at least half as good as Pat is trying to make us believe).
My dude, their main competitor is supposedly not even trying high end this upcoming gen.
NVIDIA are competing with themselves, and I don't think they're going to find it difficult.
Reminder that most 4090 cards use around 450 W out of the box. People like to throw the "600 W" figure around but that's just the max power limit, and only for some cards.
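If you want to check this on your own card, here's a minimal sketch using the nvidia-ml-py package (the Python bindings for NVML); the device index 0 assumes a single-GPU system:

```python
# pip install nvidia-ml-py
# Shows that live power draw usually sits well below the advertised maximum.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000           # reported in mW
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000  # current cap
max_w = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)[1] / 1000

print(f"Drawing {draw_w:.0f} W, capped at {limit_w:.0f} W, absolute max {max_w:.0f} W")
pynvml.nvmlShutdown()
```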
Can we have active phase change heat pump loops, yet (again)?
I choose to run at this efficient speed, but guess what? If I run at 3.0 GHz it's even (a little) more efficient. Maybe you could run your CPU with Turbo off at all times to maximize your efficiency, but most people want the best performance out of the box. The rest of us weirdos can tune for efficiency afterwards.
And if Blackwell has better efficiency than Ada (it should), then bring on the 600 W GPUs. You can tune those for efficiency, and if you don't like the power draw, it sounds like Nvidia will have a 250 W option for you. Which you can tune as well!
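For what that tuning looks like in practice, here's a minimal sketch via the same nvidia-ml-py bindings (needs root/admin); the 70% target is just an illustrative choice, equivalent in spirit to `nvidia-smi -pl <watts>` on the CLI:

```python
# Cap the board power limit to trade a little performance for efficiency.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
target_mw = max(min_mw, int(max_mw * 0.70))  # e.g. cap a 600 W part near 420 W

pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
print(f"Power limit set to {target_mw / 1000:.0f} W "
      f"(allowed range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")
pynvml.nvmlShutdown()
```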
600 W, lol. There are PSUs with just 600 W available for the whole damn PC, and now that's just the GPU.
-48 VDC in, with 12 V converted on-card?
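That idea isn't as odd as it sounds; the appeal of higher-voltage delivery is plain Ohm's-law arithmetic, since resistive loss in the cabling scales with the square of the current:

```python
# Current needed to deliver 600 W at two supply voltages.
GPU_WATTS = 600

for volts in (12, 48):
    amps = GPU_WATTS / volts
    print(f"{GPU_WATTS} W at {volts} V -> {amps:.1f} A")
# 12 V -> 50.0 A (right at the 12VHPWR connector's 600 W rating)
# 48 V -> 12.5 A
```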
Just because 250 W is the lowest figure these people purported "seeing" today doesn't mean there aren't other chip designs that Nvidia simply hasn't bothered to benchmark yet.
As we know, Nvidia tends to work from the top of the stack downward, from their largest and most powerful GPUs to smaller ones with fewer transistors.
As I said, it's not going to be an easy task. The boost from the optimized 5 nm process will be minimal, the Blackwell architecture will only provide so much of a boost, and the 512-bit memory interface (if true) will contribute to a higher power envelope as well as a higher cost.
I would be very positively surprised if nVidia can manage to squeeze more than a +30% gain out of the RTX 5090 over the RTX 4090. Maybe they can in ray tracing by slashing rasterization performance (even more), but overall I believe they will have difficulty making the RTX 5090 an attractive upgrade for RTX 4090 owners. The 512-bit interface reeks of desperation to advertise at least some innovation (instead of real innovation like a 3 nm process).
As I said in my previous post, I'm convinced that both nVidia and AMD will be more or less half-assing their upcoming consumer generations in favor of AI. Can't really blame them, either. There are billions to be made from AI/datacenter, while consumer stuff is comparatively boring and limited.
They have long since moved their hardware and software top talent to AI. We consumers will have to take a backseat until the AI curve flattens.