Friday, March 25th 2022
NVIDIA GeForce RTX 4090/4080 to Feature up to 24 GB of GDDR6X Memory and 600 Watt Board Power
After the launch of the data center-oriented Hopper architecture, NVIDIA is slowly preparing to transition the consumer segment to new, gaming-focused designs codenamed Ada Lovelace. Thanks to the folks over at Igor's Lab, we have some additional information about the upcoming lineup, including a sneak peek at a few features of the top-end GeForce RTX 4080 and RTX 4090 GPU SKUs. For starters, Igor claims that NVIDIA is using the upcoming GeForce RTX 3090 Ti as a test run for the next-generation Ada Lovelace AD102 GPU: the company is testing the PCIe Gen5 power connector and wants to see how it fares with the biggest GA102 SKU - the GeForce RTX 3090 Ti.
Additionally, we find that the AD102 GPU is supposed to be pin-compatible with GA102, meaning the pinout of AD102 matches what we see on GA102 today. The AD102 reference design board has 12 places for memory modules, resulting in up to 24 GB of GDDR6X memory. As many as 24 voltage converters surround the GPU, and NVIDIA will likely use the uP9512 controller, which can drive eight phases, resulting in three voltage converters per phase and ensuring proper power delivery. The total board power (TBP) is likely rated at up to 600 Watts, meaning that the GPU, memory, and power delivery combined output 600 Watts of heat. Igor notes that board partners will bundle 12+4-pin (12VHPWR) to four 8-pin (legacy PCIe) adapters to ensure PSU compatibility.
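The figures in the report are internally consistent, which a quick sanity check shows. This is a sketch only: the module count, converter count, and phase count come from the article, while the 2 GB-per-module capacity and the 150 W per 8-pin connector rating are standard-spec assumptions, not claims from Igor's Lab.

```python
# Sanity-check the reported AD102 board figures.

MEMORY_MODULES = 12        # GDDR6X module placements on the reference board (per article)
GB_PER_MODULE = 2          # assumes 16 Gbit (2 GB) GDDR6X modules
total_vram_gb = MEMORY_MODULES * GB_PER_MODULE   # up to 24 GB

CONVERTERS = 24            # voltage converters around the GPU (per article)
PHASES = 8                 # phases the uP9512 controller can drive (per article)
converters_per_phase = CONVERTERS // PHASES      # 3 per phase

# A legacy PCIe 8-pin connector is rated for 150 W, so a 12VHPWR-to-four-8-pin
# adapter can carry exactly the 600 W ceiling of the new connector.
EIGHT_PIN_RATING_W = 150
adapter_capacity_w = 4 * EIGHT_PIN_RATING_W      # 600 W

print(total_vram_gb, converters_per_phase, adapter_capacity_w)  # 24 3 600
```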
Source:
Igor's Lab
Not a huge fan of the high end getting close to 600 W, though my 3080 Ti at 400 W is already semi-obnoxious to keep the room it's in cool as it is. Guessing they will actually end up being 450-500 W cards, just like custom 3090s.
If you're wanting a 4080/4090 to get the best performance possible, the cost to run it likely doesn't matter to you.
People in this thread are griping about how much power it sucks down because in their heads they know it's going to cost more than they'd care to pay. If it was priced sanely, the conversation would be more along the lines of the usual "efficiency, who tf cares about efficiency? If you can afford the card you can afford to power an extra light bulb, or it's only a few cents on the electric bill". And it was even staff and moderators on this site saying things like that.
Hopefully AMD can do it with 400 W so I can switch back.
damn, we really need a private nuclear silo right in front of our house.
This is also going to be something stupid like a 600 W-capable power connector, not that the GPUs use all of it.
Btw, it is just a rumor. I'm gonna make popcorn, sit tight, and see how many dead bodies fall out of the closet :)
However, why would one make it 600-watt capable if not to at least kinda approach it?
It is getting too high, and I said it before: it's barely technological progress if, yes, we can do more, but it also costs more energy to achieve.
Sure, the performance per watt might go up... but clearly not enough if the wattage has to scale up like this constantly.
Now that the gains from node shrinks are diminishing, suddenly efficiency is out the window and we ride the marketing wave to convince ourselves we're progressing, even though there was never a day and age where hardware could last longer than today, simply because we're down to baby steps at best.
The reality is, we're not progressing, we're regressing when GPUs need substantially more power to produce playable frames in gaming. What's underneath that is really not quite relevant. The big picture is that in a world where efficiency is key to survival (literally), we're buying into products that counter that idea. It won't last, it can't last, and the deeper we go down that rabbit hole, the bigger the hurdle to get out of it again. RT is diving straight into that hole. DLSS/FSR are clawing us back out a little bit, but the net result is still that we need more power to drive content.
It's not progress. What is progress is the fact that GPUs keep getting faster, sure. Another bit of progress is that you can still get or create an efficient GPU; I can understand that notion too. But the commercial reality is none of those things, as @Assimilator correctly states, because quite simply commerce is still all about bigger, faster, stronger, fuck all consequences, as long as we keep feeding the top 3%. The question is how far your Lemming mind wants to go before diving off the cliff.
The bottom line and fundamental question here is: are you part of the 3%? If you're not, you're an idiot for feeding them further. Vote with your wallet, or die - or present a horrible future to your children.
And that principle stands in quite a few of our current commercial adventures. The responsibility to change is on us, no one else.
As for the topic, nVidia should focus on cutting down the gap between RT on and off instead of focusing on more FPS; for that matter, so should AMD with their RT implementation. Not that more FPS isn't a good thing: it's just that the gap really is that severe.