Friday, March 25th 2022

NVIDIA GeForce RTX 4090/4080 to Feature up to 24 GB of GDDR6X Memory and 600 Watt Board Power

After the launch of the data center-oriented Hopper architecture, NVIDIA is slowly preparing to transition the consumer segment to new, gaming-focused designs codenamed Ada Lovelace. Thanks to Igor's Lab, we have some additional information about the upcoming lineup, including a sneak peek at a few features of the top-end GeForce RTX 4080 and RTX 4090 SKUs. According to Igor, NVIDIA is using the upcoming GeForce RTX 3090 Ti as a test run for the next-generation Ada Lovelace AD102 GPU: the company is testing the PCIe Gen5 power connector and wants to see how it fares with the biggest GA102 SKU, the GeForce RTX 3090 Ti.

Additionally, the AD102 GPU is said to be pin-compatible with GA102, meaning the two chips share the same package pinout, so board designs can largely carry over. The AD102 reference design board has 12 places for memory modules, resulting in up to 24 GB of GDDR6X memory. As many as 24 voltage converters surround the GPU, and NVIDIA will likely use the uP9512 controller, which can drive eight phases, resulting in three voltage converters per phase and ensuring proper power delivery. The total board power (TBP) is likely rated at up to 600 Watts, meaning that the GPU, memory, and power delivery combined dissipate 600 Watts of heat. Igor notes that board partners will bundle 12+4-pin (12VHPWR) to four 8-pin (legacy PCIe) adapters to ensure PSU compatibility.
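As a rough sanity check of those numbers, here is a minimal, purely illustrative Python sketch; the 150 W limit per legacy 8-pin connector is the PCIe specification figure, while the other values come straight from the rumor:

```python
# Back-of-the-envelope check of the rumored power-delivery figures (illustrative only).
TOTAL_BOARD_POWER_W = 600   # rumored total board power
RAIL_VOLTAGE_V = 12         # 12VHPWR / PCIe 8-pin rail voltage
PHASES = 8                  # phases driven by the uP9512 controller
POWER_STAGES = 24           # voltage converters counted on the board
PIN_8_LIMIT_W = 150         # spec limit of a legacy PCIe 8-pin connector

total_current_a = TOTAL_BOARD_POWER_W / RAIL_VOLTAGE_V     # 50 A drawn from the 12 V rail
stages_per_phase = POWER_STAGES // PHASES                  # 3 converters per phase
adapters_needed = -(-TOTAL_BOARD_POWER_W // PIN_8_LIMIT_W) # 4 (ceiling division)

print(f"12 V input current: {total_current_a:.0f} A")
print(f"Power stages per phase: {stages_per_phase}")
print(f"Legacy 8-pin connectors needed for {TOTAL_BOARD_POWER_W} W: {adapters_needed}")
```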
Source: Igor's Lab

107 Comments on NVIDIA GeForce RTX 4090/4080 to Feature up to 24 GB of GDDR6X Memory and 600 Watt Board Power

#51
nguyen
Vayra86Yes because we export our industry to cheap labor countries. China...

You are missing quite a bit of context to justify the cognitive dissonance. Another glaring one is the idea that processing power somehow helps the climate? All it really serves is understanding it, while slowly ruining it.

Also, lower SKUs aren't all that efficient at all; the 104 also got its share of TDP bumps. It's a strange reading of facts you have here, buddy...
Doesn't your 180W GP104 get whooped by a 75W GA106 A2000?
Posted on Reply
#52
Vayra86
nguyenDoesn't your 180W GP104 get whooped by a 75W GA106 A2000?
A2000?

GA106 is a 170W SKU. Not 75 ;)
So really we just moved the old 104 TDP to a lower point in the stack. Again, you need to twist reality to get to favorable outcomes ;) you really just provided your own counterargument.

Last I checked, we were talking about the gaming stack...
Posted on Reply
#53
nguyen
Vayra86A2000?

GA106 is a 170W SKU. Not 75 ;)
So really we just moved the old 104 TDP to a lower point in the stack. Again, you need to twist reality to get to favorable outcomes ;) you really just provided your own counterargument.

Last I checked, we were talking about the gaming stack...
Oh, so you can't buy an A2000 to play games? That's just you closing yourself off from choices that aren't obvious to you, either through lack of intellect or ignorance.
I didn't twist anything; I just provided information that you clearly lack.
Posted on Reply
#54
Pumper
nguyenLatest rumors say the 4090 can be 2-2.5x faster than the 3090 at 4K; that would be insane for one generation leap.
Must be comparing native res 3090 vs 4090 running with DLSS in performance mode.
Posted on Reply
#55
nguyen
PumperMust be comparing native res 3090 vs 4090 running with DLSS in performance mode.
Have you tried the new DLSS v2.3.9 (download from TPU)? It looks amazing, much better than any other v2.3 release (sharper, with better anti-ghosting).
Posted on Reply
#56
Dux
These power consumption figures are getting out of hand. If this trend continues, soon no high-end discrete graphics card will be able to run without water cooling.
Posted on Reply
#57
InVasMani
600W SLI compatible just in case you don't own a microwave.
Posted on Reply
#58
Turmania
Nice to see new tech releasing soon, but I just cannot justify these consumption rates. So the 3060 Ti successor will probably be around 300W? Makes no sense...
Posted on Reply
#59
Aquinus
Resident Wat-man
nguyenOh so you can't buy an A2000 to play game?
Uhhh, the A2000 is a workstation GPU. Anybody buying that with the primary intent to play games needs to have their head examined.
Posted on Reply
#60
nguyen
AquinusUhhh, The A2000 is a workstation GPU. Anybody buying that with the primary intent to play games needs to have their head examined.
You can read in the A2000 review threads that there are TPU members who want to use the A2000 as a mini-ITX gaming GPU.
Posted on Reply
#61
Aquinus
Resident Wat-man
nguyenYou can read in the A2000 review threads that there are TPU members who want to use A2000 as a mini ITX gaming GPU
They too should have their heads examined, because you're paying a premium for that workstation GPU. :laugh: I mean, if they want to blow their money on something like that, it's their decision; however, it's the wrong tool for the job, and that's one hell of a price premium just to get a particular form factor.
Posted on Reply
#62
Assimilator
CHILDREN.

Do none of you understand how any of this works?

The reason that CPUs and GPUs are drawing more and more energy for apparently less performance is a simple function of physics.

Until a few years ago, a full node shrink generally meant that linear feature sizes would halve (180nm to 90nm to 45nm / 28nm to 14nm to 7nm), which implied roughly four times the transistor density, so the same chip would occupy only about a quarter of its previous area after a shrink. That gave you a fuckton of room to add more transistors for more functionality and performance.

But that's no longer possible because node sizes aren't halving anymore, they're dropping by a nanometer or two at a time because we're reaching the literal physical limits of what silicon can do (end of Moore's law). So die area isn't as easily available, hence die sizes have to increase.

Then there's the unavoidable consequence of smaller and smaller nodes - less physical space between the individual transistors. Which leads to electromigration and crucially, current leakage. Which means that not only do you need more energy input to overcome that leakage, you need to dedicate more of your die to current-monitoring and -controlling transistors. Again, that means you have less die area for transistors to increase functionality and performance.

And because consumers don't care about any of the above and demand more functionality and performance generation-on-generation, the problem is going to continue to get worse until the semiconductor industry transitions to a successor to silicon.
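To put rough numbers on that scaling argument, here is a minimal Python sketch under the idealized assumption that linear features scale with the node name (roughly true for older nodes, optimistic for modern ones); it is for intuition only, not a claim about any real process:

```python
# Idealized area scaling after a node shrink (illustration only).
def area_ratio(old_node_nm: float, new_node_nm: float) -> float:
    """Fraction of its old area a fixed design occupies after the shrink,
    assuming features shrink linearly with the node name."""
    return (new_node_nm / old_node_nm) ** 2

# Classic full-node halvings: the same chip drops to a quarter of the area,
# i.e. transistor density roughly quadruples.
print(area_ratio(180, 90))   # 0.25
print(area_ratio(28, 14))    # 0.25

# Modern "a nanometer or two at a time" steps buy back far less area.
print(area_ratio(7, 5))      # ~0.51
print(area_ratio(5, 4))      # 0.64
```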
nguyenAda is just a die shrink of Ampere
No it's not.
Posted on Reply
#63
Vayra86
AnarchoPrimitivRDNA1 and 2 have been power efficient, RDNA2 is more efficient than Nvidia's 3000 series, so what are you referring to exactly?


Why do so many people have it out for MLID on here? His leaks have been more correct than not
He has a history, and he's a total nobody. Take a look at 'Forum Cop' ;) it's a no-lifing troll, nothing more, like a vast portion of the currently popular internet that makes a living out of fishing for subs and clicks.

Plus, anyone with half a brain can put historical specs side by side and take a wild guess at future core counts. It's not interesting, and we shouldn't care so much about it. 'First!' Well yay, want a cookie? None of the info really matters until stuff is on shelves and seriously reviewed anyway.
AssimilatorCHILDREN.

Do none of you understand how any of this works?

The reason that CPUs and GPUs are drawing more and more energy for apparently less performance is a simple function of physics.

Until a few years ago, a full node shrink generally meant that linear feature sizes would halve (180nm to 90nm to 45nm / 28nm to 14nm to 7nm), which implied roughly four times the transistor density, so the same chip would occupy only about a quarter of its previous area after a shrink. That gave you a fuckton of room to add more transistors for more functionality and performance.

But that's no longer possible because node sizes aren't halving anymore, they're dropping by a nanometer or two at a time because we're reaching the literal physical limits of what silicon can do (end of Moore's law). So die area isn't as easily available, hence die sizes have to increase.

Then there's the unavoidable consequence of smaller and smaller nodes - less physical space between the individual transistors. Which leads to electromigration and crucially, current leakage. Which means that not only do you need more energy input to overcome that leakage, you need to dedicate more of your die to current-monitoring and -controlling transistors. Again, that means you have less die area for transistors to increase functionality and performance.

And because consumers don't care about any of the above and demand more functionality and performance generation-on-generation, the problem is going to continue to get worse until the semiconductor industry transitions to a successor to silicon.


No it's not.
There are many ways to seek efficiency, and the chips we are getting now are a mix of bad things. GPUs are brute-forcing expensive ray tracing because they eat anything without RT alive... so what we have there is just a saturated market searching for new money makers, in a day and age of diminishing returns and a major climate problem... plus a market where chips are scarce.

Fundamentally, that is a display of human greed at the expense of a lot of sense that unfortunately is not common.

Adults look at the horizon and see a problem; I'd say it's in fact the children who are still yearning for the good old progress we've always had. 'Must have new toys' for... what exactly? Progress in the eternal shape of more, more, more simply cannot last forever.
nguyenYou can read in the A2000 review threads that there are TPU members who want to use A2000 as a mini ITX gaming GPU
You must be joking. Power to them, I guess? The fact remains it's a workstation GPU, it's not even a full 106, and it's part of a stack that isn't the subject of this topic. I'm not even considering price lol...

Twisting reality to make your point. The 106 for gaming is a 170W GPU that you falsely present as a 70W one, to say it beats a 1080 GP104 at 180W. Glad we have that sorted out. The reality is that the 106 now uses the same power as a 2016 104. And the current 104 is in the old 102 TDP range.
Posted on Reply
#64
Solid State Soul ( SSS )
DuxCroThese power consumptions are getting out of hand. Soon no high end discrete graphics card will be able to run without water cooling if this trend continues.
Go Go Gadget quadruple slot coolers SAAAAAAAG
Posted on Reply
#65
Athlonite
neatfeatguyI'm curious as to how much of this will hold true and, if it does: what kind of heat it generates, how it's properly cooled, and finally what kind of performance it gives... oh, and also what kind of outlandish price tag they slap on it.
It'll use a 360 mm rad water cooler and probably cost around $5K USD.
Posted on Reply
#66
Legacy-ZA
nguyenHave you tried the new DLSS v2.3.9 (download from TPU)? it looks amazing, much better than any other v2.3 (sharper and better anti-ghosting)
I am still waiting for that nice guru who will make a utility for us, so we can swap between DLSS versions with ease for the games that support them. :)
Posted on Reply
#67
mechtech
chrcolukDoes America have the same energy crisis as the UK? A guy who was mining stopped when he had a £500/month electric bill (~$660).

If I ran the GPU in the UK without an undervolt and with no FPS cap, I think I would pay more in electricity than the cost of the card in the first year. :)
USA? I don't live there, but from the people I know who do, it varies state to state, just like Canada varies province to province. I wouldn't say there are any shortages or a crisis, but electricity isn't free lol. In Ontario, after 'shipping', fees, taxes, etc., I would guess we are around 20-25¢/kWh on average.
Base electricity is tiered by time of use:
www.hydroone.com/rates-and-billing/rates-and-charges/electricity-pricing-and-costs#TOU

Most people heat with natural gas since electricity would be 6x-10x the cost, while Manitoba typically heats with electricity because it's cheaper than natural gas there.
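For a rough sense of what a 600 W card would add to a bill at those rates, a small illustrative calculation; the 600 W figure is the rumored TBP from the article, the 25¢/kWh rate is the upper end quoted above, and the hours per day are an assumption:

```python
# Rough monthly running cost of a 600 W card (illustrative assumptions only).
board_power_kw = 0.600    # rumored total board power from the article
rate_per_kwh = 0.25       # upper end of the ~20-25 cent/kWh figure quoted above
hours_per_day = 4         # assumed daily gaming time at full load

monthly_kwh = board_power_kw * hours_per_day * 30
print(f"Gaming 4 h/day: {monthly_kwh:.0f} kWh -> ${monthly_kwh * rate_per_kwh:.2f}/month")  # 72 kWh -> $18.00

# Running flat out 24/7 (e.g. mining) is a very different story.
mining_kwh = board_power_kw * 24 * 30
print(f"Mining 24/7:    {mining_kwh:.0f} kWh -> ${mining_kwh * rate_per_kwh:.2f}/month")    # 432 kWh -> $108.00
```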
Posted on Reply
#68
sillyconjunkie
(24) 1GB chips... part of the power consumption figure.

Twenty-four

What we need is more cooking videos. I'm all-in on this one.

Seriously though, when are they going to stop that nonsense? I surmise a lot of down-the-road card failures will be memory-related.
Posted on Reply
#69
looniam
mechtechUSA? I don't live there, but from the people I know who do, it varies state to state, just like Canada varies province to province. I wouldn't say there are any shortages or a crisis, but electricity isn't free lol. In Ontario, after 'shipping', fees, taxes, etc., I would guess we are around 20-25¢/kWh on average.
Base electricity is tiered by time of use:
www.hydroone.com/rates-and-billing/rates-and-charges/electricity-pricing-and-costs#TOU

Most people heat with natural gas since electricity would be 6x-10x the cost, while Manitoba typically heats with electricity because it's cheaper than natural gas there.
It can and does change from city/county to city/county, depending on how deregulated the state is (deregulation started in 1999 to curb the natural monopolies of public utilities).

Seeing 25¢ per kilowatt-hour up north... wow. It's under 8¢ on this side of the lake.

Granted, I still pay Toledo Edison ~$60 a month for distribution/equipment fees.
Posted on Reply
#70
robot zombie
AssimilatorCHILDREN.

Do none of you understand how any of this works?

The reason that CPUs and GPUs are drawing more and more energy for apparently less performance is a simple function of physics.

Until a few years ago, a full node shrink generally meant that linear feature sizes would halve (180nm to 90nm to 45nm / 28nm to 14nm to 7nm), which implied roughly four times the transistor density, so the same chip would occupy only about a quarter of its previous area after a shrink. That gave you a fuckton of room to add more transistors for more functionality and performance.

But that's no longer possible because node sizes aren't halving anymore, they're dropping by a nanometer or two at a time because we're reaching the literal physical limits of what silicon can do (end of Moore's law). So die area isn't as easily available, hence die sizes have to increase.

Then there's the unavoidable consequence of smaller and smaller nodes - less physical space between the individual transistors. Which leads to electromigration and crucially, current leakage. Which means that not only do you need more energy input to overcome that leakage, you need to dedicate more of your die to current-monitoring and -controlling transistors. Again, that means you have less die area for transistors to increase functionality and performance.

And because consumers don't care about any of the above and demand more functionality and performance generation-on-generation, the problem is going to continue to get worse until the semiconductor industry transitions to a successor to silicon.


No it's not.
Hmmm... I'm with you on a lot of this. I've been quietly wondering where the train is going for years. Short of major breakthroughs in our understanding of applied physics and materials, the curve can only keep bending tighter.

Maybe we shouldn't be buying GPUs every year or two to begin with, though. From a consumer standpoint, I personally don't mind if there isn't something 'latest and greatest' constantly. Can't buy it all anyway. But I do wonder what happens to all of these industries dependent on excruciatingly fast product cycles with just slightly better stuff in them each time. It's always been kind of nuts to me how many microchips we churn out just to toss after a few years. But I never saw how releasing products this way could be sustainable. It's a lot of waste, and people spend a lot of their finite financial resources on it. It doesn't seem necessary to reap the benefits of technology. To me, that's more marketing that makes people feel like they always need more, and on the other end are games promising more and more, utilizing the extra headroom. Pretty much every year, there will be some new upgrade to tempt your wallet.

That's the thing... the pace of actual performance advancements slowed, right? But the products didn't. They came out at the same rate as always, with smaller gains, and prices that really only got higher year after year, like most other things. The more logical and (I think) fairer way is to hold a higher standard for what's considered a real gain and develop products for as long as that takes. Less waste, better products, less oversaturation in the markets and product stacks... possibly better delineation between them too. The thing is, it's impossible under the current economic model. No tech business could survive as it does today if it was waiting until it had serious gains for each new product. Nobody is footing the bill on added operational costs. We are talking about entire industries that are almost predicated on breakneck advancement... that can plan for a decade's worth of new shit, a whole roadmap of relatively small incremental improvements. What happens when we can no longer do that with the knowledge and tools that we have? What does stagnating development in silicon transistor tech look like when it reaches its final stage? People have their pet materials, but those are more challenging to work with than silicon. Those challenges could reap huge dividends in the end, but they could also just ensure that it's all ridiculously expensive for a long time. Just getting one of them to replace silicon will be expensive for everyone.

Something kind of has to give there though. Somewhere, somehow.

I kind of wonder how many things about how we operate will change in my life. I don't really see technology, or especially stuff in the realm of consumer technology ever being like it was before.
Posted on Reply
#71
Assimilator
Vayra86You must be joking. Power to them, I guess? The fact remains its a workstation GPU, its not even a full 106, and its part of a stack that isnt subject of this topic. Im not even considering price lol...
It fills an extremely specific niche: that of people who want to build really small mini-ITX systems. We're talking so mini that they don't even have an extra 150W available from their PSU for a PCIe 8-pin connector, which is entirely possible with picoPSUs.

I certainly understand the desire to have a decently powerful GPU that doesn't require an external power connector, and I also understand why so many (including myself) are fascinated with the A2000 - it demonstrates that the efficiency that Maxwell brought hasn't gone anywhere, it's just that because everything has become about performance and beating the competition (and I blame consumers for this as much as anyone), the manufacturers are clocking their silicon to the limits to try to get any win. And of course that's a zero-sum game because as soon as one manufacturer does it, all of them will do it.
Posted on Reply
#72
Minus Infinity
nguyenLatest rumors say the 4090 can be 2-2.5x faster than the 3090 at 4K; that would be insane for one generation leap.
This is why the price will be much higher, but you could get a 4070 that will still beat a 3090 at 4K in pure rasterisation and easily beat it for RT. It's said the 4060 Ti will match the 3090 for RT.

People will need to drop down a tier this gen if they don't want sticker shock, but you'll still get a lot stronger performance.
Posted on Reply
#73
InVasMani
You won't be able to get them either way. Dropping down a tier to offset the extreme price shock and power consumption rise is pretty solid advice, though. If they just introduced a replacement for the RTX 3050, that would do the world a huge favor. The re-released RTX 2060 is a joke with its bus width for 12GB of VRAM, where the core will sh*t a brick before it ever makes use of that much VRAM. You might as well call it a GTX 970, because the core isn't going to cope well with 12GB VRAM usage.

In summary, it's more expensive to buy if you aren't using it for mining, which is what NVIDIA intends it for. It's basically within the margin of error of the 6GB version at or below 1440p (within +/- 5%), which is pathetic really in context. Also, no SLI, so there is no salvaging how bad it is for gamers, plus they clamped down on BIOS modding as well; it's not aimed at gaming at all, except at uninformed gamers. The GTX 960 4GB had more upside: you could SLI it and could BIOS-mod the VRAM to run faster. This card is just a joke outright.
Posted on Reply
#74
GamerGuy
oobymach600watts = 4x 8-pin connectors from a PSU, so you probably need a PSU upgrade just to run one.
Bah, I have an Enermax MAX REVO 1500W PSU that should be able to handle this, though I ain't looking at any RTX 4000 series cards. I'd be looking at an RX 7000 series card; we'll see the difference in performance and price between the RX 7800 XT and the RX 7900 XT.
Posted on Reply
#75
nguyen
Legacy-ZAI am still waiting for that nice guru who will make a utility for us, so we can swap between DLSS versions with ease for the games that support them. :)
I'm using DLSS Swapper, which does exactly what you want ("swap between DLSS versions for the games that support them with ease"), although it only works with Steam.
Minus InfinityThis is why the price will be much higher, but you could get a 4070 that will still beat a 3090 at 4K in pure rasterisation and easily beat it for RT. It's said the 4060 Ti will match the 3090 for RT.

People will need to drop down a tier this gen if they don't want sticker shock, but you'll still get a lot stronger performance.
I guess haters like to find things to complain about; the 4060 Ti could very well beat the 3090 at 4K while using <250W, offering superior efficiency. But hey, AD102 using 70% more power than GA102 while offering 100-150% more FPS means that the whole uarch is bad, according to some :rolleyes:.

Meanwhile, I'm just chilling with my 3090 using <300W at 4K with RT and DLSS Balanced in new AAA games, but hey, DLSS is bad according to some too.
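For what it's worth, taking those rumored numbers at face value (they are rumors, not measurements), the implied perf-per-watt change works out like this in a quick illustrative calculation:

```python
# Perf-per-watt implied by the rumored figures above (rumors, not measurements).
ad102_power = 1.70                 # "70% more power" than GA102 (GA102 normalized to 1.0)
for fps_gain in (2.00, 2.50):      # "100-150% more FPS"
    efficiency_gain = fps_gain / ad102_power - 1
    print(f"{fps_gain:.1f}x the FPS at 1.7x the power -> {efficiency_gain:+.0%} perf/W")
# 2.0x -> +18% perf/W, 2.5x -> +47% perf/W
```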
Posted on Reply