Monday, September 5th 2022

NVIDIA GeForce RTX 4080 Comes in 12GB and 16GB Variants

NVIDIA's upcoming GeForce RTX 4080 "Ada," the successor to the RTX 3080 "Ampere," reportedly comes in two distinct variants based on memory size, memory bus width, and possibly even core configuration. MEGAsizeGPU reports that they have seen two reference designs for the RTX 4080, one with 12 GB of memory and a 10-layer PCB, and the other with 16 GB of memory and a 12-layer PCB. Increasing the number of PCB layers enables greater wiring density around the ASIC. At debut, NVIDIA's flagship product is expected to be the RTX 4090, with its 24 GB memory size and 14-layer PCB. Apparently, the 12 GB and 16 GB variants of the RTX 4080 feature vastly different PCB designs.

We've known from past attempts at memory-based variants, such as the GTX 1060 (3 GB vs. 6 GB), or the more recent RTX 3080 (10 GB vs. 12 GB), that NVIDIA turns to other levers to differentiate variants, such as core-configuration (numbers of available CUDA cores), and the same is highly likely with the RTX 4080. The RTX 4080 12 GB, RTX 4080 16 GB, and the RTX 4090, could be NVIDIA's answers to AMD's RDNA3-based successors of the RX 6800, RX 6800 XT, and RX 6950 XT, respectively.
Sources: Wccftech, MEGAsizeGPU (Twitter)

61 Comments on NVIDIA GeForce RTX 4080 Comes in 12GB and 16GB Variants

#26
Tigerfox
This rumor is technical bullshit.

To have 12GB, a card would have to have either a 192-bit or a 384-bit memory interface, since NV won't use unequal configurations anymore and half-sized VRAM modules (e.g. 1.5 GB) never became a thing. An RTX 4080 with only 192-bit would be crap, while for 384-bit it would have to use AD102.
I can only imagine that one of the two SKUs interpreted as a 4080 12GB here is either intended to be a 4070/Ti with AD104 capped at 192-bit, or a 4080 Ti with AD102.
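As a rough sanity check of that bus-width arithmetic, here is a minimal sketch (assuming only 1 GB and 2 GB GDDR6X modules on 32-bit channels, which is what actually ships; the function is purely illustrative):

```python
# Minimal sketch: which bus widths can deliver a given VRAM capacity,
# assuming each 32-bit channel carries one module of 1 GB or 2 GB
# (no half-sized modules such as 1.5 GB).
MODULE_DENSITIES_GB = (1, 2)
CHANNEL_WIDTH_BITS = 32

def possible_bus_widths(capacity_gb: int) -> list[int]:
    widths = set()
    for density in MODULE_DENSITIES_GB:
        if capacity_gb % density == 0:
            widths.add((capacity_gb // density) * CHANNEL_WIDTH_BITS)
    return sorted(widths)

print(possible_bus_widths(12))  # [192, 384]
print(possible_bus_widths(16))  # [256, 512]
print(possible_bus_widths(24))  # [384, 768] (the 4090 uses 384-bit)
```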
Posted on Reply
#27
trog100
if the new generation of cards hit the market at cheaper prices it will be for one simple reason.. nobody has the money to buy one.. :)

trog
Posted on Reply
#28
Pumper
Let me guess, 16GB variant will be 50% more expensive.
Posted on Reply
#29
Space Lynx
Astronaut
erocker: MSRP I see half that.
Nvidia will use inflation as an excuse. I fully expect a 4080 12gb to cost around $799 and the 16gb to cost $899
Posted on Reply
#30
ppn
When they did that in the past, the cards were named GTX 660 Ti and GTX 680, for example: one 192-bit and one 256-bit, based on the same die. Except they tricked the 660 Ti into still holding 2GB despite being 192-bit, with an asymmetric setup. I guess the idea is for the 4080 12GB to replace the 3080 Ti at $699.
Posted on Reply
#31
Tigerfox
They stayed away from asymmetric memory setups since the infamous GTX 970, and I don't expect them to resume this bad habit on the 4080 in particular. Plus, there would be a huge performance difference between 192-bit with 12GB and 256-bit with 16GB.
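To put a rough number on that gap, a quick back-of-the-envelope bandwidth comparison (the 21 Gbps GDDR6X data rate below is an assumption for illustration, not a confirmed spec):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

for width in (192, 256, 384):
    print(f"{width}-bit @ 21 Gbps: {bandwidth_gb_s(width, 21):.0f} GB/s")
# 192-bit: 504 GB/s, 256-bit: 672 GB/s, 384-bit: 1008 GB/s
```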
Posted on Reply
#32
Chomiq
birdie: Quick poll missing option: I'm not buying such expensive GPUs.
What the past 2 years have shown us is that people will still buy.
Posted on Reply
#33
Ruru
S.T.A.R.S.
I suppose that the 12GB model will have a narrower memory bus, unless they use mixed-density chips (like with the 550 Ti, 660 and 660 Ti). Or the 12GB model will just have a 192-bit bus.
Tigerfox: They stayed away from asymmetric memory setups since the infamous GTX 970, and I don't expect them to resume this bad habit on the 4080 in particular. Plus, there would be a huge performance difference between 192-bit with 12GB and 256-bit with 16GB.
The 970 didn't use mixed-density chips; it was the chip architecture that forced them to use that 224-bit + 32-bit bus.
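For anyone who wasn't around for it, the practical effect of that split can be sketched from the 970's published figures (4 GB of 7 Gbps GDDR5, partitioned as 3.5 GB + 0.5 GB):

```python
# The GTX 970's segmented memory: 3.5 GB behind a 224-bit path and
# 0.5 GB behind a 32-bit path, both populated with 7 Gbps GDDR5.
GDDR5_RATE_GBPS = 7.0
segments = {"fast 3.5 GB": 224, "slow 0.5 GB": 32}  # bus width in bits

for name, width_bits in segments.items():
    print(f"{name}: {width_bits / 8 * GDDR5_RATE_GBPS:.0f} GB/s")
# fast 3.5 GB: 196 GB/s, slow 0.5 GB: 28 GB/s
```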
Posted on Reply
#34
lexluthermiester
ZoneDymo: this seems like a mistake... 12GB for the future with RT does not seem sufficient
That depends on settings and resolution.
Posted on Reply
#35
ppn
The whole idea of RT was not to use any memory, because it's all made out of ray-tracing magic, just ray coordinates as opposed to heavy textures. Why the need for extra VRAM?
Posted on Reply
#36
Denver
64K: Realistically you will need to double that at least.
You need to multiply by 3, no joke.
Nvidia has every reason to release everything overpriced.

- Massive chips.
- More expensive process, at least 2X more expensive.
- Overstock of current generation GPUs.

4090 = U$ 1999
4080 = U$ 1499
Posted on Reply
#37
Vayra86
Tigerfox: This rumor is technical bullshit.

To have 12GB, a card would have to have either a 192-bit or a 384-bit memory interface, since NV won't use unequal configurations anymore and half-sized VRAM modules (e.g. 1.5 GB) never became a thing. An RTX 4080 with only 192-bit would be crap, while for 384-bit it would have to use AD102.
I can only imagine that one of the two SKUs interpreted as a 4080 12GB here is either intended to be a 4070/Ti with AD104 capped at 192-bit, or a 4080 Ti with AD102.
Weren't they going to use some magical cache now?
Posted on Reply
#38
Selaya
Vayra86: [ ... ]

Euhhh, that makes absolutely not a single drop of sense at all. 32GB [of RAM] and a 16GB GPU for gaming are also complete and utter nonsense right now. You can't get 'em full even if you try.

12GB GPU, sure, 16GB RAM, sure. That's where it's at right now and for this coming console gen, at best. 32 is only needed if you do more, and it's not gaming.
there's plenty of games (especially the moddable ones) where you'll easily exhaust even 32gib of memory. if i were to make a build today, i would put at least 32gib unless it's like, hardcore budget.
Posted on Reply
#39
evernessince
Denver: You need to multiply by 3, no joke.
Nvidia has every reason to release everything overpriced.

- Massive chips.
- More expensive process, at least 2X more expensive.
- Overstock of current generation GPUs.

4090 = U$ 1999
4080 = U$ 1499
People seem to point to the cost of a new node every time there is a shrink, but there are multiple prior examples where costs increased by a lot yet there was no large increase in GPU prices:

[chart: historical wafer price increases by process node]

Looking at the cost of a wafer at a single point in time is a very narrow examination of the total cost to build a GPU. For starters, the cost of a node isn't static; it drops over time. That price was given two years ago and it has almost certainly dropped by a good amount. Second, AMD and Nvidia are not paying market rate: they sign a wafer supply agreement with a pre-negotiated price.

On top of those factors, 5nm (according to TSMC) has higher yield than their 7nm node: www.anandtech.com/show/16028/better-yield-on-5nm-than-7nm-tsmc-update-on-defect-rates-for-n5

As evidenced by the chart above, 7nm was also an extremely expensive node jump, yet Nvidia saw a record 68% profit margin on those GPUs. Clearly there is much, much more to the cost of building GPUs than a single static price of a node from two years ago.
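To illustrate why the wafer price is only one input, here is a toy die-cost model (every number below is an assumption for illustration, not actual TSMC or NVIDIA pricing):

```python
import math

# Toy model: cost per good die = wafer price / (gross dies per wafer * yield).
# All inputs are illustrative assumptions, not real foundry figures.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    # Standard approximation that subtracts edge losses.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd: float, die_area_mm2: float, yield_rate: float) -> float:
    return wafer_price_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical 600 mm^2 GPU die: a ~78% pricier wafer paired with better yield
# raises per-die cost by roughly 44%, not 78%.
print(round(cost_per_good_die(9_000, 600, 0.65), 2))   # assumed "7nm-class" wafer
print(round(cost_per_good_die(16_000, 600, 0.80), 2))  # assumed "5nm-class" wafer
```

The same logic is why improving yields and pre-negotiated wafer supply agreements matter more than a headline per-wafer price.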
Posted on Reply
#40
ymdhis
evernessince: Those prices would not work with a competitive AMD and in the current market.
AMD can just raise prices, to make everyone see that they are serious contenders and not budget options any more. Like they did with the RX 5700.
Posted on Reply
#41
Denver
evernessince: People seem to point to the cost of a new node every time there is a shrink, but there are multiple prior examples where costs increased by a lot yet there was no large increase in GPU prices:

[chart: historical wafer price increases by process node]

Looking at the cost of a wafer at a single point in time is a very narrow examination of the total cost to build a GPU. For starters, the cost of a node isn't static; it drops over time. That price was given two years ago and it has almost certainly dropped by a good amount. Second, AMD and Nvidia are not paying market rate: they sign a wafer supply agreement with a pre-negotiated price.

On top of those factors, 5nm (according to TSMC) has higher yield than their 7nm node: www.anandtech.com/show/16028/better-yield-on-5nm-than-7nm-tsmc-update-on-defect-rates-for-n5

As evidenced by the chart above, 7nm was also an extremely expensive node jump, yet Nvidia saw a record 68% profit margin on those GPUs. Clearly there is much, much more to the cost of building GPUs than a single static price of a node from two years ago.
I don't think the current situation can be compared to any other time in the past. At this point TSMC is alone at the forefront, effectively the only option for high-end products, and all the volume is focused on TSMC; even Qualcomm has given up on manufacturing its high-end chips at Samsung due to the latter's low efficiency and yields. What stops TSMC from charging as much as they want?

If I remember correctly, the only 7nm chips that Nvidia has are the server-focused ones, which are sold for many times more than gaming GPUs. So no need to ask how Nvidia keeps the profit margin so high...
Posted on Reply
#42
Vayra86
Selaya: there's plenty of games (especially the moddable ones) where you'll easily exhaust even 32gib of memory. if i were to make a build today, i would put at least 32gib unless it's like, hardcore budget.
Plenty... a handful ;) And only if you go completely crazy on them. So yeah, sure, for that tiny 1% subset there is a market for over 16GB of RAM (in gaming).

The vast majority can make everything happen with far less. VRAM, sure, you could get 16GB today and fill it up. But I think with the new architectures, 12GB is fine, and it will also carry all older titles that can't use said architectural improvements - for those, the olde 8GB was just on the edge of risk, and it's also why those 11GB Pascal GPUs were in a fine top spot. They will munch through anything you can get right now, and will run into GPU core constraints if you push the latest titles hard. It's similar to the age of 4GB Maxwell, while the 980 Ti carried 6GB. It is almost exclusively the 6GB that carried it another few years over everything else that competed. I think we're at that point now, but for 12GB versus 8 or 10 - not 16. The reason is console developments and parity with the PC mainstream. We're still dealing with developers, market share, and specs targeted at those crowds. Current consoles have 16GB total, but they have to run an OS, and as a result both run in some crippled combo which incurs speed penalties. At best, either console has 13.5GB at a (much) lower bandwidth/speed. That's the reason a tiny handful of games can already exhibit small problems with 8 or 10.

16GB is perhaps only going to matter in the very top end, much like it does now: nice to have, at best, in a niche. Newer titles that lean heavy on VRAM will also be leaning on the new architectures. We've also seen that 10GB 3080s are mostly fine, but occasionally/rarely not. 12GB should carry another few years at least, even at the top end.

That said... I think we're very close to 'the jump to 32GB RAM' as a mainstream gaming option for PCs, but that'll really only be useful in more than a few games by the time the next console gen is out, and DDR5 is priced to sanity at decent speeds.

The TL;DR / my point: there is no real 'sense' in chasing way ahead of the mainstream 'high end' gaming spec. Games won't utilize it properly, sometimes won't even support it, or you'll just find a crapload of horsepower wasted on more heat and noise. It's just getting the latest and greatest for giggles. Let's call it what it is - the same type of sense that lives in gamer minds buying that 3090 (Ti) over that 3080 at an immense premium.
Posted on Reply
#43
Sisyphus
lexluthermiester: There will only be a few games that take advantage of and benefit from the 16GB of VRAM, but there will be benefit!
16 GB is a big plus if used in a productive environment. Spare VRAM is always used as a fast cache, speeding up most operations. For $700, I would buy it to upgrade from my 2070.
ZoneDymo: this seems like a mistake... 12GB for the future with RT does not seem sufficient
GPUs are bought for actual needs. It's never a good deal to buy a more advanced card to be safe for 3-4 years. Nobody knows what's coming, and GPUs can fall rapidly in price. If it fits your actual needs, it's fine.
Posted on Reply
#44
evernessince
ymdhis: AMD can just raise prices, to make everyone see that they are serious contenders and not budget options any more. Like they did with the RX 5700.
AMD's prices are already comparable to Nvidia's save for the ultra-high end. There's not much room to raise prices except there and it's questionable whether consumers are even fine with Nvidia's pricing.
Denver: I don't think the current situation can be compared to any other time in the past. At this point TSMC is alone at the forefront, effectively the only option for high-end products, and all the volume is focused on TSMC; even Qualcomm has given up on manufacturing its high-end chips at Samsung due to the latter's low efficiency and yields. What stops TSMC from charging as much as they want?

If I remember correctly, the only 7nm chips that Nvidia has are the server-focused ones, which are sold for many times more than gaming GPUs. So no need to ask how Nvidia keeps the profit margin so high...
The fact that we already know the market pricing of 5nm says that no, in fact TSMC is not just charging whatever they want. As the chart I linked demonstrated, the price increase of wafers to 5nm isn't the first time the market has seen such a jump.

Clearly AMD doesn't have any issues with the cost of 7nm either. They have both GPUs and CPUs on the node.

What's stopping TSMC from jacking up prices? The fact that if they did, it would disrupt the world's most valuable resource, chips, and cause a government crackdown and a potential loss of the massive subsidies they are receiving to build new fabs. TSMC is going to be relying on help from foreign governments should the Chinese come knocking. It doesn't make any sense for TSMC to disrupt the status quo when they are already receiving plenty of profits and subsidies by being a reasonable chip manufacturer.
Posted on Reply
#45
Sisyphus
ppn: The whole idea of RT was not to use any memory, because it's all made out of ray-tracing magic, just ray coordinates as opposed to heavy textures. Why the need for extra VRAM?
Ray tracing is for simulating photorealistic lighting and shadows; it needs more texture memory if you have surfaces with complex physical behavior, like human skin. Memory usage depends mainly on resolution and distance, not RT.
Posted on Reply
#46
Tomgang
The 16 GB variant would be the most interesting card, but price and performance also play in. Not to forget the power usage, as electricity is far from cheap, especially here in Europe where I live. Also, I don't really know if I would even be able to purchase a card, as the economic situation in Europe is very unstable now that Russia has cut off gas.

So I might be forced to just stay with my RTX 3080. Not a bad card, but the 10 GB of VRAM comes up short a bit too often, and that's annoying.
Posted on Reply
#47
erocker
*
CallandorWoT: Nvidia will use inflation as an excuse. I fully expect a 4080 12gb to cost around $799 and the 16gb to cost $899
They still have to contend with supply, demand, and competition. The first release of the cards could go as high as you say, but with the current overstock of cards, I don't think it will be as bad.
Posted on Reply
#48
kapone32
Not sure at all, but it could also be the separation between the laptop GPU and the desktop one. With Nvidia you never know.
Posted on Reply
#49
ZoneDymo
lexluthermiester: That depends on settings and resolution.
I mean sure, but who is going to get an RTX 4080 only to run limited settings because you got the 12GB model... it's just a weird limitation that should not exist.
Posted on Reply
#50
dont whant to set it"'
I voted for the ten bucks in this poll because of nGreedia's planned obsolescence with the high-end 3xxx series. I tried to get one at launch, but nCheatia was hell-bent on giving them to "tubers" for "glamour".
Posted on Reply