Tuesday, September 6th 2022

NVIDIA GeForce RTX 4080 12GB and 16GB to Launch Simultaneously

The rumored 12 GB and 16 GB variants of the upcoming NVIDIA GeForce RTX 4080 "Ada" graphics cards could launch simultaneously, according to MEGAsizeGPU, who broke the story about the existence of two memory-based variants of the RTX 4080. A simultaneous launch would mirror the GTX 1060 series, which came in 3 GB and 6 GB variants. Besides memory size, the two GTX 1060 variants differed in core configuration (mainly CUDA core count), which widened the performance gap between them. The more recent example of memory-based variants is the RTX 3080, which comes in 10 GB and 12 GB versions with different CUDA core counts, but those launched far apart from each other.
Sources: MEGAsizeGPU (Twitter), VideoCardz

43 Comments on NVIDIA GeForce RTX 4080 12GB and 16GB to Launch Simultaneously

#26
HisDivineOrder
RedelZaVednoMy bet: 4080 12 gigs = $699, 16 gigs = $799. Availability: 12 gigs out of stock instantly, 16 gigs plenty.
It's Huang's way of hiking prices in a recession without getting bad publicity from the press.
More likely they primed the pump with $999 12-gig cards so this gen they could launch a $1k 4080 16G, and with the recent problems they've changed some of their plans, releasing a 4070 12G as a 4080 at $700 to help consumers feel like they didn't raise prices.

So my guess: 12G $700, 16G $1k. 4090 lands at $2k.
Posted on Reply
#27
ModEl4
HisDivineOrderMore likely they primed the pump of $999 12gig cards so this gen they could launch a $1k 4080 16G and with the recent problems, they've changed some of their plans to release a 4070 12G as a 4080 to help consumers feel like they didn't raise prices by having it at $700.

So my guess: 12G $700, 16G $1k. 4090 lands at $2k.
In an extremely optimistic scenario, I'm really hoping for an AD102 4090 24GB/4080Ti 20GB at $1299/$1099 (there was no apparent reason to officially cut the prices of the 3090 Ti/3090/3080 Ti to those specific levels; street price drops would have done the job just the same, unless partners were unwilling to lower prices/margins and Nvidia pressured them with an official announcement).
Probably not happening, I guess I'm reading too much into it.
Posted on Reply
#28
AusWolf
If this rumor is true, I'll never understand why both of them have to be called 4080. It's almost like the DDR4 and GDDR5 versions of the GT 1030. One of them could have easily been called the 1020, but no! Sales above customer satisfaction with Nvidia!
Posted on Reply
#30
FeelinFroggy
At least it is not 24 GB of RAM. I know the 4090 is supposed to house that much, and that is just silly for a gaming card. Most of that VRAM will sit idle doing nothing.

You'll probably need a chunk of VRAM for 8K, and maybe that's the point. VR also uses a lot of RAM, but 24 GB is a stretch there too.
Posted on Reply
#31
Bruno_O
ModEl4In an extremely optimistic scenario, I'm really hopping for AD102 4090 24GB/4080Ti 20GB at $1299/$1099 (there was no apparent reason to officially cut the price of 3090ti/3090/3080Ti at these specific prices, street price drops would be sufficient to do the job just the same unless partners were unwilling to lower prices/margins so Nvidia pressured them with an official announcement)
Probably not happening, I guess I'm reading too much into it.
the market is flooded with Ampere/RDNA2, and used cards from mining will start dropping HARD after the ETH merge scheduled for next week...
also, according to leaks, RDNA3 is massively more efficient than Nvidia's 4000 series and will compete at 4080/4090 levels

then add inflation, global economic crises, energy crises, etc. etc.

what I'm trying to say here is that neither AMD nor Nvidia will be able to charge prices like they did a year ago - everything has changed

so a very realistic scenario is everything at MSRP, with the same MSRPs as Ampere, aka the 4080 at 699 USD.
Posted on Reply
#32
Easo
I really hope this stays just a rumour.
Posted on Reply
#33
Minus Infinity
thunderingroarthey better not make the 4070 160-bit 10 GB :banghead:
I think it is 192-bit, but that could be the 4070 Ti. The 4060 is definitely 160-bit, though. Funnily enough, the 7700 XT is 256-bit this time around and the 7600 XT 192-bit.

AMD and Nvidia swapping bus widths this gen.

Anyhow, how would 12 GB and 16 GB versions work? Surely you can only get 16 GB on a 256/512-bit bus and 12 GB on a 192/384-bit bus. Higher-end card has a smaller bus??? It would need massively faster memory to offer more bandwidth, or a much larger cache like Infinity Cache.
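For what it's worth, the capacity options fall straight out of the bus width. A rough sketch, assuming one memory chip per 32-bit channel and the common 1 GB / 2 GB GDDR6/6X chip densities (illustrative, not confirmed Ada specs):

```python
# Rough sketch: VRAM capacities possible on a given bus width,
# assuming one chip per 32-bit channel and 1 GB or 2 GB chips.
# These densities are illustrative, not confirmed specs.

def capacities(bus_width_bits, chip_sizes_gb=(1, 2)):
    chips = bus_width_bits // 32          # one chip per 32-bit channel
    return sorted(chips * size for size in chip_sizes_gb)

for bus in (160, 192, 256, 384):
    print(bus, "bit ->", capacities(bus), "GB")
# 192-bit -> [6, 12] GB, 256-bit -> [8, 16] GB
```

So 12 GB lines up with a 192-bit bus and 16 GB with 256-bit; barring clamshell mounting (two chips per 32-bit channel), the bigger-VRAM card does need the wider bus.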
Posted on Reply
#34
Xaled
cakeredHmm, *to* and *could* are very different statements, can we update the title to make that clearer?
Not really; both still stop short of a "will", though...
Posted on Reply
#35
Arco
There is no reason for a high-end card like this to have 12 GB of VRAM in 2022. The 1080 Ti had 11 GB, and that was back in 2017! 16 GB is pretty much the minimum for future-proofing at 4K.
Posted on Reply
#36
Xaled
ArcoThere is no reason for a high-end card like this to have 12 GB of Vram in 2022. The 1080 TI had 11GB and that was back in 2016! 16GB is pretty much the minimum for future-proofing at 4k.
There is: deception, the same as they've been doing in laptops, with dozens of VRAM, clock-speed, wattage, etc. variations, since nobody (laptop vendors included) has punished or stopped them yet.
Posted on Reply
#38
efikkan
thunderingroarthey better not make 4070 160bit 10GB :banghead:
Why would this keep you up at night? ;)
Historically, the 70-class cards have been some of the best resource-balanced cards in most of Nvidia's generations. If they choose to put 10 GB of 160-bit memory on it, then that's likely going to work out well.
I think 160-bit is fairly unlikely. While it's not impossible, memory buses are usually enabled in multiples of 64 bits (each 64-bit slice is actually a separate memory controller, i.e. 256-bit means four memory controllers). The memory controllers are connected to 32-bit memory chips, so it is technically possible to enable "half" of a memory controller, but it's fairly rare.
Minus InfinityAny how how would a 12GB and 16GB work? Surely you can only get 16GB on a 256/512 bit bus and 12GB on 192/384 bit bus. Higher end card has smaller bus??? It would need massively faster memory to offer more bandwidth or have a much larger cache like IC.
It is technically possible to put different amounts of memory on different memory controllers, like the GTX 660 Ti did back in the day. This effectively makes part of the memory slower, as more memory is connected to a single memory controller.
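A back-of-the-envelope sketch of why that asymmetric layout hurts, using GTX 660 Ti-like numbers (6 Gbps GDDR5 on a 192-bit bus; the figures are approximate):

```python
# Asymmetric layout like the GTX 660 Ti's: 2 GB on a 192-bit bus.
# The first 1.5 GB is striped across all three 64-bit controllers;
# the last 0.5 GB hangs off a single controller, so accesses that
# land there run at one third of the full bus width.

GBPS_PER_PIN = 6.0  # effective GDDR5 data rate per pin (approximate)

def bandwidth_gbs(bus_bits, gbps_per_pin=GBPS_PER_PIN):
    """Peak bandwidth in GB/s for a given bus width."""
    return bus_bits * gbps_per_pin / 8  # bits per transfer -> bytes

print(f"first 1.5 GB: {bandwidth_gbs(192):.0f} GB/s")  # full 192-bit stripe
print(f"last  0.5 GB: {bandwidth_gbs(64):.0f} GB/s")   # single 64-bit controller
```

Once a game's working set spills into that last half gigabyte, effective bandwidth drops to a third, which is part of why this trick is so rare.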
ArcoThere is no reason for a high-end card like this to have 12 GB of Vram in 2022. The 1080 TI had 11GB and that was back in 2016! 16GB is pretty much the minimum for future-proofing at 4k.
Future proofing with extra VRAM has never panned out in the past.
As I've explained many times before, as future games get more demanding, bandwidth requirements and computational load increase more than VRAM usage, and these become bottlenecks long before VRAM does. The only exception would be if you gradually sacrifice FPS for max details in future games, pushing the frame rate low and the VRAM usage artificially high. But even then, memory bandwidth will probably bottleneck you first.
Posted on Reply
#39
thunderingroar
efikkanWhy would this keep you up at night? ;)
Historically the 70-cards have been some of the best resource balanced cards in most of Nvidia's generation. If they choose to put 10 GB 160 bit memory on it, then that's likely going to work out well.
Because the GTX 1070 launched with 8 GB six years ago, it's about time for an upgrade. And I'm not gonna trust Nvidia to resource-balance the card's specs for me, lmao; if it were up to them, they would make all their cards with perfect planned obsolescence to make people upgrade more often. 8 GB cards already struggle in some games at 4K, and in plenty of games once you start adding mods.

[URL='https://www.youtube.com/watch?v=5xAaQzaMsug']"To all my Pascal gamer friends, it is safe to upgrade now"[/URL]

efikkanFuture proofing with extra VRAM has never panned out in the past.
This couldn't be further from the truth; the 6 GB GTX 1060 aged much better than the 3 GB version, same with the 8 GB RX 580. Not only could my 8 GB RX 580 mine (unlike the 4 GB one), which made its second-hand market price quite a bit higher, it could also play games like Horizon Zero Dawn, RE 2 & 3, SoTTR, and Doom Eternal on High/Very High texture settings instead of Medium, with a minimal performance hit (maybe ~5%, if not less), and those games really did look A LOT better on high rather than medium texture settings.
Do me a favor and try adding texture-pack mods to Skyrim/Witcher 3 and watch them bring those lower-VRAM cards to their knees.
efikkanI think 160 bit is fairly unlikely. While it's not impossible, memory controllers are usually enabled as multiples of 64 bit
Both the 1080 Ti and 2080 Ti had a 352-bit bus.
RedelZaVednoMy bet 4080 12 gigs = $699, 16 gigs = $799. Availability 12 gigs out of stock instantly, 16 gigs plenty.
It's huang's way of hiking price in recession without getting bad publicity from the press:
Well that price prediction aged like milk
Posted on Reply
#40
efikkan
thunderingroarBecause GTX 1070 launched with 8GB 6 years ago, its about time for an upgrage. And im not gonna trust Nvidia to resource balance the cards specs for me lmao, if it were up to them they would make all their cards with perfect planned obsolescence to make people upgrade more often.
Excellent, thanks for bringing the daily dose of conspiracy theories. :rolleyes:
If anything, Pascal has proven to be one of the best long-term "investments" of the GPU generations of the last ~15 years or so.
thunderingroar8GB cards already struggle in some games in 4K and in plenty of games when you start adding up mods.
You can't expect an upper mid-range card from 6 years ago to run everything at the highest details forever.
thunderingroarThis couldn't be further from the truth, 6GB GTX 1060 aged much better than 3GB version, same with 8GB Rx 580…
The GTX 1060 6GB was also faster than the 3GB version.
You fail to understand some basic facts about how rendering works: once an architecture is designed, the amount of memory bandwidth and memory capacity required to store and render a texture of a given size is fixed. It doesn't matter how much they optimize drivers, build fancy new game engines, or invent new algorithms; the ratio stays fixed, which is how Nvidia and AMD can tell whether they have balanced the resources correctly.
If you keep adding texture detail, the frame rate inevitably slows down, which means that on a well-balanced card you will get a slideshow long before the card runs out of memory.
Posted on Reply
#41
thunderingroar
efikkanExcellent, thanks for bringing the daily dose of conspiracy theories. :rolleyes:
If anything Pascal has proven to be one of the best long-term "investment" of the GPU generations of the last ~15 years or so.
And one of the biggest reasons for that is that NV pretty much doubled Pascal's VRAM amount over Maxwell. Planned obsolescence is now a conspiracy theory? lol, okay bud, just look at the smartphone industry, where they sealed up the phones and axed the removable battery behind an excuse of dust/waterproofing.
efikkanYou can't expect an upper mid-range card from 6 years ago to run everything at the highest details forever.
The 3070 Ti was a $600 8 GB card from 2021 with the same issues?????
efikkanGTX 1060 6GB was also faster than the 3GB version.
You fail to understand some basic facts about how rendering works; Once an architecture is designed, the amount of memory bandwidth and memory capacity required to store and render a given size texture is fixed, it doesn't matter how much they optimize drivers, make new fancy game engines, invent new algorithms, etc. This ratio is still fixed, which is why Nvidia and AMD can know whether they have balanced the resources correctly or not.
If you keep adding texture detail, this is inevitably going to slow down the frame rate, which means for a well balanced card, you will get a slide show long before the card runs out of memory.
This is just a classic copium overdose from people who ended up buying the lower-VRAM versions. Next you're gonna tell me about the difference between VRAM allocation and actual usage?

What I said in my previous post was perfectly clear. Even if the 3 GB 1060 didn't have a cut-down SM count, you still couldn't play a lot of more recent AAA games on high texture settings, simply because you run out of VRAM, even on the MEDIUM quality preset. The 3 GB GTX 1060 and 4 GB RX 580 aged poorly compared to their double-VRAM brothers, and there's empirical evidence for it.


11:50 - From previous testing, the 3 GB 1060 used to be on average 7% slower than the 6 GB one, and in more modern games it's on average 32% slower on the high preset. The common excuse of "the card will become irrelevant through its rasterization capabilities faster than through its VRAM capacity" is also debunked. Games featured in that video like SoTTR or RE8 are playable at 60 fps, and Doom Eternal at 105 fps, on higher presets with the 6 GB card, whereas the 3 GB one crumbles in fps or simply will not run the game, as with Doom Eternal.
Posted on Reply
#42
efikkan
thunderingroar3070ti was a $600 8GB card from 2021 with the same issues?????
The 3070 Ti 8GB is very roughly 2.5x faster than the GTX 1070 8GB in 4K, something it wouldn't manage if it were bottlenecked by VRAM. So the facts are not on your side.
And keep in mind, when a card actually runs out of VRAM, the frame rate drops significantly. The game can potentially glitch or even break, but those are symptoms of game bugs.
thunderingroarNext you re gonna tell me about the difference between vram allocation and actual usage?
There is a big difference, but you apparently don't care about the facts of this subject.
There is also the fact that newer generations have more advanced memory management and memory compression, so 8 GB is effectively "worth more" on the RTX 3070 Ti than on the GTX 1070.
thunderingroarWhat i said in my previous post was perfectly clear. Even if the 3gb 1060 didnt have cut down SM count, you still couldnt play a lot of more recent AAA games on high texture settings, simply because you run out of vram, even on MEDIUM quality preset. 3GB GTX 1060 and 4GB Rx 580 aged poorly compared to their double vram brothers and theres empirical evidence for it.
It's funny how none of those cards are fast enough to run 4K high details at a stable 60 FPS in most games; they even struggle at 1440p medium in most. So this is a moot point; the cards are already irrelevant by then.

And it's funny that you'd even mention the RX 580. The RX 580(/480) was supposed to age like fine wine compared to the GTX 1060, thanks to more VRAM and "unoptimized" drivers. Well, that still hasn't happened.
Posted on Reply
#43
thunderingroar
efikkanAnd keep in mind, when a card is actually runs out of VRAM the frame rate drops significantly.
Depends; sometimes, if you're right on the edge of capacity, you only lose something around ~15-20% of your fps.
efikkanThere is a big difference, but you apparently don't care about facts of this subject.
There is also the fact that newer generations have more advanced memory management and memory compression, so effectively 8GB is "worth more" on RTX 3070 Ti than GTX 1070.
Holy shit, I was obviously being sarcastic, since that's the point people always bring up. And the improvements in memory compression are in the range of like 200 MB, which isn't something to write home about.
efikkanIt's funny how none of those cards are fast enough to run 4K high details at a stable 60 FPS in most games, they even struggle in 1440p at medium in most. So this is a moot point, the cards are aleady irrelevant then.
Okay, you're just being purposefully disingenuous at this point. The GTX 1060 was never meant for higher resolutions; it pretty much always aimed at 1080p gaming, whereas the 1070 was recommended for 1440p, and 1080 SLI or the 1080 Ti for 4K. And today the 3 GB one is 32% slower, compared to 7% before when tested in older games. It can't even play Doom Eternal, a two-year-old game, on the high preset, lmao.
efikkanAnd it's funny that you'd even mention RX 580. RX 580(/480) was supposed to age like fine wine compared to GTX 1060, thanks to more VRAM and "unoptimized" drivers. Well, that still haven't happened yet.
People usually said that about the Vegas, and those did age somewhat better; the V64 is now on par with/a bit faster than the GTX 1080. Tbh, I was never a big fan of those cards since they were power hogs, but I've heard they're fun to tinker with and undervolt very well. As for the RX 580, it used to be equal to/slightly slower than the GTX 1060, and now it's ~5% faster (even in that link you posted). It's negligible/unnoticeable, but it still did age slightly better.
Posted on Reply