Tuesday, September 6th 2022
NVIDIA GeForce RTX 4080 12GB and 16GB to Launch Simultaneously
The rumored 12 GB and 16 GB variants of the upcoming NVIDIA GeForce RTX 4080 "Ada" graphics cards could launch simultaneously, according to MEGAsizeGPU, who broke the story about the existence of two memory-based variants of the RTX 4080. A simultaneous launch would make the situation similar to that of the GTX 1060 series, which came in 3 GB and 6 GB variants. Besides memory size, the two variants of the GTX 1060 differed in core configuration (mainly CUDA core count), which widened the performance gulf between them. A more recent example of memory-based variants is the RTX 3080, which comes in 10 GB and 12 GB versions with different CUDA core counts, but those were launched far apart from each other.
Sources:
MEGAsizeGPU (Twitter), VideoCardz
43 Comments on NVIDIA GeForce RTX 4080 12GB and 16GB to Launch Simultaneously
So my guess: 12G $700, 16G $1k. 4090 lands at $2k.
Probably not happening, I guess I'm reading too much into it.
Probably will need a chunk of VRAM for 8K, and maybe that is the point. VR also uses a lot of RAM, but 24 GB is a stretch there too.
Also, according to leaks, RDNA3 is massively more efficient than Nvidia's 4000 series and will compete at 4080/4090 levels.
Then add inflation, global economic crises, energy crises, etc. etc.
What I'm trying to say here is that neither AMD nor Nvidia will be able to charge prices like they did a year ago; everything has changed.
So a very realistic scenario is everything at MSRP, and the same MSRPs as Ampere, i.e. the 4080 at $699.
AMD and Nvidia swapping bus widths this gen.
Anyhow, how would a 12 GB and a 16 GB variant work? Surely you can only get 16 GB on a 256/512-bit bus and 12 GB on a 192/384-bit bus. The higher-end card has a smaller bus??? It would need massively faster memory to offer more bandwidth, or a much larger cache like Infinity Cache.
Historically, the 70-class cards have been some of the best resource-balanced cards in most of Nvidia's generations. If they choose to put 10 GB on a 160-bit bus, then that's likely going to work out well.
I think 160 bit is fairly unlikely. While it's not impossible, memory controllers are usually enabled in multiples of 64 bits (each 64-bit block is actually a separate memory controller, i.e. 256 bit is four memory controllers). The memory controllers are connected to 32-bit memory chips, so it is technically possible to enable "half" of a memory controller, but it's fairly rare. It is also technically possible to put different amounts of memory on the various memory controllers, like the GTX 660 Ti did back in the day. This effectively makes part of the memory slower, as more memory is connected to that memory controller. Future-proofing with extra VRAM has never panned out in the past.
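For anyone who wants to sanity-check the bus-width arithmetic, here is a minimal Python sketch. It assumes standard 32-bit-wide GDDR6/GDDR6X chips in 1 GB or 2 GB densities, one chip per 32-bit slice of the bus, and ignores clamshell configurations; the helper function and numbers are illustrative, not from any leak.
[CODE]
BITS_PER_CHIP = 32  # each GDDR6/GDDR6X package sits on a 32-bit slice of the bus

def vram_options(bus_width_bits, chip_sizes_gb=(1, 2)):
    """Plain (non-clamshell) capacities a given bus width allows."""
    chips = bus_width_bits // BITS_PER_CHIP
    return {size: chips * size for size in chip_sizes_gb}

for bus in (160, 192, 256, 320, 384):
    opts = vram_options(bus)
    line = ", ".join(f"{c} GB chips -> {total} GB" for c, total in opts.items())
    print(f"{bus:>3}-bit bus: {line}")
[/CODE]
With 2 GB chips that works out to 12 GB on a 192-bit bus and 16 GB on a 256-bit bus, which is why a 12 GB / 16 GB split strongly implies two different bus widths (or an asymmetric setup like the GTX 660 Ti's).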
As I've explained many times before, as future games get more demanding, bandwidth requirements and computational load increase more than VRAM usage does, and these will become bottlenecks long before VRAM does. The only exception would be if you gradually sacrifice FPS for max details in future games, pushing the frame rate low and the VRAM usage artificially high. But even then, memory bandwidth will probably bottleneck you.
[URL='https://www.youtube.com/watch?v=5xAaQzaMsug']"To all my Pascal gamer friends, it is safe to upgrade now"[/URL]
This couldn't be further from the truth; the 6 GB GTX 1060 aged much better than the 3 GB version, same with the 8 GB RX 580. Not only could my 8 GB RX 580 mine (unlike the 4 GB one), which made its second-hand market price quite a bit higher, it could also play games like Horizon Zero Dawn, RE 2 & 3, SoTTR and Doom Eternal on High/Very High texture settings instead of Medium, with a minimal performance hit (maybe ~5%, if not less), and those games really did look A LOT better on high rather than medium texture settings. Do me a favor and try adding texture pack mods to Skyrim/Witcher 3 and watch them bring those lower-VRAM cards to their knees.
Both the 1080 Ti and 2080 Ti had a 352-bit bus.
Well, that price prediction aged like milk.
If anything, Pascal has proven to be one of the best long-term "investments" among the GPU generations of the last ~15 years or so. You can't expect an upper mid-range card from 6 years ago to run everything at the highest details forever. The GTX 1060 6 GB was also faster than the 3 GB version.
You fail to understand some basic facts about how rendering works: once an architecture is designed, the amount of memory bandwidth and memory capacity required to store and render a texture of a given size is fixed. It doesn't matter how much they optimize drivers, build fancy new game engines, invent new algorithms, etc.; this ratio stays fixed, which is why Nvidia and AMD can know whether they have balanced the resources correctly or not.
If you keep adding texture detail, this is inevitably going to slow down the frame rate, which means for a well balanced card, you will get a slide show long before the card runs out of memory.
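To put rough numbers on that balance argument, here is a toy Python sketch; the working-set size, reads-per-frame factor, and frame rates are made-up assumptions for illustration, not measurements of any real card.
[CODE]
def bandwidth_needed_gbs(working_set_gb, reads_per_frame, fps):
    """GB/s required just to stream the touched working set every frame."""
    return working_set_gb * reads_per_frame * fps

working_set_gb = 6     # assumed VRAM actually touched per frame
reads_per_frame = 2.5  # assumed average reads/overdraw per resident byte

for fps in (30, 60, 120):
    bw = bandwidth_needed_gbs(working_set_gb, reads_per_frame, fps)
    print(f"{fps:>3} FPS -> ~{bw:.0f} GB/s of bandwidth budget")
[/CODE]
The capacity cost stays fixed (6 GB resident in this example), while the bandwidth cost scales with frame rate and detail, which is the sense in which a well-balanced card hits the bandwidth/compute wall before it simply runs out of VRAM.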
What I said in my previous post was perfectly clear. Even if the 3 GB 1060 didn't have a cut-down SM count, you still couldn't play a lot of more recent AAA games on high texture settings, simply because you run out of VRAM, even on the MEDIUM quality preset. The 3 GB GTX 1060 and 4 GB RX 580 aged poorly compared to their double-VRAM brothers, and there's empirical evidence for it.
At 11:50 in that video: from previous testing the 3 GB 1060 used to be on average 7% slower than the 6 GB one, and in more modern games it's on average 32% slower on the high preset. The common excuse that "the card will become irrelevant by its rasterization capabilities faster than by its VRAM capacity" is also debunked. Games featured in that video like SoTTR or RE8 are playable at 60 FPS, and Doom Eternal at 105 FPS, on higher presets with the 6 GB card, whereas the 3 GB one crumbles in FPS or simply will not run the game, as happened with Doom Eternal.
And keep in mind, when a card actually runs out of VRAM, the frame rate drops significantly. The game can potentially glitch or even break, but those are symptoms of game bugs. There is a big difference, but you apparently don't care about the facts of this subject.
There is also the fact that newer generations have more advanced memory management and memory compression, so effectively 8 GB is "worth more" on an RTX 3070 Ti than on a GTX 1070. It's funny how none of those cards are fast enough to run 4K high details at a stable 60 FPS in most games; they even struggle at 1440p medium in most. So this is a moot point, the cards are already irrelevant by then.
And it's funny that you'd even mention the RX 580. The RX 580 (/480) was supposed to age like fine wine compared to the GTX 1060, thanks to more VRAM and "unoptimized" drivers. Well, that still hasn't happened.