Saturday, September 19th 2020

NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

A GIGABYTE webpage meant for redeeming the RTX 30-series Watch Dogs Legion + GeForce NOW bundle lists the graphics cards eligible for the offer, including a large selection based on unannounced RTX 30-series GPUs. Among these are references to a "GeForce RTX 3060" with 8 GB of memory and, more interestingly, a 20 GB variant of the RTX 3080. The list also confirms the RTX 3070S with 16 GB of memory.

The RTX 3080 launched last week comes with 10 GB of memory across a 320-bit memory interface, using 8 Gbit memory chips, while the RTX 3090 achieves its 24 GB memory size by piggy-backing two of these chips per 32-bit channel (chips on either side of the PCB). It's conceivable that the RTX 3080 20 GB will adopt the same method. There is a vast price gap between the RTX 3080 10 GB and the RTX 3090, which NVIDIA could look to fill with the 20 GB variant of the RTX 3080. The question of whether you should wait for the 20 GB variant of the RTX 3080 or pick up the 10 GB variant right now will depend on the performance gap between the RTX 3080 and RTX 3090. We'll answer this question next week.
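
As a quick sanity check of that memory math, here is a minimal sketch (the helper function is ours; the figures are the ones quoted above):

```python
# Rough sketch of GDDR6X capacity math, using the figures quoted above.
def vram_gb(bus_width_bits, chip_density_gbit, chips_per_channel=1):
    """Each GDDR6X chip sits on a 32-bit channel; capacity scales with
    channel count, chip density, and chips piggy-backed per channel."""
    channels = bus_width_bits // 32
    return channels * chips_per_channel * chip_density_gbit / 8  # Gbit -> GB

print(vram_gb(320, 8))     # RTX 3080: 10 channels x 8 Gbit      -> 10.0 GB
print(vram_gb(384, 8, 2))  # RTX 3090: 12 channels, piggy-backed -> 24.0 GB
print(vram_gb(320, 8, 2))  # rumored RTX 3080 20 GB, same method -> 20.0 GB
```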
Source: VideoCardz

157 Comments on NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

#26
lexluthermiester
InVasMani: It's not listed, but I could see an RTX 3060S 16GB being possible as well eventually.
That would be interesting.
phill: Whatever happened to just giving the different tiers of GPUs normal amounts of VRAM... 1, 2, 4, 8, 16, 32GB, etc.? I don't get why we need 10GB or 20GB...
With you on that one. I want a 16GB model of the 3080; I don't mind if it only has a 256-bit memory bus. A 16GB 3070 would also be good.
Posted on Reply
#27
ppn
Bubster: Glad this news update came out to calm the Ampere hysteria down... 3080 with 20 gigs.
Not the same shader count though: 10,240 vs. 8,704.
Posted on Reply
#28
r9
What are the price/performance expectations for the RTX 3060?

Price: ~$350
Performance: ~RTX 2070
Posted on Reply
#29
Vya Domus
A 20GB 3080 would definitely be more enticing but I don't want to find out what the price would be. By the way, 12GB is much more likely than 20.
phill: Whatever happened to just giving the different tiers of GPUs normal amounts of VRAM... 1, 2, 4, 8, 16, 32GB, etc.? I don't get why we need 10GB or 20GB...
You can't just use any memory configuration; VRAM capacity is tied to what the GPU's memory controllers and bus interfaces can do.
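
To illustrate (a minimal sketch, assuming the 8 Gbit chip density the article mentions and one chip per 32-bit channel):

```python
# Why capacities come in "odd" sizes: with 8 Gbit (1 GB) GDDR6X chips and
# one chip per 32-bit channel, capacity is dictated by bus width.
for bus_bits in (192, 256, 320, 384):
    channels = bus_bits // 32
    print(f"{bus_bits}-bit bus: {channels} GB, or {2 * channels} GB clamshell")
# A 320-bit card can be 10 GB or 20 GB; 16 GB would need a 256-bit bus.
```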
Posted on Reply
#30
AusWolf
BluesFanUK: Are people that desperate that they can't wait another month to see what AMD has to offer? Nvidia has a history of mugging off their customers; day-one purchasers are setting themselves up for buyer's remorse.

Give it a month or two and who knows, Turing cards may start tumbling, or AMD could knock it out of the park. Personally, I'm just being sensible and buying a PS5 for the same cost as one of these overpriced GPUs.
That's cool, though buying a console that's only good for playing games, re-buying all ~300 games I own on Steam, and playing them with a useless controller instead of WASD is totally out of the question for me. Besides, building a new PC is fun; plugging a box into my TV is boring.

As for the desperation part: I agree. Better to wait than to buy the first released, inferior product.
Posted on Reply
#31
ppn
r9: What are the price/performance expectations for the RTX 3060?

Price: ~$350
Performance: ~RTX 2070
$350 is usually where 60% of the performance of the $700 xx80 card lands, like the 1060 and 2060 did.

RTX 3060 8G (4,864 shaders) ~~ RTX 2080/S ~~ 60% of RTX 3080

RTX 3080 10G: its 8,704 new shaders are ~31% faster overall than the 2080 Ti's 4,352 Turing shaders, while the memory is only 23% faster.
So for the average to come out at 31%, the shaders must be pulling ahead at ~39% faster, roughly equivalent to 6,144 FP32 plus 2,560 INT32 Turing shaders.
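
A back-of-the-envelope version of that averaging (a sketch; the 50/50 memory/shader weighting is an assumed simplification, not a measured figure):

```python
# If overall gain is the simple average of memory and shader gains,
# the implied shader gain follows from the other two numbers.
memory_gain = 0.23   # 3080 memory bandwidth vs. 2080 Ti
overall_gain = 0.31  # observed average performance gain
shader_gain = 2 * overall_gain - memory_gain
print(f"implied shader gain: {shader_gain:.0%}")  # -> 39%
```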
Posted on Reply
#32
Space Lynx
Astronaut
BArms: Maybe they should launch the original 3080 first? I really don't consider meeting less than 1% of the demand a real launch.
Look at my signature. Soon you will join me in the true gaming realm, my padawan.
Posted on Reply
#33
Valantar
londiste: Because it is not the classic 60-tier GPU. From the leaks and details we have so far, both the 3070 and 3060 will be based on GA104. When it comes to performance we will have to wait and see, but the 3060 should be roughly around PS5/XBSX performance level, so it will be enough for a long while. 16GB is more of a marketing argument (and maybe a preemptive strike against possible AMD competition).
That makes no sense. What defines a 60-series card? Mainly that it's two tiers down from the 80-series. Which die it's based on, how wide a memory bus it has, etc., are all variable and depend on factors that come before product segmentation (die yields, production capacity, etc.). Which die the 3060 is based on doesn't matter whatsoever; its specifications decide performance. Heck, there have been 2060s based on at least three different Turing dies, and they all perform identically in most workloads. The 3060 may well be around the XSX performance level (saying "XSX/PS5 performance level" is quite broad given that one is 20% faster than the other), or at least the PS5 performance level, but that still doesn't mean 8GB isn't plenty for it. People really need to stop this idiocy about VRAM usage being the be-all, end-all of performance longevity; that has only been true for a select few GPUs throughout history.
londiste: PCI-e 4.0 x16 does not really seem to be a bottleneck yet, and probably won't be a big one for a long while, judging by how the scaling testing has gone with 3.0 and 2.0. Fitting four lanes' worth of data shouldn't matter all that much. On the other hand, I think this shader-augmented compression is short-lived; if it proves very useful, compression will move into hardware, as it has already supposedly done in consoles.

Moving the storage to be attached to GPU does not really make sense for desktop/gaming use case. More bandwidth through compression and some type of QoS scheme to prioritize data as needed should be enough and this is where it really seems to be moving towards.
I agree that it'll likely move into dedicated hardware, but that hardware is likely to live on the GPU, as that is where the data will be needed. Adding this to CPUs makes little sense - people keep CPUs longer than GPUs, GPUs have (much) bigger power budgets, and for the type of compression in question (specifically DirectStorage-supported algorithms) games are likely to be >99% of the use cases.

As for creating a bottleneck, it's still going to be quite a while until GPUs saturate PCIe 4.0 x16 (or even 3.0 x16), but SSDs use a lot of bandwidth and need to communicate with the entire system, not just the GPU. Sure, the GPU will be what needs the biggest chunks of data the quickest, but chaining an SSD off the GPU still makes far less sense than just keeping it directly attached to the PCIe bus like we do today. That way everything gets near optimal access. The only major improvement over this would be the GPU using the SSD as an expanded memory of sorts (like that oddball Radeon Pro did), but that would mean the system isn't able to use it as storage. And I sincerely doubt people would be particularly willing to add the cost of another multi-TB SSD to their PCs without getting any additional storage in return.
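
For a sense of scale, here's a rough comparison of the nominal link bandwidths involved (a sketch; protocol overhead ignored):

```python
# Nominal ceilings of the links in question (PCIe 4.0 ~= 1.97 GB/s per lane).
lane_gbs = 1.97
print(f"PCIe 4.0 x4 (NVMe SSD):  {4 * lane_gbs:.1f} GB/s")   # ~7.9 GB/s
print(f"PCIe 4.0 x16 (GPU slot): {16 * lane_gbs:.1f} GB/s")  # ~31.5 GB/s
# A 2:1 DirectStorage-style compression ratio would roughly double effective
# SSD-to-GPU throughput without changing the topology at all.
```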
Posted on Reply
#34
rtwjunkie
PC Gaming Enthusiast
BArms: Not sure, but I thought the idea was to load textures/animations/model vertex data etc. into VRAM, where it needs to go anyway.
Quite a few games just have lazy devs that load far more into VRAM than necessary to play the game. We are still not at the stage where we NEED that much VRAM.
Posted on Reply
#35
Nater
rtwjunkie: Quite a few games just have lazy devs that load far more into VRAM than necessary to play the game. We are still not at the stage where we NEED that much VRAM.
Every thread is going on about this VRAM "issue". Look at the system requirements for Cyberpunk 2077... I don't think we're hitting a wall here anytime soon.
Posted on Reply
#36
ppn
Valantar: The 3060 may well be around the XSX performance level (saying "XSX/PS5 performance level" is quite broad given that one is 20% faster than the other),
3060 6GB: 3,840 CUDA cores
3060 8GB: 4,864 CUDA cores

There you have it: the 8GB version is ~27% faster (4,864 / 3,840).
Posted on Reply
#37
Vayra86
I thought 10GB was enough guys... :)

Guess Nvidia doesn't agree and brings us the real deal after the initial wave of stupid bought the subpar cards.

Well played, Huang.
Posted on Reply
#38
Lionheart
AusWolf: If nVidia taught us something with the 20 (Super) series, it's the fact that early buyers get inferior products. That's why I'm going to wait for the 3070 Super/Ti with 16 GB VRAM and a (hopefully) fully unlocked die, unless AMD's RDNA 2 proves to be a huge hit.
This times 100. I'm playing the waiting game; tbh, we all are, 'cause no one can get a new card anyway.
Posted on Reply
#39
xorbe
Need to see 3070S 16GB vs 2080Ti benchmarks.
Posted on Reply
#40
CrAsHnBuRnXp
3080 20GB is going to be the Ti variant. Calling it now.
Posted on Reply
#41
InVasMani
Valantar: Why on earth would a 60-tier GPU in 2020 need 16GB of VRAM? Even 8GB is plenty for the types of games and settings that GPU will be capable of handling for its useful lifetime. Shader and RT performance will become a bottleneck at that tier long before 8GB of VRAM does. This is no 1060 3GB.

RAM chip availability is likely the most important part here. The FE PCB only has that many pads for VRAM, so they'd need double density chips, which likely aren't available yet (at least at any type of scale). Given that GDDR6X is a proprietary Micron standard, there's only one supplier, and it would be very weird if Nvidia didn't first use 100% of available capacity to produce the chips that will go with the vast majority of SKUs.

Does that actually make sense, though? That would mean the GPU sharing the PCIe bus with storage for all CPU/RAM accesses to said storage (and either adding some sort of switch to the card, or adding switching/passthrough capabilities to the GPU die), rather than it being used for only data relevant to the GPU. Isn't a huge part of the point of DirectStorage the ability to transfer compressed data directly to the GPU, reducing bandwidth requirements while also offloading the CPU and also shortening the data path significantly? The savings from having the storage hooked directly to the GPU rather than the PC's PCIe bus seem minuscule in comparison to this - unless you're also positing that this on-board storage will have a much wider interface than PCIe 4.0 x4, which would be a whole other can of worms. I guess it might happen (again) for HPC and the like, for those crunching multi-TB datasets, but other than that this seems nigh on impossible both in terms of cost and board space, and impractical in terms of providing actual performance gains.

Btw, the image also lists two 3080 Super SKUs that the news post doesn't mention.
I didn't say 16GB would be practical, but I could still see it happening. To be fair, if they could piggy-back their way to 12GB on an RTX 3060 down the road, that would make much more sense in relation to the weaker hardware. You might say it's similar to the RTX 3080 situation going from 10GB to 20GB: if they could piggy-back just a few of the chips rather than every GDDR chip, and scale density further that way, that's probably a more ideal scenario for everyone involved except Micron.
londiste: Because it is not the classic 60-tier GPU. From the leaks and details we have so far, both the 3070 and 3060 will be based on GA104. When it comes to performance we will have to wait and see, but the 3060 should be roughly around PS5/XBSX performance level, so it will be enough for a long while. 16GB is more of a marketing argument (and maybe a preemptive strike against possible AMD competition).
Agreed, it's more marketing than practicality. That said, it's a new-generation GPU with new capabilities, so while it might be anemic and stretched thin in resources for that amount of VRAM, perhaps newer hardware is at least more capable of utilizing it by managing it intelligently with other techniques: DLSS, VRS, mesh shading, etc. But we'll see; the RTX 3060 isn't even out yet, so we don't know nearly enough to conclude what it'll be like. On the plus side, GPUs are becoming more flexible at managing resources every generation.
Posted on Reply
#42
P4-630
by piggy-backing two of these chips per 32-bit channel (chips on either side of the PCB).
How hot would these chips get with some thermal pads and just a backplate on top of them?
Posted on Reply
#43
rbgc
Answer from u/NV_Tim, Community Manager from NVIDIA GeForce Global Community Team

Why only 10 GB of memory for RTX 3080? How was that determined to be a sufficient number, when it is stagnant from the previous generation?
-
Justin Walker, Director of GeForce product management

We’re constantly analyzing memory requirements of the latest games and regularly review with game developers to understand their memory needs for current and upcoming games. The goal of 3080 is to give you great performance at up to 4k resolution with all the settings maxed out at the best possible price.

In order to do this, you need a very powerful GPU with high speed memory and enough memory to meet the needs of the games. A few examples. If you look at Shadow of the Tomb Raider, Assassin’s Creed Odyssey, Metro Exodus, Wolfenstein Youngblood, Gears of War 5, Borderlands 3 and Red Dead Redemption 2 running on a 3080 at 4k with Max settings (including any applicable high res texture packs) and RTX On, when the game supports it, you get in the range of 60-100fps and use anywhere from 4GB to 6GB of memory.

Extra memory is always nice to have but it would increase the price of the graphics card, so we need to find the right balance.
-
Posted on Reply
#44
ppn
If MVDDC power shows 70 watts on the 3080, that's 7 watts per chip, hard to cool. On the 3090, I think those are higher-density chips, not piggy-backed.

According to the specs, the 3070S should land at around 70% of the 3080, with the 2080 Ti at 76%. Pretty close.
Posted on Reply
#45
T3RM1N4L D0GM4
Caring1: So you can prep your bots? :roll:
So I can press F5 and see "Product unavailable" :nutkick:
Posted on Reply
#46
TechLurker
I feel the Radeon SSG prototype helped inform the direct-access capability that both upcoming consoles use in slightly different ways, as well as a possible future interim upgrade path on theoretical mid-to-high-end GPU models in both the gaming and professional areas. Professionally, the SSG showed it can be used as extra storage on top of being an ultra-fast scratch drive, according to Anandtech's article, with the main hurdle being getting software devs to incorporate the necessary API support. It could be a neat feature to install your high-end games onto the GPU drive, or save your project to said drive, and let the GPU load it directly from there, effectively "streaming" the project/game assets in real time.

I could see a future Radeon X700+ series and NVIDIA X070+ series of GPUs, and their professional equivalents, incorporating an option to install an NVMe PCIe 4.0 drive (or 5.0, since that tech is supposedly due late next year or 2022 and expected to last quite a while) onto the card as a way to boost available memory for either professional or gaming purposes. Or maybe Intel could beat the competition to market, using Optane add-ons to their own respective GPUs, acting more like reserve VRAM expansion thanks to higher read/write performance than typical NVMe, but less than GDDR/HBM.
Posted on Reply
#47
Dudebro-420
Raendor: There's also a 3070S with 16 gigs on that same list.
Yeah, that's what the article said.
Posted on Reply
#48
Jism
btarunr: The logical next step to DirectStorage and RTX-IO is graphics cards with resident non-volatile storage (i.e., SSDs on the card). I think there have already been some professional cards with onboard storage (though not leveraging tech like DirectStorage). You install optimized games directly onto this resident storage, and the GPU has its own downstream PCIe root complex that deals with it.

So I expect RTX 40-series "Hopper" to have 2 TB ~ 8 TB variants.
Doesn't work like that. It's only meant/designed as a cache, so the data doesn't have to stream from SSD/HDD/memory over the PCI-E bus.

But I'm sure you could utilize all that memory as some kind of storage one day.
Posted on Reply
#49
InVasMani
P4-630: How hot would these chips get with some thermal pads and just a backplate on top of them?
Who knows; I'm sure if they got really hot, the backplate would be an obvious indicator. RAM doesn't generally run particularly hot in the first place, though. And it's not like it couldn't be resolved trivially by connecting the backplate to the main heatsink with some heat pipes.
TechLurker: I feel the Radeon SSG prototype helped inform the direct-access capability that both upcoming consoles use in slightly different ways, as well as a possible future interim upgrade path on theoretical mid-to-high-end GPU models in both the gaming and professional areas. Professionally, the SSG showed it can be used as extra storage on top of being an ultra-fast scratch drive, according to Anandtech's article, with the main hurdle being getting software devs to incorporate the necessary API support. It could be a neat feature to install your high-end games onto the GPU drive, or save your project to said drive, and let the GPU load it directly from there, effectively "streaming" the project/game assets in real time.

I could see a future Radeon X700+ series and NVIDIA X070+ series of GPUs, and their professional equivalents, incorporating an option to install an NVMe PCIe 4.0 drive (or 5.0, since that tech is supposedly due late next year or 2022 and expected to last quite a while) onto the card as a way to boost available memory for either professional or gaming purposes. Or maybe Intel could beat the competition to market, using Optane add-ons to their own respective GPUs, acting more like reserve VRAM expansion thanks to higher read/write performance than typical NVMe, but less than GDDR/HBM.
I already thought about the Optane thing. AMD could counter that with a DDR or LPDDR DIMM combined with a microSD card, doing RAM-disk backups to the non-volatile storage and leveraging an integrated CPU chip that could also handle compression/decompression, which further improves performance. Optane is a fair amount slower than even the DDR option. Optane is cheaper per gigabyte than DDR, but it doesn't take much DDR to speed up a whole lot of memory. The reality is you're mostly confined by the interface, which is why the Gigabyte i-RAM was a bit of a dud on performance relative to what you'd have hoped for. That's also a big limitation on HDDs' onboard cache performance; it too is limited by the interface protocol it's attached to. NVMe is infinitely better than SATA III in that regard, especially on PCIe 4.0. Unfortunately, HDDs have pretty much ceased to innovate on the performance side since SSDs took over.
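
To put rough numbers on the "confined by the interface" point (a sketch; nominal ceilings, real drives land somewhat lower):

```python
# Nominal interface ceilings: the same storage behaves very differently
# depending on the link it sits behind.
interfaces_gbs = {
    "SATA III":         0.6,  # 6 Gb/s with 8b/10b encoding -> ~600 MB/s
    "PCIe 3.0 x4 NVMe": 3.9,
    "PCIe 4.0 x4 NVMe": 7.9,
}
for name, ceiling in interfaces_gbs.items():
    print(f"{name}: ~{ceiling} GB/s")
```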
Posted on Reply
#50
Paganstomp
It's another "We are sorry paper launch!" LOL!
Posted on Reply