Saturday, September 19th 2020

NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

A GIGABYTE webpage meant for redeeming the RTX 30-series Watch Dogs Legion + GeForce NOW bundle lists out the graphics cards eligible for the offer, including a large selection based on unannounced RTX 30-series GPUs. Among these are references to a "GeForce RTX 3060" with 8 GB of memory and, more interestingly, a 20 GB variant of the RTX 3080. The list also confirms the RTX 3070S with 16 GB of memory.

The RTX 3080 launched last week comes with 10 GB of memory across a 320-bit memory interface, using 8 Gbit memory chips, while the RTX 3090 achieves its 24 GB memory capacity by piggy-backing two of these chips per 32-bit channel (chips on either side of the PCB). It's conceivable that the RTX 3080 20 GB will adopt the same method. There exists a vast price gap between the RTX 3080 10 GB and the RTX 3090, which NVIDIA could look to fill with the 20 GB variant of the RTX 3080. The question of whether you should wait for the 20 GB variant of the RTX 3080 or pick up the 10 GB variant right now will depend on the performance gap between the RTX 3080 and RTX 3090. We'll answer this question next week.
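For reference, these capacities follow directly from the arithmetic: each 8 Gbit chip provides 1 GB over its own 32-bit channel, and the clamshell arrangement doubles the chip count. A quick illustrative sketch:

```python
# Back-of-the-envelope check of the configurations above.
# Assumption: 8 Gbit (1 GB) GDDR6X chips, one chip per 32-bit channel.
def vram_gb(bus_width_bits: int, chips_per_channel: int, gbit_per_chip: int = 8) -> float:
    channels = bus_width_bits // 32           # 32-bit channels on the bus
    chips = channels * chips_per_channel      # clamshell mode = 2 per channel
    return chips * gbit_per_chip / 8          # 8 Gbit = 1 GB

print(vram_gb(320, 1))   # RTX 3080:       10.0 GB
print(vram_gb(320, 2))   # RTX 3080 20 GB: 20.0 GB (clamshell)
print(vram_gb(384, 2))   # RTX 3090:       24.0 GB (clamshell)
```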
Source: VideoCardz

157 Comments on NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

#51
sliderider
InVasManiIt's not listed, but I could see an RTX 3060S 16GB being possible as well eventually. That probably depends a lot on the memory controller bandwidth, but knowing NVIDIA, if they can get some better-binned GDDR chips down the road they'd slap on the extra RAM density and faster chips and start selling them to fill whatever gaps they can.
I don't see that happening at all. It would be too expensive for the performance class it's in.
Posted on Reply
#52
Seanlighting
I've seen this RAM argument since Unreal Tournament came out. Always get the higher RAM version of the card. GTX 960 reviews all said 4 GB cards were pointless because of the bus and die limitations coupled with most games only needing 2 GB for 1080p gaming. 3 years later the 2 GB card was useless for 1080p anything. Same reason I picked up the 128 meg version of the 8500. "64 megs is more than enough to run any game and it's what the GeForce is using at the same price point."
Posted on Reply
#53
rtwjunkie
PC Gaming Enthusiast
Seanlighting3 years later the 2 GB card was useless for 1080p anything.
3 years later the 960 4GB GM206 chip was useless as well. The 4 GB of VRAM barely helped. I had one just to mess around with on an extra system. It was a nice bang for the buck in its first year, sure. But 3 years later? Time passed it by.
Posted on Reply
#54
lexluthermiester
rtwjunkie3 years later the 960 4GB GM206 chip was useless as well.
I wouldn't call it useless, but it's certainly not going to be anyone's first choice either. I can think of a few games where the extra 2GB helped. Of course that is going to be true for any 2GB card that had a 4GB variant.
Posted on Reply
#55
R0H1T
SeanlightingI've seen this RAM argument since Unreal Tournament came out. Always get the higher RAM version of the card. GTX 960 reviews all said 4 GB cards were pointless because of the bus and die limitations coupled with most games only needing 2 GB for 1080p gaming. 3 years later the 2 GB card was useless for 1080p anything. Same reason I picked up the 128 meg version of the 8500. "64 megs is more than enough to run any game and it's what the GeForce is using at the same price point."
The flip side to that argument today is that with PCIe 4.0 & top-speed NVMe SSDs, and with DirectStorage making its debut, you probably don't need too much VRAM for next-gen AAA titles. That is probably one of the reasons why we see so many VRAM configurations here; the caveat is that we need a PCIe 4.0-capable chip, i.e. AMD Zen 2 & above :D
Posted on Reply
#56
Mussels
Freshwater Moderator
BArmsNot sure but I thought the idea was to load textures/animations/model vertex data etc into VRAM where it needs to go anyway.
Ever copied a bunch of small files from one SSD to another? Ever watched an older game take forever to load, despite the fact your SSD can move the entire game directory in seconds?

This is just a software overhaul (DirectX) with hardware acceleration being added in.
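To make the small-file point concrete, here's a self-contained toy benchmark (hypothetical file names and sizes; absolute timings depend on your OS, drive, and caching) contrasting thousands of small reads with one large read of the same total size:

```python
# Toy benchmark: read 2,000 small "assets" vs one big file of equal size.
# Per-file open/close overhead, not raw SSD bandwidth, dominates the first case.
import os
import time
import tempfile

scratch = tempfile.mkdtemp()
payload = os.urandom(64 * 1024)            # one 64 KB "asset"
N = 2000

for i in range(N):                         # write N small files...
    with open(os.path.join(scratch, f"asset_{i}.bin"), "wb") as f:
        f.write(payload)
with open(os.path.join(scratch, "big.bin"), "wb") as f:
    f.write(payload * N)                   # ...and one big file, same total size

t0 = time.perf_counter()
for i in range(N):
    with open(os.path.join(scratch, f"asset_{i}.bin"), "rb") as f:
        f.read()
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
with open(os.path.join(scratch, "big.bin"), "rb") as f:
    f.read()
t_big = time.perf_counter() - t0

print(f"{N} small reads: {t_small:.3f}s  |  one big read: {t_big:.3f}s")
```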
Posted on Reply
#57
Jism
SeanlightingI've seen this RAM argument since Unreal Tournament came out. Always get the higher RAM version of the card. GTX 960 reviews all said 4 GB cards were pointless because of the bus and die limitations coupled with most games only needing 2 GB for 1080p gaming. 3 years later the 2 GB card was useless for 1080p anything. Same reason I picked up the 128 meg version of the 8500. "64 megs is more than enough to run any game and it's what the GeForce is using at the same price point."
I disagree. There's zero visible advantage to running a 32MB vs 64MB card or, to stay in the realm of 2020, the RX 480 4GB vs RX 480 8GB. It won't make any difference at all really, as the games themselves don't pull a magic trick to suddenly boost fps by 50% or so.

Memory consumption on GPUs is primarily due to textures, and textures get 'streamed' from GPU memory, so to speak, without the game pulling them from the HDD/SSD/RAM, which is obviously slower. There are side uses like AA storing temporary frames in it, etc. Please note that the 4GB vs 8GB comparison in the RX 480/580 is not a good test, as the 4GB version has slower RAM (1750 MHz vs 2000 MHz). Put on equal clocks, the cards would perform the same.

I had a GeForce2 MX 64MB back in the day; compared to a GeForce2 MX 32MB, zero difference, since the GPU wasn't powerful enough to run at the higher resolutions where the extra RAM would be useful. Even a 4GB Fury from AMD can still cope just fine with everything you throw at it. It's just that at some point the GPU itself won't be powerful enough to keep rendering frames within the desired framerate span.

www.techpowerup.com/268211/amd-declares-that-the-era-of-4gb-graphics-cards-is-over

They are running tests here on Ultra and it's obvious that big textures require a lot of RAM. If a GPU can't store any more, the data spills into your system's RAM, which obviously adds latency when fetching it. My RX 580 has 8GB of RAM, but the GPU still isn't powerful enough to run fully maxed out at 2560x1080, even if the game only utilizes 6GB of the 8GB in total. The GPU behind it has to be able to take advantage, otherwise it's another "oh, this one has more GHz/MB/MBps" decision.

www.vgamuseum.info/index.php/component/k2/item/989-sis-315

This one was, if I'm correct, the first to load up to 128MB on a single VGA card, but it was in a way completely useless. The card wasn't even faster than a regular GeForce2 MX. NVIDIA released the GF2 MX with both 32MB and 64MB as well, and it pretty much did nothing in regards to gaming.
Posted on Reply
#58
nguyen
Oh boy, these VRAM discussions never end.
The GTX 980 with 4GB of VRAM is still a capable GPU even today, and pretty much matches the RX 580 8GB in any DX11 game (Maxwell kinda sucks at DX12 and Vulkan)

Here is how the GTX 980 performs in FS 2020, a game that will fill any VRAM available


NVIDIA has had memory compression for ages; Ampere is already the 6th iteration of it (Turing being the 5th).
Here is the 2080 Ti using less VRAM than the 5700 XT and Radeon VII in FS 2020, even though the 2080 Ti is much more powerful than both


At 4K FS2020 will try to eat as much VRAM as possible (>11GB) but that doesn't translate to real performance.
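For anyone curious what "memory compression" means here, below is a toy sketch of the general principle only (an anchor value plus small per-block deltas). The real scheme is proprietary, lossless, and primarily saves memory bandwidth rather than capacity:

```python
# Toy illustration of delta compression: blocks with small value ranges
# (smooth gradients) encode compactly; noisy blocks fall back to raw storage.
def compress_block(pixels):
    """Encode a block as an anchor value plus deltas, if the deltas are small."""
    anchor = pixels[0]
    deltas = [p - anchor for p in pixels[1:]]
    if all(-128 <= d <= 127 for d in deltas):      # deltas fit in one byte each
        return ("delta", anchor, deltas)           # compressed representation
    return ("raw", pixels)                         # incompressible: store as-is

smooth = list(range(100, 116))                     # gentle 16-pixel gradient
noisy = [0, 255, 3, 250, 7, 244, 1, 255, 2, 249, 5, 240, 9, 233, 4, 251]
print(compress_block(smooth)[0])   # -> delta
print(compress_block(noisy)[0])    # -> raw
```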
Posted on Reply
#59
HisDivineOrder
R0H1TThe flip side to that argument today is that with PCIe 4.0 & top-speed NVMe SSDs, and with DirectStorage making its debut, you probably don't need too much VRAM for next-gen AAA titles. That is probably one of the reasons why we see so many VRAM configurations here; the caveat is that we need a PCIe 4.0-capable chip, i.e. AMD Zen 2 & above :D
I guarantee more games will use that extra VRAM much more quickly than they start using DirectStorage, especially NVIDIA's custom add-on to it that requires explicit programming to use fully. Games today use 6-8GB of RAM regularly, and they're mostly ports from consoles with 8GB of total system RAM. That 6-8 is what happens when you have the consoles' 6 for the GPU and 2 for the system, plus some change for 4K-resolution textures.

This generation, it's going to be consoles with 16GB, with probably 4-6GB for the system and 10-12GB-ish for the GPU. When the GPU in your cutting-edge PC has less VRAM than, or the same as (10GB of GPU-optimized memory in the Xbox Series X), what your console is dedicating to its GPU, there's a problem, because you're going to want expanded headroom to make up for the fact that DirectStorage will not get as much use on PC as it will on Xbox, where it's an absolute given. On PC, it's an optional feature for only the latest generation of cards and only for users with the absolute fastest NVMe drives (which most people won't have), which means it won't be widely used for years to come. Look how long RTX adoption took and is taking.

So yeah. Having more VRAM makes your $700 investment last longer. Like 1080 Ti levels of lasting. NVIDIA thinks they can convince enough people to buy more cards with less memory, which is why these early launch cards are going without the extra memory. It'll look fine right now, but by the end of next year people will start feeling the pinch, though it won't be until 2022 that they really notice. If you buy your card and are fine replacing it by the time games are being fully designed for the Xbox Series X and PS5 instead of the Xbox One X and PS4, then buy the lower-memory cards and rock on.

I don't buy a GPU but once in a while, so I'll be waiting for the real cards to come. I don't want to get stuck having to buy a new card sooner than I'd like because I was impatient.
Posted on Reply
#60
Mussels
Freshwater Moderator
Edit: I'm no W1zzard, but I do have a crapton of GPUs in the house I play around with to stave off boredom: 750 Ti 2GB, RX 470 4GB (modded to 580 4GB via BIOS flash), RX 580 8GB, 780 3GB (RIP, I set it on fire), 970 4GB, 980 4GB, 1070 Ti, 1080 8GB on a 280mm AIO, and soon a 3080 10GB. Any time I build a new rig, I start with one of the easier-to-deal-with GPUs, do a few benchmarks as stress tests, THEN throw the intended GPU in. Saves accidentally killing an important piece of hardware.

In all honesty, I've got a 980 4GB that can still game (at lower settings) on my second 1440p screen (it's only 60 Hz, so it's usable) and it also handles VR just fine.
The RX 580 8GB? Sure, the VRAM doesn't run out, but the GPU can't deliver the performance needed for anything that benefits from the 8GB.

There are times when low-VRAM cards come out and it's just stupid to buy them (1060 3GB, which was a crap amount of RAM AND a weaker GPU than the 6GB version), and I wouldn't buy less than 8GB *now* in 2020, but just like CPU cores, the genuine requirements are very, very slow to creep up - my view is that by the time 10GB is no longer enough (3-4 years?), I'll just upgrade my GPU anyway.

In the case of the RTX 30 series and all this fancy new shit like hardware-accelerated streaming from NVMe, I think VRAM requirements aren't going to be as harsh for a few years... in supported games you get a speed boost to prevent texture-streaming stuttering, and at lower settings games will be coded for the average user anyway, still running a 1050 Ti 4GB on a basic SATA SSD.

Quite often the higher-VRAM versions have drawbacks, either slower VRAM or higher heat/wattage hurting overall performance... for a long, long time the higher-VRAM versions were always crippled.
I see no issue with turning textures from Ultra to High and upgrading twice as often (since I can afford to... I don't expect the 3080 20GB to be cost-efficient)
Posted on Reply
#61
Jism
The 1060 3GB was specially made for internet cafés that could get away with 1080p gaming at 60 fps or so. The chip was capable of doing that, but it wouldn't make sense to put more RAM on that card since the GPU can't cope with anything higher than 1080p.

Pretty much any GPU you buy today, whether that's a 4, 8, 10, 12, 16, or even 20GB model, still works the way it was intended - as a 1080p card, or a 4K gaming card. Graphics vendors aren't stupid. And games that put a load through the RAM mostly do so due to caching, which overall brings like a 1 to 3% performance increase versus streaming from RAM or even disk.

The PS5's SSD is so fast that it could load a complete terrain in less than a second. No need for a GPU with huge chunks of memory anymore; they just portion what they need and stream the rest off the SSD. And NVIDIA is cheating: their texture compression is so aggressive that there is a quality difference between the two brands, AMD and NVIDIA. In my opinion AMD just looks better overall, and that might explain why AMD cards tend to be a tad slower than NVIDIA's.

Posted on Reply
#62
Mussels
Freshwater Moderator
The PS5 uses these exact same technologies for its fast loading: hardware-accelerated decompression from NVMe (custom or not, it's still NVMe)

The PS5, the new Xbox, and DirectX are all just using their own branded version of the same concept: accelerate the shit out of decompressing the files, so we can bypass the previous limits given to us by mechanical hard drives
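As a rough illustration of why the decompression step is the part worth accelerating, compare single-core CPU decompression throughput against the ~5-7 GB/s a PCIe 4.0 NVMe drive can read (a toy sketch; numbers vary by machine):

```python
# One CPU core decompressing zlib data: typically well below NVMe read speed,
# which is the gap dedicated decompression hardware (the PS5's unit,
# DirectStorage/RTX IO on PC) is meant to close.
import os
import time
import zlib

raw = os.urandom(8 * 1024 * 1024) + bytes(56 * 1024 * 1024)  # ~64 MB, partly compressible
blob = zlib.compress(raw, 6)

t0 = time.perf_counter()
out = zlib.decompress(blob)
dt = time.perf_counter() - t0
assert out == raw                                             # lossless round trip
print(f"decompressed {len(out) / 1e6:.0f} MB in {dt:.3f}s "
      f"= {len(out) / 1e9 / dt:.2f} GB/s on one core")
```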
Posted on Reply
#63
rtwjunkie
PC Gaming Enthusiast
HisDivineOrderI guarantee more games will use that extra VRAM much more quickly than they start using DirectStorage,
“Will use” is not the same thing as NEEDING it. Just because game devs schedule VRAM to be filled when it's available doesn't mean that amount of VRAM is necessary to run the game.

Lazy devs have people so confused and bamboozled, and NVIDIA is more than willing to create cards for this currently perceived need for large amounts of VRAM. They will make lots more profit off this perceived need as well.

By the time more than 10-12GB of VRAM is needed, the GPUs themselves will be obsolete.
Posted on Reply
#64
Mussels
Freshwater Moderator
I mean, if 10GB weren't enough for 4K gaming, W1zz would have discovered that in his review.

For 8K I can imagine we'd need more, but these are not 8K cards - the 3090 can scrape by for that with DLSS, but that's really running at a lower res anyway... oh, that means my 10GB will be fine for even longer, wooo.


The 3070 16GB will be a joke of a card; the VRAM will be totally pointless unless they use it to speed up loading times in Fortnite or some other niche gimmick.
Posted on Reply
#65
jesdals
I like that NVIDIA gives out this info now, because it's going to mess with the scalpers - how hard is it going to be to make that 50+ profit now?
Posted on Reply
#66
Vayra86
rbgcExtra memory is always nice to have, but it would increase the price of the graphics card, so we need to find the right margins for our shareholders, and planned obsolescence is just a bonus!
-
FTFY
Old games don't set the norm, and this still doesn't explain why the two previous gens had more. Didn't that same game exist during the release of the 2080 Ti? Precisely.

If you expect less than 4-5 years of solid gaming off this hardware, you're probably doing it wrong, too.
Posted on Reply
#67
nguyen
Vayra86FTFY
Old games don't set the norm, and this still doesn't explain why the two previous gens had more. Didn't that same game exist during the release of the 2080 Ti? Precisely.

If you expect less than 4-5 years of solid gaming off this hardware, you're probably doing it wrong, too.
The 980 Ti was released 5.5 years ago with 6GB of VRAM and is still very capable of 1440p gaming today.
Sounds like lowering the texture details from Ultra to High is too hard for some people :D. Also, those exact same people would complain about performance per dollar; imagine what a useless amount of VRAM would do to the perf/USD...
Posted on Reply
#68
Valantar
HisDivineOrderI guarantee more games will use that extra VRAM much more quickly than they start using DirectStorage, especially NVIDIA's custom add-on to it that requires explicit programming to use fully. Games today use 6-8GB of RAM regularly, and they're mostly ports from consoles with 8GB of total system RAM. That 6-8 is what happens when you have the consoles' 6 for the GPU and 2 for the system, plus some change for 4K-resolution textures.

This generation, it's going to be consoles with 16GB, with probably 4-6GB for the system and 10-12GB-ish for the GPU. When the GPU in your cutting-edge PC has less VRAM than, or the same as (10GB of GPU-optimized memory in the Xbox Series X), what your console is dedicating to its GPU, there's a problem, because you're going to want expanded headroom to make up for the fact that DirectStorage will not get as much use on PC as it will on Xbox, where it's an absolute given. On PC, it's an optional feature for only the latest generation of cards and only for users with the absolute fastest NVMe drives (which most people won't have), which means it won't be widely used for years to come. Look how long RTX adoption took and is taking.

So yeah. Having more VRAM makes your $700 investment last longer. Like 1080 Ti levels of lasting. NVIDIA thinks they can convince enough people to buy more cards with less memory, which is why these early launch cards are going without the extra memory. It'll look fine right now, but by the end of next year people will start feeling the pinch, though it won't be until 2022 that they really notice. If you buy your card and are fine replacing it by the time games are being fully designed for the Xbox Series X and PS5 instead of the Xbox One X and PS4, then buy the lower-memory cards and rock on.

I don't buy a GPU but once in a while, so I'll be waiting for the real cards to come. I don't want to get stuck having to buy a new card sooner than I'd like because I was impatient.
The Xbox Series X has 16 GB of RAM, of which 2.5GB is reserved for the OS and the remaining 13.5GB is available for software. 10GB of those 13.5 are of the full-bandwidth (560GB/s?) variety, with the remaining 3.5GB being slower due to that console's odd two-tiered RAM configuration. That (likely) means that games will at the very most use 10GB of VRAM, though the split between game RAM usage and VRAM is very likely not going to be 3.5:10; those would be edge cases at the very best. Sony hasn't disclosed this kind of data, but given that the PS5 has a less powerful GPU, it certainly isn't going to need more VRAM than the XSX.

That might be seen as an indication that a more powerful GPU might need more than 10GB of RAM, but then you need to remember that consoles are developed for 5-7-year life cycles, not 3-year ones like PC dGPUs. 10GB on the 3080 is going to be more than enough, even if you use it for more than three years. Besides, VRAM allocation (which is what all software reads as "VRAM use") is not the same as VRAM that's actually being used to render the game. Most games have aggressively opportunistic streaming systems that pre-load data into VRAM in case it's needed. The entire point of DirectStorage is to reduce the amount of unnecessarily loaded data, which then translates to a direct drop in "VRAM use". Sure, it also frees up more VRAM to be actually put to use (say, for even higher-resolution textures), but the chances of that becoming a requirement for games in the next few years are essentially zero.
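As an aside, the allocation number that monitoring tools report is easy to read out yourself, e.g. via NVML's Python bindings. A minimal sketch, assuming the pynvml package is installed; note that the "used" figure is reserved memory and says nothing about how much of it the renderer actually touches per frame:

```python
# Query VRAM allocation via NVML (pip install pynvml). The "used" value is
# what monitoring tools display as "VRAM use" - allocation, not actual use.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total:     {info.total / 2**30:5.1f} GiB")
print(f"allocated: {info.used / 2**30:5.1f} GiB")   # reported as 'VRAM use'
print(f"free:      {info.free / 2**30:5.1f} GiB")
pynvml.nvmlShutdown()
```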

Also, that whole statement about "I guarantee more games will use that extra VRAM much more quickly than they start using DirectStorage, especially NVIDIA's custom add-on to it that requires explicit programming to use fully" does not compute. DirectStorage is an API, so obviously you need to put in the relevant API calls and program for it for it to work. That's how APIs work, and it has zero to do with "NVIDIA's custom add-on to it" - RTX IO is, from all we know, a straightforward implementation of DS. Anything else would be pretty stupid of them, given that DS is in the XSX and will as such be in most games made for that platform in the near future, including PC ports. For NVIDIA to force additional programming on top of this would make no sense, and it would likely place them at a competitive disadvantage given the likelihood that AMD will be adding DS compatibility to their new GPUs as well...
nguyenThe 980 Ti was released 5.5 years ago with 6GB of VRAM and is still very capable of 1440p gaming today.
Sounds like lowering the texture details from Ultra to High is too hard for some people :D. Also, those exact same people would complain about performance per dollar; imagine what a useless amount of VRAM would do to the perf/USD...
Some people apparently see it as deeply problematic when a GPU that could otherwise deliver a cinematic ~24fps instead delivers 10 due to a VRAM limitation. Oh, I know, there have been cases where the FPS could have been higher - even in playable territory - if it weren't for the low amount of VRAM, but that's exceedingly rare. In the vast majority of cases, VRAM limitations kick in at a point where the GPU is already delivering sub-par performance and you need to lower settings anyway. But apparently that's hard for people to accept, as you say. More VRAM has for a decade or so been the no-benefit upsell of the GPU business. It's really about time people started seeing through that crap.
Posted on Reply
#69
Vayra86
nguyenThe 980 Ti was released 5.5 years ago with 6GB of VRAM and is still very capable of 1440p gaming today.
Sounds like lowering the texture details from Ultra to High is too hard for some people :D. Also, those exact same people would complain about performance per dollar; imagine what a useless amount of VRAM would do to the perf/USD...
Yes, there are many, many examples of well-balanced GPUs for their time. There are also examples of those that weren't as well balanced. A slew of 2GB GPUs is among them, right around when the 970 released with 4GB. And even that was problematic: NVIDIA needed a few driver updates to funnel most activity towards the 'faster' 3.5GB, as some examples at the time showed with early Maxwell drivers. The 980 had no such issues.

To stick with that time frame, the Fury X (4GB) today is a great example of a card - despite its super-fast HBM and bandwidth - that sucks monkey balls in VRAM-intensive games. The card has lost performance over time. Substantially. And it performs worse relative to the 980 Ti today than it did at the time of launch.

The same thing is what I am seeing here. We're looking at a very good perf jump this gen, while the VRAM is all over the place. Turing already cut some things back, and this gen we not only remained stagnant but actually lost VRAM relative to raw performance. I'm not sure about your crystal ball, but mine is showing me a jump similar to the 2GB > 4GB generational one, where the lower-VRAM cards are going to fall off faster. That specific jump also made us realize the 7970 with its 3GB was pretty damn future-proof for its day, remember... A 10GB card with performance well above the 2080 Ti, especially at 4K, should simply have more than that. I don't care what a PR dude from NVIDIA thinks about it. I do care about the actual specs of actual consoles launching and doing pseudo-4K with lots of internal scaling. The PC will be doing more than that, but makes do with slightly less VRAM? I don't follow that logic and I'm not buying it either - especially not at a price tag of 700+.

To each his own. Let's revisit this in 4-5 years' time. All of this is even without considering the vast amount of other stuff people can do on a PC with their GPUs to tax VRAM further within the same performance bracket, most notably modding. Texture mods and added assets and effects will rapidly eat up those precious GBs. I like my GPUs capable enough to have that freedom.
Valantarconsoles are developed for 5-7-year life cycles, not 3-year ones like PC dGPUs.
There you have it. Who decided a PC dGPU is developed for a 3-year life cycle? It certainly wasn't us. They last double that time without any hiccups whatsoever, and even then hold resale value - especially the high end. I'll take 4-5 if you don't mind. The 1080 I'm running now makes 4-5 just fine, and then some. The 780 Ti I had prior did similar, and they both had life in them still.

Two GPU upgrades per console gen is utterly ridiculous and unnecessary, since we all know the real jumps happen with real console-gen updates.
Posted on Reply
#70
Parn
A 3070S with 16GB and a fully enabled GA104 would be more interesting to me, as long as its price remains the same as the vanilla 3070.
Posted on Reply
#71
Vya Domus
With every generation of consoles, memory requirements ballooned: it happened with the PS3/360, and it happened again with the PS4/Xbox One. Why people are convinced that it's not going to happen this time around is beyond me.

NVIDIA's marketing is top notch; of course they would never admit that VRAM is going to be a limitation in some regards, and they'll provide sensible-sounding arguments. But then again, these are the same people who told everyone they believed unified shaders were not the future for high-performance GPUs, back when ATI introduced them for the first time.
nguyenHere is how the GTX 980 performs in FS 2020, a game that will fill any VRAM available
It won't do that at 1080p, nice diversion.



The 980 or any other 4GB card isn't even included; you can guess why. You should have also read what they said:
1080p ultra is the last setting where it even remotely made sense to test all of Nvidia's x80 model graphics cards, so let's do a quick look at generational performance again. The GTX 780 struggles mightily, only putting up 16 fps. GTX 980 is also held back by its limited VRAM, delivering 27% more performance than the 780 but still only reaching 21 fps. The jump to the 1080 with 8GB is much larger now: nearly double the performance (89% faster) at 39 fps.
The evidence that VRAM is a real limiting factor is everywhere; you just have to stop turning a blind eye to it.
Posted on Reply
#72
londiste
Vya DomusWith every generation of consoles, memory requirements ballooned: it happened with the PS3/360, and it happened again with the PS4/Xbox One. Why people are convinced that it's not going to happen this time around is beyond me.
Ballooned? This generation is decidedly different when it comes to ballooning memory size.
- Xbox: 64 MB > Xbox 360: 512MB > Xbox One/X: 8/12GB > Xbox Series S/X: 10/16GB (8/10GB fast RAM)
- PS: 2+1MB > PS2: 32+4MB > PS3: 256+256MB > PS4: 8GB (+256MB/1GB) > PS5: 16GB
In percentages:
- Xbox: +700% > +1500% > +25%
- PS: +1100% > +1322% > +1550% > +93%
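A quick sanity check of those figures (sizes in MB; the last Xbox step compares the One's 8 GB to the Series S's 10 GB, and the final PS step comes out to +94% before the rounding used above):

```python
# Generational console RAM growth. Assumptions: the Xbox line ends with the
# 8GB (One) -> 10GB (Series S) step; the PS4 figure includes its +256MB.
xbox = [64, 512, 8 * 1024, 10 * 1024]
ps = [3, 36, 512, 8 * 1024 + 256, 16 * 1024]
for name, sizes in (("Xbox", xbox), ("PS", ps)):
    steps = " > ".join(f"+{(b - a) / a:.0%}" for a, b in zip(sizes, sizes[1:]))
    print(f"{name}: {steps}")
# Xbox: +700% > +1500% > +25%
# PS:   +1100% > +1322% > +1550% > +94%
```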
Posted on Reply
#73
Mussels
Freshwater Moderator
Damnit, all this talk of VRAM compression makes me wanna ask W1zzard to do generational testing with the biggest VRAM card of each gen, once Big Navi and the 3090 are out, to see:

1. How much gen-on-gen improvement there is in each camp
2. How much VRAM he can eat out of a 24GB card
3. What it takes to finally make him cry
Posted on Reply
#74
Dyatlov A
InVasManiIt's not listed, but I could see an RTX 3060S 16GB being possible as well eventually. That probably depends a lot on the memory controller bandwidth, but knowing NVIDIA, if they can get some better-binned GDDR chips down the road they'd slap on the extra RAM density and faster chips and start selling them to fill whatever gaps they can.
Why would you need more than 10GB of memory?
Posted on Reply
#75
nguyen
Vya DomusWith every generation of consoles, memory requirements ballooned: it happened with the PS3/360, and it happened again with the PS4/Xbox One. Why people are convinced that it's not going to happen this time around is beyond me.

NVIDIA's marketing is top notch; of course they would never admit that VRAM is going to be a limitation in some regards, and they'll provide sensible-sounding arguments. But then again, these are the same people who told everyone they believed unified shaders were not the future for high-performance GPUs, back when ATI introduced them for the first time.
It won't do that at 1080p, nice diversion.

The 980 or any other 4GB card isn't even included; you can guess why. You should have also read what they said:
The evidence that VRAM is a real limiting factor is everywhere; you just have to stop turning a blind eye to it.
And somehow lowering the detail from Ultra to High is too hard for you?
Also, you can see the 2060 6GB being faster than the 1080 8GB there, even at 4K Ultra. NVIDIA improves its memory compression algorithm every generation, so 8GB of VRAM on Ampere does not behave the same way as 8GB of VRAM on Turing or Pascal (AMD is even further off).


Just look at the VRAM usage of the 2080 Ti vs the 3080: the 3080 always uses less VRAM. That's how NVIDIA's memory compression works...

I would rather have a hypothetical 3080 Ti with 12GB of VRAM on a 384-bit bus than 20GB of VRAM on a 320-bit bus - bandwidth over useless capacity any day. At least higher VRAM bandwidth will instantly give higher performance in today's games, not 5 years down the line when these 3080s can be had for 200 USD...
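For context, that bandwidth gap is simple arithmetic (a sketch assuming both hypothetical cards keep the 3080's 19 Gbps GDDR6X; actual clocks for any such card are unknown):

```python
# Peak memory bandwidth = bus width (bits) x per-pin rate (Gbps) / 8.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gb_s(320, 19))   # 320-bit (10/20 GB): 760 GB/s
print(bandwidth_gb_s(384, 19))   # 384-bit (12/24 GB): 912 GB/s (+20%)
```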
Posted on Reply