Saturday, September 19th 2020

NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

A GIGABYTE webpage meant for redeeming the RTX 30-series Watch Dogs Legion + GeForce NOW bundle lists the graphics cards eligible for the offer, including a large selection based on unannounced RTX 30-series GPUs. Among these are references to a "GeForce RTX 3060" with 8 GB of memory and, more interestingly, a 20 GB variant of the RTX 3080. The list also confirms the RTX 3070S with 16 GB of memory.

The RTX 3080 launched last week comes with 10 GB of memory across a 320-bit memory interface, using 8 Gbit memory chips, while the RTX 3090 reaches its 24 GB capacity by piggy-backing two of these chips per 32-bit channel (chips on either side of the PCB). It's conceivable that the RTX 3080 20 GB will adopt the same method. There is a vast price gap between the RTX 3080 10 GB and the RTX 3090, which NVIDIA could look to fill with the 20 GB variant of the RTX 3080. Whether you should wait for the 20 GB variant or pick up the 10 GB variant right now will depend on the performance gap between the RTX 3080 and RTX 3090. We'll answer this question next week.
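For those who want the arithmetic behind those configurations, here is a quick sketch (the 20 GB figure is, of course, still an assumption based on the clamshell layout described above):

```python
def vram_gb(bus_width_bits, gbit_per_chip, clamshell=False):
    """Total VRAM from bus width and chip density.

    Each GDDR6X chip sits on its own 32-bit channel; "clamshell" mode adds a
    second chip on the back of the PCB sharing the same channel.
    """
    chips = (bus_width_bits // 32) * (2 if clamshell else 1)
    return chips * gbit_per_chip / 8  # 8 Gbit = 1 GB

print(vram_gb(320, 8))                  # RTX 3080: 10 chips x 1 GB = 10 GB
print(vram_gb(384, 8, clamshell=True))  # RTX 3090: 24 chips x 1 GB = 24 GB
print(vram_gb(320, 8, clamshell=True))  # rumored RTX 3080 20 GB (assumption)
```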
Source: VideoCardz

157 Comments on NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

#76
basco
Would really like to know a GTX 780 3 GB versus 6 GB outcome.
I know it's old tech, but I think if I can get a cheap 6 GB card...
Posted on Reply
#77
Vya Domus
nguyenAnd somehow lowering the detail from Ultra to High is too hard for you ?
On a mid-range card, or a high-end one from 6 years ago? No, it wouldn't be. Having to do that a year or two from now with a high-end card you bought today would be kind of pathetic.
nguyenI would rather have a hypothetical 3080 Ti with 12GB VRAM on 384 bit bus rather than 20GB VRAM on 320bit bus, bandwidth over useless capacity anyday.
Wouldn't you rather stick with the 10GB? I get the sense that all this extra VRAM would be useless, so why would you want something with more memory?
nguyenJust look at the VRAM usage between 2080 Ti vs 3080, the 3080 always use less VRAM, that how Nvidia memory compression works...
That's not how memory compression works at all; the memory you see there is allocated memory. The compression takes place internally on the GPU at some level. In other words, the 1 GB that was allocated has to remain visible and addressable as 1 GB at all times, otherwise it would break the application, so the effects of the compression are not visible to the outside application. The compression is only applied to color data anyway, which is only a portion of what you need to render a scene, so a lot of the data isn't actually compressible.

That being said, the reason more memory is used on the 2080 Ti is that the more memory is available, the more allocations are going to take place, similar to how Windows reports higher memory usage when more RAM is available. Memory allocation requests are queued up and may not take place at the exact time they are issued by the application.
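To illustrate the allocated-versus-used distinction, here is a minimal sketch using the NVML Python bindings (the pynvml package, assuming it is installed); note that it only reports driver-tracked allocations and says nothing about internal compression:

```python
# requires the pynvml package (pip install pynvml)
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)

# These figures are *allocations* tracked by the driver. Lossless color
# compression happens inside the GPU and is never reflected here, which is
# why monitoring tools cannot show "compressed" VRAM usage.
print(f"total: {info.total / 2**30:.1f} GiB")
print(f"used (allocated): {info.used / 2**30:.1f} GiB")
print(f"free: {info.free / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```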
Posted on Reply
#78
ppn
The 3080 doesn't have 5 years before it can be had for $200. The GTX 780 3GB was humiliated by the GTX 970 only 500 days later, not to mention the 780 Ti, which didn't last 300 days before getting slashed in half.

Even if NVIDIA released a 3080 with a 384-bit bus, I still wouldn't buy it. Save the money for later, when we can have a decent 1008 GB/s 12GB card on 6nm EUV that clocks 30% higher.
Posted on Reply
#79
AusWolf
ValantarThe Xbox Series X has 16Gb of RAM, of which 2.5GB is reserved for the OS and the remaining 13.5GB is available for software. 10GB of those 13.5 are of the full bandwidth (560GB/s?) variety, with the remaining 3.5GB being slower due to that console's odd two-tiered RAM configuration. That (likely) means that games will at the very most use 10GB of VRAM, though the split between game RAM usage and VRAM is very likely not going to be 3.5:10. Those would be edge cases at the very best. Sony hasn't disclosed this kind of data, but given that the PS5 has a less powerful GPU, it certainly isn't going to need more VRAM than the XSX.
Comparing consoles to PCs is like comparing apples and pears. For one, consoles use software totally differently than PCs; they often come with locked resolutions and frame rates, etc. For two, no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think that's ever going to change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate hardware specifications on consoles, whereas on PC you have the freedom to use whatever settings you like.

Example: playing The Witcher 3 on an Xbox One with washed out textures at 900p is not the same as playing it on a similarly aged high-end PC with ultra settings at 1080p or 1440p.
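As a side note on the bandwidth figure in the quote above, the 560 GB/s number falls straight out of the bus arithmetic; a quick sketch, assuming the publicly stated 14 Gbps GDDR6 pin speed and the 320-bit/192-bit split of the console's two memory pools:

```python
def bandwidth_gb_s(bus_width_bits, pin_speed_gbps):
    """Peak theoretical bandwidth of a GDDR memory bus in GB/s."""
    return bus_width_bits * pin_speed_gbps / 8  # 8 bits per byte

print(bandwidth_gb_s(320, 14))  # 560.0 GB/s - the fast 10 GB pool
print(bandwidth_gb_s(192, 14))  # 336.0 GB/s - the slower 6 GB pool
```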
Posted on Reply
#80
nguyen
Vya DomusOn a mid range card or a high end one from 6 years ago ? No, it wouldn't be. Having to do that with a high end card that you bought today a year or two from now, it would be kind of pathetic.
Wouldn't you rather stick with the 10GB ? I get the sense that all this extra VRAM would be useless, so why would you want something with more memory ?
For $100-200 more I would get a hypothetical 3080 Ti with 12GB VRAM on a 384-bit bus, not the 3080 with 20GB. However, this 3080 Ti would jeopardize 3090 sales, so I'm guessing NVIDIA would release it much later.

The 3080 PCB already has the space for 2 extra VRAM modules


I had to lower some details on pretty much top-of-the-line GPUs like the Titan X Maxwell, 1080 Ti and 2080 Ti during the first year of owning them, and not even at 4K. The Ultra detail setting is pretty much only used for benchmarks; in real life, people tend to prefer higher FPS over IQ you can't distinguish unless you zoom 4x into a recording LOL.


The new norm for 2021 onward should be RT reflections + High details; let's just hope so :D
Posted on Reply
#81
londiste
AusWolfFor one: consoles use software totally differently than PCs, they often come with locked resolutions and frame rates, etc. For two: no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think it's ever gonna change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate to hardware specifications on consoles, whereas on PC, you have the freedom to use whatever settings you like.
That used to be true maybe even two generations ago, but today consoles practically are PCs: x86 CPUs, GPUs on mainstream architectures, common buses, RAM, etc. The XB1/XBX and the upcoming XBSX/XBSS run on a Windows kernel and DirectX APIs, while Sony runs a FreeBSD-based OS with partially custom APIs. If there were no artificial restrictions to keep the console garden walls in place, you and I could run all that on our hardware. Resolutions and frame rates, as well as the game settings, are up to the developer; the only benefit for consoles is the predefined spec - optimizing a game for a specific machine or two is much, much easier than creating low-medium-high-ultra settings that work well across swaths of different hardware.

Console performance has been above a midrange PC of the same era in the past - the PS3/Xbox 360 were at the level of a (very) high-end PC at launch. The PS4/XB1 were (or at least seemed to be) an outlier with decidedly mediocre hardware. The XBSX/PS5 are at the level of high-end hardware as of half a year ago, when their specs were finalized, which is back to normal. The XBSS is the weird red-headed stepchild that gets the bare minimum, and we'll have to see how it fares. The PC hardware performance ceiling has been climbing for a while now. Yes, high-end stuff is getting incredibly expensive and worse and worse in terms of bang-per-buck, but it is getting relatively more and more powerful as well.
Posted on Reply
#82
TumbleGeorge
Consoles... Unreal Engine 5? The next "super-ultimate" DX? 8K, NVIDIA Hopper, AMD RDNA3/4, GDDR7, HBM3(+)/4... all out to get your money!
There are many reasons to consign the RTX 30xx, with its "small" or "big" VRAM sizes, to history, regardless of whether and to what extent those reasons are justified.
Posted on Reply
#83
Valantar
AusWolfComparing consoles to PCs is like apples and pears. For one: consoles use software totally differently than PCs, they often come with locked resolutions and frame rates, etc. For two: no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think it's ever gonna change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate to hardware specifications on consoles, whereas on PC, you have the freedom to use whatever settings you like.

Example: playing The Witcher 3 on an Xbox One with washed out textures at 900p is not the same as playing it on a similarly aged high-end PC with ultra settings at 1080p or 1440p.
I wasn't the one starting with the console-to-PC comparisons here, I was just responding. Besides, comparing the upcoming consoles to previous generations is ... tricky. Not only are they architecturally more similar to PCs than ever (true, the current generation are also X86+AMD GPU, but nobody ever gamed on a Jaguar CPU), but unlike last time both console vendors are investing significantly in hardware (the PS4 at launch performed about the same as a lower midrange dGPU, at the time a ~$150 product; the upcoming consoles will likely perform similarly to the 2070S (and probably 3060), a $500 product). No previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that. Besides, did I ever say that playing a game on a console at one resolution was the same as playing the same game on a PC at a different one? Nice straw man you've got there. Stop putting words in my mouth. The point of making console comparisons this time around is that there has never been a closer match between consoles and PCs in terms of actual performance than what we will have this November.
Vayra86There you have it. Who decided PC dGPU is developed for a 3 year life cycle? It certainly wasn't us. They last double that time without any hiccups whatsoever, and even then hold resale value. Especially the high end. I'll take 4-5 if you don't mind. The 1080 I'm running now, makes 4-5 just fine, and then some. The 780ti I had prior, did similar, and they both had life in them still.

Two GPU upgrades per console gen is utterly ridiculous and unnecessary, since we all know the real jumps happen with real console gen updates.
Well, don't blame me, I didn't make GPU vendors think like this. Heck, I'm still happily using my Fury X (though it is starting to struggle enough at 1440p that it's time to retire it, but I'm very happy with its five-year lifespan). I would say this stems from being in a highly competitive market that has historically had dramatic shifts in performance in relatively short time spans, combined with relatively open software ecosystems (the latter of which consoles have avoided, thus avoiding the short term one-upmanship of the GPU market). That makes software a constantly moving target, and thus we get the chicken-and-egg-like race of more demanding games requiring faster GPUs allowing for even more demanding games requiring even faster GPUs, etc., etc. Of course, as the industry matures those time spans are growing, and given slowdowns in both architectural improvements and manufacturing nodes any high end GPU from today is likely to remain relevant for at least five years, though likely longer, and at a far higher quality of life for users than previous ones.

That being said, future-proofing by adding more VRAM is a poor solution. Sure, the GPU needs to have enough VRAM, there is no question about that. It needs an amount of VRAM and a bandwidth that both complement the computational power of the GPU, otherwise it will quickly become bottlenecked. The issue is that VRAM density and bandwidth both scale (very!) poorly with time - heck, base DRAM die clocks haven't really moved for a decade or more, with only more advanced ways of packing more data into the same signal increasing transfer speeds. But adding more RAM is very expensive too - heck, the huge framebuffers of current GPUs are a lot of the reason for high end GPUs today being $700-1500 rather than the $200-400 of a decade ago - die density has just barely moved, bandwidth is still an issue, requiring more complex and expensive PCBs, etc. I would be quite surprised if the BOM cost of the VRAM on the 3080 is below $200. Which then begs the question: would it really be worth it to pay $1000 for a 3080 20GB rather than $700 for a 3080 10GB, when there would be no perceptible performance difference for the vast majority of its useful lifetime?
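As a back-of-the-envelope sketch of that BOM argument, using the rough ~$20/GB GDDR6X figure mentioned further down in this thread (actual contract pricing is not public, so treat these numbers as assumptions):

```python
price_per_gb = 20      # USD, rough assumption for GDDR6X
base_gb, doubled_gb = 10, 20

base_bom = base_gb * price_per_gb
extra_bom = (doubled_gb - base_gb) * price_per_gb
print(f"VRAM BOM of the 10 GB card: ~${base_bom}")   # ~$200
print(f"Added VRAM BOM for 20 GB:   ~${extra_bom}")  # another ~$200
# Roughly consistent with a $700 card versus a hypothetical ~$1000 one once
# margins and the extra board complexity are layered on top.
```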

The last question is particularly relevant when you start to look at VRAM usage scaling and compare the memory sizes in question to previous generations where buying the highest VRAM SKU has been smart. Remember, scaling on the same bus width is either 1x or 2x (or potentially 4x I guess), so like we're discussing here, the only possible step is to 20GB - which brings with it a very significant cost increase. The base SKU has 10GB, which is the second highest memory count of any consumer-facing GPU in history. Even if it's comparable to the likely GPU-allocated amount of RAM on upcoming consoles, it's still a very decent chunk.

On the other hand, previous GPUs with different memory capacities have started out much lower - 3/6GB for the 1060, 4/8GB for a whole host of others, and 2/4GB for quite a few if you look far enough back. The thing here is: while the percentage increases are always the same, the absolute amount of VRAM now is massively higher than in those cases - the baseline we're currently talking about is higher than the high end option of the previous comparisons.

What does that mean? For one, you're already operating at such a high level of performance that there's a lot of leeway for tuning and optimization. If a game requires 1GB more VRAM than what's available, lowering settings to fit that within a 10GB framebuffer will be trivial. Doing the same on a 3GB card? Pretty much impossible. A 2GB reduction in VRAM needs is likely more easily done on a 10GB framebuffer than a .5GB reduction on a 3GB framebuffer. After all, there is a baseline requirement that is necessary for the game to run, onto which additional quality options add more. Raising the ceiling for maximum VRAM doesn't as much shift the baseline requirement upwards (though that too creeps upwards over time) as it expands the range of possible working configurations. Sure, 2GB is largely insufficient for 1080p today, but 3GB is still fine, and 4GB is plenty (at settings where GPUs with these amounts of VRAM would actually be able to deliver playable framerates). So you previously had a scale from, say, .5-4GB, then 2-8GB, and in the future maybe 4-12GB. Again, looking at percentages is misleading, as it takes a lot of work to fill those last few GB. And the higher you go, the easier it is to ease off on a setting or two without perceptibly losing quality. I.e. your experience will not change whatsoever, except that the game will (likely automatically) lower a couple of settings a single notch.

Of course, in the time it will take for 10GB to become a real limitation at 4k - I would say at minimum three years - the 3080 will likely not have the shader performance to keep up anyhow, making the entire question moot. Lowering settings will thus become a necessity no matter the VRAM amount.

So, what will you then be paying for with a 3080 20GB? Likely 8GB of VRAM that will never see practical use (again, it will more than likely have stuff allocated to it, but it won't be used in gameplay), and the luxury of keeping a couple of settings pegged to the max rather than lowering them imperceptibly. That might be worth it to you, but it certainly isn't for me. In fact, I'd say it's a complete waste of money.
Posted on Reply
#84
AusWolf
londisteThat used to be true maybe even 2 generations ago but today consoles practically are PC's. x86 CPUs, GPUs on mainstream architectures, common buses, RAM etc. XB1/XBX and upcoming XBSX/XBSS are running on Windows kernel and DirectX APIs, Sony is running FreeBSD-based OS with partially custom APIs. If there were not artificial restrictions to keep the console garden walls in place, me and you could run all that on our hardware. Resolutions and frame rates, as well as the game setting are up to the developer and only benefit for consoles is the predefined spec - optimizing game to a specific machine or two is much much easier than creating the low-medium-high-ultra settings that work well across swaths of different hardware.
This is true.
londisteConsole performances have been above midrange PC of the same era in the past - PS3/Xbox360 were at the level of (very) high-end PC at launch. PS4/XB1 were (or at least seemed to be) an outlier with decidedly mediocre hardware. XBSX/PS5 are at the level of high-end hardware as of half a year ago when their specs were finalized which is back to normal. XBSS is kind of the weird red-headed stepchild that gets bare minimum and we'll have to see how it fares. PC hardware performance ceiling has been climbing for a while now. Yes, high-end stuff is getting incredibly expensive and gets worse and worse in terms of bang-per-buck but it is getting relatively more and more powerful as well.
I disagree. To stay with my original example, The Witcher 3 (a game from 2015) ran on the Xbox One (a late-2013 machine) with reduced quality settings at 900p to keep frame rates acceptable. Back then, I had an AMD FX-8150 (no need to mention how bad that CPU was for gaming) and a Radeon HD 7970, which was AMD's flagship card in early 2012. I could still run the game at 1080p with ultra settings (except for HairWorks) at 40-60 FPS depending on the scene.

It's true that consoles get more and more powerful with every generation, but so do PCs, and I honestly can't see a $500 machine ever beating a $2,000 one. And thus, game devs will always have to impose limitations to make games run as smoothly on consoles as they do on high-end PCs, making the two basically incomparable, even in terms of VRAM requirements.
Posted on Reply
#85
londiste
ValantarNo previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that.
Like I said, the PS4/XB1 generation seems to be a fluke. We only need to look at the generation before that:
- The PS3's (Nov 2006) RSX is basically a hybrid of the 7800 GTX (June 2005, $600) and 7900 GTX (March 2006, $500).
- The Xbox 360's (Nov 2005) Xenos is an X1800/X1900 hybrid, with the X1800 XL (Oct 2005, $450) probably the closest match.

While more difficult to compare, the CPUs were high-end as well. Athlon 64 X2s came out in mid-2005, and the PowerPC-based CPUs in both consoles were pretty nicely multithreaded before that was a mainstream thing for PCs.
Posted on Reply
#86
ppn
The 3080 20GB has 1,536 more shaders and possibly SLI, so it's not the same card at all. 10GB is just perfect if you find the right balance, which varies from game to game. But you can't undervolt it below 0.95 V at the stock ~1960 MHz, and the overclocking headroom is nonexistent. Twice as fast as a 2070 at twice the power - a power hog. Fastest card, but if a 2070 can barely do 1440p/60, there's no way the 3080 does 4K/120; that's not possible outside low-texture esports games.
Posted on Reply
#87
AusWolf
ValantarI wasn't the one starting with the console-to-PC comparisons here, I was just responding. Besides, comparing the upcoming consoles to previous generations is ... tricky. Not only are they architecturally more similar to PCs than ever (true, the current generation are also X86+AMD GPU, but nobody ever gamed on a Jaguar CPU), but unlike last time both console vendors are investing significantly in hardware (the PS4 at launch performed about the same as a lower midrange dGPU, at the time a ~$150 product; upcoming consoles will likely perform similarly to the 2070S (and probably 3060), a $500 product). No previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that. Besides that, did I ever say that playing a game at on a console at one resolution was the same as playing the same game on a PC at a different one? Nice straw man you've got there. Stop putting words in my mouth. The point of making console comparisons this time around is that there has never been a closer match between consoles and PCs in terms of actual performance than what we will have this November.
I'm not putting words into your mouth, I know you weren't the one starting this train of thought. ;)

All I tried to say is that there's no point in drawing conclusions about VRAM requirements on PC based on how much RAM the newest Xbox and PlayStation have (regardless of who started the conversation). Game devs can always tweak settings to make games playable on consoles, while on PC you have the freedom to choose from an array of graphics settings to suit your needs.
Posted on Reply
#88
Legacy-ZA
I hope the 3060Ti will have at least 12GB VRAM. :)
Posted on Reply
#89
londiste
AusWolfAll I tried to say is, there's no point in drawing conclusions regarding VRAM requirements on PC based on how much RAM the newest Xbox and Playstation have (regardless of who started the conversation). Game devs can always tweak settings to make games playable on consoles, while on PC you have the freedom to choose from an array of graphics settings to suit your needs.
VRAM requirements today depend most notably on one thing - the texture pool, usually exposed as the texture quality setting. Game assets such as textures are normally the same (or close enough) across different platforms. The texture pool nowadays is almost always dynamic: there is an allocated chunk of VRAM into which textures are constantly loaded and unloaded based on whatever scheme the dev deemed good enough.
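A toy sketch of what such a dynamic, budgeted texture pool looks like in principle (purely illustrative, not any real engine's streaming code; the class and names are made up):

```python
from collections import OrderedDict

class TexturePool:
    """Toy model of a budgeted, dynamically streamed texture pool.

    Textures are loaded on demand and the least-recently-used ones are
    evicted once the VRAM budget (in effect, the "texture quality" knob)
    is exceeded.
    """
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # texture name -> size in MB
        self.used_mb = 0

    def request(self, name, size_mb):
        if name in self.resident:                 # already resident: mark as recently used
            self.resident.move_to_end(name)
            return
        while self.used_mb + size_mb > self.budget_mb and self.resident:
            _, evicted_size = self.resident.popitem(last=False)
            self.used_mb -= evicted_size          # stream out the coldest texture
        self.resident[name] = size_mb
        self.used_mb += size_mb                   # stream in the requested texture

pool = TexturePool(budget_mb=6144)  # e.g. a 6 GB texture budget on a 10 GB card
pool.request("rock_albedo_4k", 85)
pool.request("npc_face_2k", 21)
```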
Posted on Reply
#90
Vayra86
londisteVRAM requirement today depends most notably on one thing - texture pool, usually exposed as texture quality setting. Game assets - such as textures - are normally the same (or close enough) across different platforms. Texture pool nowadays is almost always dynamic, there is an allocated bunch of VRAM where textures are constantly loaded into and unloaded based on whatever scheme dev deemed good enough.
It's a bit like how World of Warcraft evolved. No matter how silly you play, everyone can feel like their stuff runs like a boss and call it the real thing.
Posted on Reply
#91
efikkan
MusselsDamnit all this talk of VRAM compression makes me wanna ask w1zzard to do a generational testing with the biggest VRAM card of each gen, once we have big navi and the 3090 out to see

1. How much gen on gen improvement there is in each camp
2. how much VRAM he can eat out of a 24GB card
3. what it takes to finally make him cry
Be aware that the VRAM usage reported by the driver doesn't tell you everything, as the actual memory usage can vary a lot during a single frame. Temporary buffers are used a lot during multiple render passes, and are usually allocated once, but flushed and compressed during a single frame. Also, be aware that some games may allocate a lot of extra VRAM without it being strictly necessary.

The best way to find out whether you have enough VRAM is to look for symptoms of insufficient VRAM, primarily by measuring frame time consistency. Insufficient VRAM will usually cause significant stutter and sometimes even glitching.
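A minimal sketch of that kind of frame time analysis, assuming you have per-frame times exported from a capture tool (the 3x-average spike threshold is an arbitrary illustration, not a standard):

```python
import statistics

def frame_time_report(frame_times_ms):
    """Summarize a per-frame time capture to spot the stutter that
    running out of VRAM tends to cause."""
    avg = statistics.fmean(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)
    one_percent_low_fps = 1000 / statistics.fmean(worst[:max(1, len(worst) // 100)])
    spikes = [t for t in frame_times_ms if t > 3 * avg]  # crude stutter threshold
    return {
        "avg_fps": 1000 / avg,
        "1%_low_fps": one_percent_low_fps,
        "spike_count": len(spikes),
    }

# Hypothetical capture: mostly ~16.7 ms frames with a few 90+ ms hitches
sample = [16.7] * 500 + [90.0, 95.0, 110.0]
print(frame_time_report(sample))
```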
Posted on Reply
#92
lexluthermiester
Vya DomusHaving to do that with a high end card that you bought today a year or two from now, it would be kind of pathetic.
No, it wouldn't. Software devs often create titles that push the performance envelope further than is currently viable. Crysis, anyone? At a time when 1080p was becoming the standard, they released a game that would only run well at 720p with settings turned down, and that was on brand-new, top-shelf hardware. Even the very next generation of hardware struggled to run Crysis at 1080p@60 Hz. Things have not changed. The simple and universal rule is: if you're not getting the performance you desire, adjust your settings down until performance is acceptable to you. Everyone has a different idea of what "acceptable" actually is.

As for the debate that keeps repeating itself with every new generation of hardware: RAM. Folks, just because @W1zzard did not encounter any issues with 10GB of VRAM in the testing done at this time does not mean that 10GB will remain enough in the future (near or otherwise). Technology always progresses, and software often precedes hardware. Therefore, if you buy a card now with a certain amount of RAM and just go with the bog-standard option, you may find yourself coming up short on performance later. 10GB seems like a lot now, but then so did 2GB just a few years ago. Shortsighted planning always ends poorly. Just ask the people who famously said "No one will ever need more than 640K of RAM."

A 3060, 3070 or 3080 with 12GB, 16GB or 20GB of VRAM is not a waste. It is an option that offers a level of future proofing. For most people that is an important factor because they only ever buy a new GPU every 3 or 4 years. They need all the future-proofing they can get. And before anyone says "Future-proofing is a pipe-dream and a waste of time.", put a cork in that pie-hole. Future-proofing a PC purchase by making strategic choices of hardware is a long practiced and well honored buying methodology.
Posted on Reply
#93
ppn
12, 16 and 20GB are not going to make sense.
The 20GB card is $300 more expensive - for what? So that at some unclear point in the future, when the card would be obsolete anyway, you could be granted one more game that runs a little better.
The 16GB card's price is very close to the 10GB card's, so you have to trade 30% less performance for 6GB more. I mean, you are deliberately going to make a worse choice again because of some future uncertainty.
12GB: only AMD will make a 192-bit card at this point. The 3060 will be as weak as a 2070, so why put 12GB on it?
Posted on Reply
#94
lexluthermiester
ppn20GB is $300 more expensive
ppn16GB price is very close to the 10GB card
Rubbish! You don't and can't know any of that. Exact card models and prices have not been announced. Additionally, history shows those conclusions have no merit.

Example? The GTX 770. The 2GB version is pretty much unusable for modern games, but the 4GB versions are still relevant, as they remain playable because of the additional VRAM. Even though the GPU dies are the same, the extra VRAM makes all the difference, and it made a big difference then, too. The cost? A mere $35 extra over the 2GB version. The same has been true throughout the history of GPUs, regardless of who made them.
Posted on Reply
#95
Nkd
Of course they will. But people seem to forget that it's already a known fact that the 2GB chips won't be available until 2021. So we probably won't see these cards until January, or December at the earliest if they rush them.
Posted on Reply
#96
moproblems99
Vayra86Didnt that same game exist during the release of a 2080ti?
How much VRAM did the 2080, which this is a direct replacement for, have though? We really gained VRAM this gen.
Posted on Reply
#97
Minus Infinity
Big Navi isn't even released and they are panicking already. I would have thought this would be part of the mid-life update. Anyway, 10GB or 20GB, you won't be getting one soon.
Posted on Reply
#98
Valantar
lexluthermiesterA 3060, 3070 or 3080 with 12GB, 16GB or 20GB of VRAM is not a waste. It is an option that offers a level of future proofing. For most people that is an important factor because they only ever buy a new GPU every 3 or 4 years. They need all the future-proofing they can get. And before anyone says "Future-proofing is a pipe-dream and a waste of time.", put a cork in that pie-hole. Future-proofing a PC purchase by making strategic choices of hardware is a long practiced and well honored buying methodology.
That isn't really the question though - the question is whether the amount of VRAM will become a noticeable bottleneck in cases where shader performance isn't. And that, even though this card is obviously a beast, is quite unlikely. Compute requirements typically increase faster than VRAM requirements (if for no other reason that the amount of VRAM on common GPUs increases very slowly, forcing developers to keep VRAM usage somewhat reasonable), so this GPU is far more likely to be bottlenecked by its core and architecture rather than having "only" 10GB of VRAM. So you'll be forced to lower settings for reasons other than running out of VRAM in most situations. And, as I said above, with a VRAM pool that large, you have massive room for adjusting a couple of settings down to stay within the limitations of the framebuffer should such a situation occur.

Your comparison to the 770 is as such a false equivalency: that comparison must then also assume that GPUs in the (relatively near) future will have >10GB of VRAM as a minimum, as that is what would be required for this amount to truly become a bottleneck. The modern titles you speak of need >2/<4 GB of VRAM to run smoothly at 1080p. Even the lowest end GPUs today come in 4GB SKUs, and two generations back, while you did have 2 and 3GB low-end options, nearly everything even then was 4GB or more. For your comparison to be valid, the same situation must then be true in the relatively near future, only 2GB gets replaced with 10GB. And that isn't happening. Baseline requirements for games are not going to exceed 10GB of VRAM in any reasonably relevant future. VRAM is simply too expensive for that - it would make the cheapest GPUs around cost $500 or more - DRAM bit pricing isn't budging. Not to mention that the VRAM creep past 2GB has taken years. To expect a sudden jump of, say, >3x (assuming ~3GB today) in a few years? That would be an extremely dramatic change compared to the only relevant history we have to compare to.

Besides, you (and many others here) seem to be mixing up two similar but still different questions:
a) Will 10GB be enough to not bottleneck the rest of this GPU during its usable lifespan? (i.e. "can it run Ultra settings until I upgrade?") and
b) Will 10GB be enough to not make this GPU unusable in 2-3 generations? (i.e. "will this be a dud in X years?")

Question a) is at least worth discussing, and I would say "maybe not, but it's a limitation that can be easily overcome by changing a few settings (and gaming at Ultra is kind of silly anyhow), and at those settings you'll likely encounter other limitations beyond VRAM". Question b), which is what you are alluding to with your GTX 770 reference, is pretty much out of the question, as baseline requirements (i.e. "can it run games at all?") aren't going to exceed 10GB in the next decade no matter what. Will you be able to play at 4k medium-high settings at reasonable frame rates with 10GB of VRAM in a decade? No - that would be unprecedented. But will you be able to play 1080p or 1440p at those types of settings with 10GB of VRAM? Almost undoubtedly (though shader performance is likely to force you to lower settings - but not the VRAM). And if you're expecting future-proofing to keep your GPU relevant at that kind of performance level for that kind of time, your expectations of what is possible are fundamentally flawed. The other parts of the GPU will be holding you back far more than the VRAM in that scenario. If the size of your framebuffer is making your five-year-old high-end GPU run at a spiky 10fps instead of, say, 34, does that matter at all? Unless the game in question is extremely slow-paced, you'd need to lower settings anyhow to get a reasonable framerate, which will then in all likelihood bring you below the 10GB limitation.

I'm all for future-proofing, and I absolutely hate the shortsighted hypermaterialism of the PC building scene - there's a reason I've kept my current GPU for five years - but adding $2-300 (high end GDDR chips cost somewhere in the realm of $20/GB) to the cost of a part to add something that in all likelihood won't add to its longevity at all is not smart future-proofing. If you're paying that to avoid one bottleneck just to be held back by another, you've overpaid for an unbalanced product.
Posted on Reply
#99
Vayra86
ValantarThat isn't really the question though - the question is whether the amount of VRAM will become a noticeable bottleneck in cases where shader performance isn't. And that, even though this card is obviously a beast, is quite unlikely. Compute requirements typically increase faster than VRAM requirements (if for no other reason that the amount of VRAM on common GPUs increases very slowly, forcing developers to keep VRAM usage somewhat reasonable), so this GPU is far more likely to be bottlenecked by its core and architecture rather than having "only" 10GB of VRAM. So you'll be forced to lower settings for reasons other than running out of VRAM in most situations. And, as I said above, with a VRAM pool that large, you have massive room for adjusting a couple of settings down to stay within the limitations of the framebuffer should such a situation occur.

Your comparison to the 770 is as such a false equivalency: that comparison must then also assume that GPUs in the (relatively near) future will have >10GB of VRAM as a minimum, as that is what would be required for this amount to truly become a bottleneck. The modern titles you speak of need >2/<4 GB of VRAM to run smoothly at 1080p. Even the lowest end GPUs today come in 4GB SKUs, and two generations back, while you did have 2 and 3GB low-end options, nearly everything even then was 4GB or more. For your comparison to be valid, the same situation must then be true in the relatively near future, only 2GB gets replaced with 10GB. And that isn't happening. Baseline requirements for games are not going to exceed 10GB of VRAM in any reasonably relevant future. VRAM is simply too expensive for that - it would make the cheapest GPUs around cost $500 or more - DRAM bit pricing isn't budging. Not to mention that the VRAM creep past 2GB has taken years. To expect a sudden jump of, say, >3x (assuming ~3GB today) in a few years? That would be an extremely dramatic change compared to the only relevant history we have to compare to.

Besides, you (and many others here) seem to be mixing up two similar but still different questions:
a) Will 10GB be enough to not bottleneck the rest of this GPU during its usable lifespan? (i.e. "can it run Ultra settings until I upgrade?") and
b) Will 10GB be enough to not make this GPU unusable in 2-3 generations? (i.e. "will this be a dud in X years?")

Question a) is at least worth discussing, and I would say "maybe not, but it's a limitation that can be easily overcome by changing a few settings (and gaming at Ultra is kind of silly anyhow), and at those settings you'll likely encounter other limitations beyond VRAM". Question b), which is what you are alluding to with your GTX 770 reference, is pretty much out of the question, as baseline requirements (i.e. "can it run games at all?") aren't going to exceed 10GB in the next decade no matter what. Will you be able to play at 4k medium-high settings at reasonable frame rates with 10GB of VRAM in a decade? No - that would be unprecedented. But will you be able to play 1080p or 1440p at those types of settings with 10GB of VRAM? Almost undoubtedly (though shader performance is likely to force you to lower settings - but not the VRAM). And if you're expecting future-proofing to keep your GPU relevant at that kind of performance level for that kind of time, your expectations of what is possible is fundamentally flawed. The other parts of the GPU will be holding you back far more than the VRAM in that scenario. If the size of your framebuffer is making your five-year-old high-end GPU run at a spiky 10fps instead of, say, 34, does that matter at all? Unless the game in question is extremely slow-paced, you'd need to lower settings anyhow to get a reasonable framerate, which will then in all likelihood bring you below the 10GB limitation.

I'm all for future-proofing, and I absolutely hate the shortsighted hypermaterialism of the PC building scene - there's a reason I've kept my current GPU for five years - but adding $2-300 (high end GDDR chips cost somewhere in the realm of $20/GB) to the cost of a part to add something that in all likelihood won't add to its longevity at all is not smart future-proofing. If you're paying that to avoid one bottleneck just to be held back by another, you've overpaid for an unbalanced product.
I'm just going to leave this 1440p, 60 FPS Skyrim SE video here and leave you all to your thoughts on this dilemma. Enjoy that 10GB "4K" card at 1440p, in the day before yesterday's content, at a lower res than what the card is marketed for.

It's not exactly the most niche game either... I guess 'Enthusiast' only goes as far as the limits NVIDIA PR sets for you with each release? Curious ;) Alongside this, I want to stress again that TWO previous generations had weaker cards with 11GB. They'd most likely run this game better than a new release.

Since it's a well-known game, we also know how the engine responds when VRAM runs short: you get stutter, and it's not pretty. Been there, done that, way back on a 770 with a measly 2GB. Same shit, same game, different day.

So sure, if all you care about is playing every console port and you never mod anything, 10GB will do you fine. But a 3070 will then probably do the same thing for you, won't it? Nvidia's cost cutting measure works both ways, really. We can do it too. Between a 3080 with 20GB (at whatever price point it gets) and a 3070 with 8GB, the 3080 10GB is in a very weird place. Add consoles and a big Navi chip with 16GB and it gets utterly strange, if not 'the odd one out'. A bit like the place where SLI is now. Writing's on the wall.

moproblems99How much did the 2080 have that this is a direct replacement for though? Really gained vram this gen.
The 2080 is also 40-50% slower. 20% more VRAM, 40-50% more core performance? Sounds like good balance, after the 2080S was already rivalling the 1080 Ti's core power while being 3GB short.

The balance is shifting, and it's well known there is tremendous pressure on production lines for high-end RAM. We're paying the price with reduced life cycles on expensive products. That is the real thing happening here.
Posted on Reply
#100
nguyen
Vayra86I'm just going to leave this, 1440p, 60 FPS Skyrim SE video here and leave you all to your thoughts on this dilemma. Enjoy that 10GB "4K" card at 1440p. In the 'day before yesterday's' content at a lower res than what the card is marketed for.

Its not exactly the most niche game either... I guess 'Enthusiast' only goes as far as the limits Nvidia PR set for you with each release? Curious ;) Alongside this, I want to stress again TWO previous generations had weaker cards with 11GB. They'd run this game better than a new release, most likely.

Since its a well known game, we also know how the engine responds when VRAM is short. You get stutter, and its not pretty. Been there done that way back on a 770 with measly 2GB. Same shit, same game, different day.

So sure, if all you care about is playing every console port and you never mod anything, 10GB will do you fine. But a 3070 will then probably do the same thing for you, won't it? Nvidia's cost cutting measure works both ways, really. We can do it too. Between a 3080 with 20GB (at whatever price point it gets) and a 3070 with 8GB, the 3080 10GB is in a very weird place. Add consoles and a big Navi chip with 16GB and it gets utterly strange, if not 'the odd one out'. A bit like the place where SLI is now. Writing's on the wall.
LOL, FS 2020 would like to join the chat



Radeon VII performance numbers?



Funny that Radeon VII is faster than 1080 Ti and Titan XP while slower than 2070 Super, 2080 and 2080 Super there.
Posted on Reply