
NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
I'm just going to leave this 1440p, 60 FPS Skyrim SE video here and leave you all to your thoughts on this dilemma. Enjoy that 10GB "4K" card at 1440p, in the day-before-yesterday's content, at a lower res than what the card is marketed for.

It's not exactly the most niche game either... I guess 'Enthusiast' only goes as far as the limits Nvidia PR sets for you with each release? Curious ;) Alongside this, I want to stress again that TWO previous generations had weaker cards with 11GB. They'd run this game better than a new release, most likely.

Since it's a well-known game, we also know how the engine responds when VRAM runs short: you get stutter, and it's not pretty. Been there, done that, way back on a 770 with a measly 2GB. Same shit, same game, different day.

So sure, if all you care about is playing every console port and you never mod anything, 10GB will do you fine. But a 3070 will then probably do the same thing for you, won't it? Nvidia's cost-cutting measure works both ways, really. We can do it too. Between a 3080 with 20GB (at whatever price point it gets) and a 3070 with 8GB, the 3080 10GB is in a very weird place. Add consoles and a big Navi chip with 16GB and it gets utterly strange, if not 'the odd one out'. A bit like the place where SLI is now. Writing's on the wall.

LOL, FS 2020 would like to join the chat



Radeon VII performance numbers?



Funny that the Radeon VII is faster than the 1080 Ti and Titan XP while slower than the 2070 Super, 2080 and 2080 Super there.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
LOL, FS 2020 would like to join the chat



Radeon VII performance numbers?



Funny that the Radeon VII is faster than the 1080 Ti and Titan XP while slower than the 2070 Super, 2080 and 2080 Super there.

Cities Skylines is fun too ;)

12, 16, 20GB are not going to make sense.
20GB is $300 more expensive - for what? So that at some unclear point in the future, when the card would be obsolete anyway, you might be granted one more game that runs a little better.
The 16GB card's price is very close to the 10GB card's, so you have to trade ~30% less performance for 6GB more. You are deliberately making the worse choice because of some future uncertainty.
12GB: only AMD will make a 192-bit card at this point. The 3060 will be about as weak as a 2070, so why put 12GB on it?

The real point here is that Nvidia has not really found the best balance with Ampere on this node. The fact that they decided upon these weird VRAM configs is telling. Why the 10/20 GB split at all; why force yourself to double it right away? They've been much more creative with this before on more refined nodes. I'm definitely not spending more than 500 on this gen, and that is a stretch already.

The 3080 10GB is going to be a mighty fine, super future-proof 1440p product. 4K? Nope - unless you like it faked, with blurry shit all over the place, in due time. It's unfortunate, though, that 1440p and 4K don't play nicely together with res scaling on your monitor.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm just going to leave this 1440p, 60 FPS Skyrim SE video here and leave you all to your thoughts on this dilemma. Enjoy that 10GB "4K" card at 1440p, in the day-before-yesterday's content, at a lower res than what the card is marketed for.

It's not exactly the most niche game either... I guess 'Enthusiast' only goes as far as the limits Nvidia PR sets for you with each release? Curious ;) Alongside this, I want to stress again that TWO previous generations had weaker cards with 11GB. They'd run this game better than a new release, most likely.

Since it's a well-known game, we also know how the engine responds when VRAM runs short: you get stutter, and it's not pretty. Been there, done that, way back on a 770 with a measly 2GB. Same shit, same game, different day.

So sure, if all you care about is playing every console port and you never mod anything, 10GB will do you fine. But a 3070 will then probably do the same thing for you, won't it? Nvidia's cost-cutting measure works both ways, really. We can do it too. Between a 3080 with 20GB (at whatever price point it gets) and a 3070 with 8GB, the 3080 10GB is in a very weird place. Add consoles and a big Navi chip with 16GB and it gets utterly strange, if not 'the odd one out'. A bit like the place where SLI is now. Writing's on the wall.
View attachment 169378



The 2080 is also 40-50% slower. 20% VRAM, 40-50% core? Sounds like good balance, after the 2080S was already rivalling the 1080 Ti's core power while being 3GB short.

The balance is shifting, and it's well known there is tremendous pressure on production lines for high-end RAM. We're paying the price with reduced lifecycles on expensive products. That is the real thing happening here.
... and here we go again, 'round and 'round we go. This seems to need repeating ad infinitum: VRAM allocation does not equate to VRAM in use. Yet VRAM allocation is what monitoring software can see. So unless you are able to test the same game on an otherwise identical GPU with less VRAM and show it actually performing worse, this proves nothing beyond that the game is able to allocate a bunch of data that it might never use. Most game engines will aggressively allocate data to VRAM on the off chance that the data in question might be used - but the vast majority of it isn't.
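For reference, this is roughly what those overlays are reading - a driver-side allocation counter, not "bytes actually being touched". A minimal sketch, assuming the pynvml package (NVIDIA's NVML bindings) is installed:

```python
# Minimal sketch of what VRAM "usage" readouts actually query. Assumes the
# pynvml package (NVIDIA's NVML bindings) is installed. The number returned
# is allocated framebuffer memory as seen by the driver, not memory the game
# is actively reading from.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)          # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"total:                        {mem.total / 2**30:.1f} GiB")
print(f"allocated (shown as 'used'):  {mem.used / 2**30:.1f} GiB")
print(f"free:                         {mem.free / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```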

Which is why DirectStorage exists: its sole purpose is to make games quit this stupid behavior by allowing faster streaming, incentivizing streaming in what is actually needed just before it's needed rather than accounting for every possible eventuality well before it might possibly happen, as they do now. Given how important consoles are for game development, DS adoption will be widespread in the near future - well within the useful lifetime of the 3080. And VRAM allocations will shrink with its usage. Is it possible developers will use this now-free VRAM for higher quality textures and the like? Sure, that might happen, but it will take a lot of time. And it will of course increase the stress on the GPU's ability to process textures rather than its ability to keep them available, moving the bottleneck elsewhere. Also, seeing how Skyrim has already been remastered, it really wouldn't be that big of a surprise if developers patched in DS support to help modders.

Besides, using the most notoriously heavily modded game in the world to exemplify realistic VRAM usage scenarios? Yeah, that's representative. Sure. And of course, mod developers are focused on and skilled at keeping VRAM usage reasonable, right? Or maybe, just maybe, some of them are only focused at making their mod work, whatever the cost? I mean, pretty much any game with a notable amount of graphical mods can be modded to be unplayable on any GPU in existence. If that's your benchmark, no GPU will ever be sufficient for anything and you might as well give up. If your example of a representative use case is an extreme edge case, then it stands to reason that your components also need configurations that are - wait for it - not representative of the overall gaming scene. If your aim is to play Skyrim with enough mods that it eats VRAM for breakfast, then yes, you need more VRAM than pretty much anyone else. Go figure.

And again: you are - outright! - mixing up the two questions from my post, positing a negative answer to a) as also necessitating a negative answer to b). Please stop doing that. They are not the same question, and they aren't even directly related. All GPUs require you to lower settings to keep using them over time. Exactly which settings and how much depends on the exact configuration of the GPU in question. This does not mean that the GPU becomes unusable due to a singular bottleneck, as has been the case with a few GPUs over time. If 10GB of VRAM forces you to ever so slightly lower the settings on your heavily modded Skyrim build - well, then do so. And keep playing. It doesn't mean that you can't play, nor does it mean that you can't mod. Besides, if (going by your screenshot) it runs at 60fps on ... a 2080Ti? 1080Ti?, then congrats, you can now play at 90-120fps on the same settings? Isn't that pretty decent?
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Don't forget that going up in resolution won't change the VRAM usage much, if at all, unless the game has higher-res textures - so for example a 4K texture mod will use a fairly similar amount at 1080p vs 4K, cause.... tada, the textures are at 4K.
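Rough back-of-the-envelope numbers to illustrate (hypothetical, uncompressed RGBA8 assumed purely for simplicity - real games use block-compressed formats, so absolute sizes are lower):

```python
# Hypothetical back-of-the-envelope math: a texture's VRAM footprint is set by
# the texture's own resolution, while only the render targets scale with the
# output resolution. Uncompressed RGBA8 assumed purely for simplicity.

def texture_mib(w, h, bytes_per_px=4, mip_overhead=4/3):
    """One texture including its full mip chain."""
    return w * h * bytes_per_px * mip_overhead / 2**20

def render_targets_mib(w, h, bytes_per_px=4, buffers=3):
    """Colour + depth + one intermediate target - the part tied to render res."""
    return w * h * bytes_per_px * buffers / 2**20

print(f"4096x4096 texture:        {texture_mib(4096, 4096):7.1f} MiB (same at any render res)")
print(f"render targets at 1080p:  {render_targets_mib(1920, 1080):7.1f} MiB")
print(f"render targets at 4K:     {render_targets_mib(3840, 2160):7.1f} MiB")
```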

If 10GB or 11GB isn't enough for a certain title, just turn the friggin textures down from ultra. I can't wait to see people cry over 'can it run Crysis?' mode because waaaaah, max settings waaaah.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The real point here is that Nvidia has not really found the best balance with Ampere on this node. The fact that they decided upon these weird VRAM configs is telling. Why the 10/20 GB split at all; why force yourself to double it right away? They've been much more creative with this before on more refined nodes. I'm definitely not spending more than 500 on this gen, and that is a stretch already.

The 3080 10GB is going to be a mighty fine, super future-proof 1440p product. 4K? Nope - unless you like it faked, with blurry shit all over the place, in due time. It's unfortunate, though, that 1440p and 4K don't play nicely together with res scaling on your monitor.
What? The production node of the chip has zero relation to the amount of VRAM (as long as you're able to fit the necessary number of channels on the physical die, but that's more down to die size than the node, as physical interconnects scale quite poorly with node changes). They didn't "force [themselves] to double" anything - GDDR6X exists in 1GB and 2GB chips (well, the 2GB ones are still to come). As such, you can only fit 1GB or 2GB per channel, meaning 1x or 2x the channel count in GB on any given card. That's not an Nvidia-created limitation, it's how the math works. I guess they could make a 12-channel 3080 Ti, but that would be a very weirdly placed SKU given the pricing and featureset of the 3090 (which is, of course, an extremely silly GPU all on its own).
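To spell out that math (a hypothetical helper, not how Nvidia actually plans SKUs): one GDDR6X package per 32-bit channel, with packages available in 1GB or 2GB densities, so the bus width fixes the possible capacities.

```python
# Hypothetical helper spelling out the config math described above: one GDDR6X
# package per 32-bit channel, with packages available in 1GB or 2GB densities.

def possible_vram_configs(bus_width_bits, densities_gb=(1, 2)):
    channels = bus_width_bits // 32            # one memory package per channel
    return [channels * d for d in densities_gb]

print("320-bit (3080):", possible_vram_configs(320), "GB")   # [10, 20]
print("256-bit (3070):", possible_vram_configs(256), "GB")   # [8, 16]
print("192-bit card:  ", possible_vram_configs(192), "GB")   # [6, 12]
```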

As for "creative" ... sure, they could make arbitrarily higher VRAM amount versions by putting double density chips on some pad but not others. Would that help anything? Likely not whatsoever.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
... and here we go again, 'round and 'round we go. This seems to need repeating ad infinitum: VRAM allocation does not equate to VRAM in use. Yet VRAM allocation is what monitoring software can see. So unless you are able to test the same game on an otherwise identical GPU with less VRAM and show it actually performing worse, this proves nothing beyond that the game is able to allocate a bunch of data that it might never use. Most game engines will aggressively allocate data to VRAM on the off chance that the data in question might be used - but the vast majority of it isn't.

Which is why DirectStorage exists: its sole purpose is to make games quit this stupid behavior by allowing faster streaming, incentivizing streaming in what is actually needed just before it's needed rather than accounting for every possible eventuality well before it might possibly happen, as they do now. Given how important consoles are for game development, DS adoption will be widespread in the near future - well within the useful lifetime of the 3080. And VRAM allocations will shrink with its usage. Is it possible developers will use this now-free VRAM for higher quality textures and the like? Sure, that might happen, but it will take a lot of time. And it will of course increase the stress on the GPU's ability to process textures rather than its ability to keep them available, moving the bottleneck elsewhere. Also, seeing how Skyrim has already been remastered, it really wouldn't be that big of a surprise if developers patched in DS support to help modders.

Besides, using the most notoriously heavily modded game in the world to exemplify realistic VRAM usage scenarios? Yeah, that's representative. Sure. And of course, mod developers are focused on and skilled at keeping VRAM usage reasonable, right? Or maybe, just maybe, some of them are only focused at making their mod work, whatever the cost? I mean, pretty much any game with a notable amount of graphical mods can be modded to be unplayable on any GPU in existence. If that's your benchmark, no GPU will ever be sufficient for anything and you might as well give up. If your example of a representative use case is an extreme edge case, then it stands to reason that your components also need configurations that are - wait for it - not representative of the overall gaming scene. If your aim is to play Skyrim with enough mods that it eats VRAM for breakfast, then yes, you need more VRAM than pretty much anyone else. Go figure.

And again: you are - outright! - mixing up the two questions from my post, positing a negative answer to a) as also necessitating a negative answer to b). Please stop doing that. They are not the same question, and they aren't even directly related. All GPUs require you to lower settings to keep using them over time. Exactly which settings and how much depends on the exact configuration of the GPU in question. This does not mean that the GPU becomes unusable due to a singular bottleneck, as has been the case with a few GPUs over time. If 10GB of VRAM forces you to ever so slightly lower the settings on your heavily modded Skyrim build - well, then do so. And keep playing. It doesn't mean that you can't play, nor does it mean that you can't mod. Besides, if (going by your screenshot) it runs at 60fps on ... a 2080Ti? 1080Ti?, then congrats, you can now play at 90-120fps on the same settings? Isn't that pretty decent?

Except Skyrim stutters when stuff is loaded into VRAM - I even specifically pointed that out earlier. So when you have less than you need maxed out, the experience immediately suffers. This goes for quite a few games using mods. It's not as streamlined as you might think. And yes, I posed it as a dilemma. My crystal ball says something different from yours, and I set the bar a little higher when it comes to 'what needs to be done' in due time to keep games playable on GPU XYZ. If current-day content can already hit its limits... not a good sign.

Any time games need to resort to swapping and they cannot do that within the space of a single frame update, you will suffer stutter or frametime variance. I've gamed too much to ignore this, and I will never get subpar-VRAM GPUs again. The 1080 was perfectly balanced that way: it always had an odd GB to spare no matter what you threw at it. This 3080 most certainly is not balanced the same way. That is all, and everyone can do with that experience-based info whatever they want ;) I'll happily lose 5 FPS average for a stutter-free experience.

What? The production node of the chip has zero relation to the amount of VRAM (as long as you're able to fit the necessary number of channels on the physical die, but that's more down to die size than the node, as physical interconnects scale quite poorly with node changes). They didn't "force [themselves] to double" anything - GDDR6X exists in 1GB and 2GB chips (well, the 2GB ones are still to come). As such, you can only fit 1GB or 2GB per channel, meaning 1x or 2x the channel count in GB on any given card. That's not an Nvidia-created limitation, it's how the math works. I guess they could make a 12-channel 3080 Ti, but that would be a very weirdly placed SKU given the pricing and featureset of the 3090 (which is, of course, an extremely silly GPU all on its own).

As for "creative" ... sure, they could make arbitrarily higher VRAM amounts by putting double-density chips on some pads but not others. Would that help anything? Likely not whatsoever.

Node has everything to do with VRAM setups because it also determines power/performance metrics and those relate directly to the amount of VRAM possible and what power it draws. In addition, the node directly weighs in on the yield/cost/risk/margin balance, as do VRAM chips. Everything is related.

Resorting to alternative technologies like DirectStorage and whatever Nvidia is cooking up itself is all well and good, but that reeks a lot like DirectX 12's mGPU to me. We will see it in big-budget games when devs have the financials to support it. We won't see it in the not-as-big games and... well... those actually happen to be the better games these days - hiding behind the mainstream cesspool of instant-gratification console/MTX crap. Not as beautifully optimized, but ready to push a high-fidelity experience in your face. The likes of Kingdom Come: Deliverance, etc.

In the same vein, I don't want to be forced to rely on DLSS for playable FPS. It's all proprietary and on a per-game basis, and when it works, cool, but when it doesn't, I still want to have a fully capable GPU that will destroy everything.

And here we go again ...

Allocated VRAM will be used at some point. All of you seem to think that applications just allocate buffers randomly, for no reason at all, to fill up the VRAM, and then when it gets close to the maximum amount of memory available it all just works out magically such that the memory in use is always less than the amount being allocated.

A buffer isn't allocated now and used an hour later; if an application is allocating a certain quantity of memory, then it's going to be used pretty soon. And if the amount that gets allocated comes close to the maximum amount of VRAM available, then it's totally realistic to expect problems.

Exactly. I always get the impression I'm discussing practical situations with theorycrafters when it comes to this. There are countless examples of situations that go way beyond the canned review benchmarks and scenes.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,504 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
... and here we go again, 'round and 'round we go. This seems to need repeating ad infinitum: VRAM allocation does not equate to VRAM in use.

And here we go again ...

Allocated VRAM will be used at some point. All of you seem to think that applications just allocate buffers randomly, for no reason at all, to fill up the VRAM, and then when it gets close to the maximum amount of memory available it all just works out magically such that the memory in use is always less than the amount being allocated.

A buffer isn't allocated now and used an hour later; if an application is allocating a certain quantity of memory, then it's going to be used pretty soon. And if the amount that gets allocated comes close to the maximum amount of VRAM available, then it's totally realistic to expect problems.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Oh boy, how entitled PC gamers are, demanding that a GPU must be capable of 5 years of usage at Ultra details :D.
News flash: Ultra details are not meant for current-gen hardware at all; they are there so that people think their current hardware sucks.

There are always ways to kill the performance of any GPU, regardless of its capabilities. Games from 20 years ago? Hell, let's path-trace them and kill every single GPU out there LOL.

Everything has compromises; you just gotta be aware of them and voila, problem gone. 700 USD for the 3080 10GB is already freaking sweet; 900 USD for a 3080 20GB? Heck no, even if 4K 120Hz becomes the norm in 2 years (which it won't). That extra money will make better sense when spent on other components of the PC.
 
Joined
Jul 5, 2013
Messages
28,260 (6.75/day)
Your comparison to the 770 is as such a false equivalency
No, that comparison is perfectly valid in the context of the effect of VRAM size differences on a single GPU spec. A similar comparison could be made for the 8800 GTS 320MB vs 640MB, 9800 GTX 512MB vs 1GB, GTX 560 1GB vs 2GB or the RX 580 4GB vs 8GB. The extra RAM is very helpful for the then-current gen of software and for future software. As the card's life-cycle matures, the extra RAM becomes critical to the card's continued viability and usefulness. More VRAM generally means a GPU stays useful for a longer period of time. This is a well-known fact.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Oh boy, how entitled PC gamers are, demanding that a GPU must be capable of 5 years of usage at Ultra details :D.
News flash: Ultra details are not meant for current-gen hardware at all; they are there so that people think their current hardware sucks.

There are always ways to kill the performance of any GPU, regardless of its capabilities. Games from 20 years ago? Hell, let's path-trace them and kill every single GPU out there LOL.

Everything has compromises; you just gotta be aware of them and voila, problem gone. 700 USD for the 3080 10GB is already freaking sweet; 900 USD for a 3080 20GB? Heck no, even if 4K 120Hz becomes the norm in 2 years (which it won't). That extra money will make better sense when spent on other components of the PC.

As long as we're paying arms, legs and kidneys for our GPUs these last few generations, I think we can use a bit of entitlement. Otherwise, I fully agree with your post. It's a compromise to be made for getting your hands on that much core power at 700. I don't like that compromise; some might still like it. That is the personal choice we all have.

I'm glad we arrived at the point where we agree 10GB 3080s are a compromise to begin with.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
What are the price/performance expectations for the RTX 3060?

Price: ~$350
Performance: ~RTX 2070
That is where the 5700 XT is.

 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
As long as we're paying arms, legs and kidneys for our GPUs these last few generations, I think we can use a bit of entitlement. Otherwise, I fully agree with your post. It's a compromise to be made for getting your hands on that much core power at 700. I don't like that compromise; some might still like it. That is the personal choice we all have.

I'm glad we arrived at the point where we agree 10GB 3080s are a compromise to begin with.

And the 3080 20GB's compromise will be its price. No one here doubts that more VRAM is better, just that the extra 10GB will come with so many compromises that they totally negate its benefits.

No, that comparison is perfectly valid in the context of the effect of VRAM size differences on a single GPU spec. A similar comparison could be made for the 8800 GTS 320MB vs 640MB, 9800 GTX 512MB vs 1GB, GTX 560 1GB vs 2GB or the RX 580 4GB vs 8GB. The extra RAM is very helpful for the then-current gen of software and for future software. As the card's life-cycle matures, the extra RAM becomes critical to the card's continued viability and usefulness. More VRAM generally means a GPU stays useful for a longer period of time. This is a well-known fact.

My friend still plays on a GTX 680 2GB that I sold him; it's still perfectly capable of DOTA 2 and CSGO at >120fps.
Having an extra 2GB would not let a 770 4GB deliver 60fps at 1080p Ultra in games where the 2GB card can't, though, so it's a moot point.
Having extra VRAM only makes sense when you already have the best possible configuration; for example, a 780 Ti with 6GB of VRAM instead of 3GB would make sense (yeah, I know, the original Titan card).
If you can spend a little more, make it count at the present time, not at some point in the future.

BTW I upgraded to the R9 290 from my old GTX 680; the R9 290 totally destroyed the 770 4GB even then.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
And the 3080 20GB's compromise will be its price. No one here doubts that more VRAM is better, just that the extra 10GB will come with so many compromises that they totally negate its benefits.



My friend still plays on a GTX 680 2GB that I sold him; it's still perfectly capable of DOTA 2 and CSGO at >120fps.
Having an extra 2GB would not let a 770 4GB deliver 60fps at 1080p Ultra in games where the 2GB card can't, though, so it's a moot point.
Having extra VRAM only makes sense when you already have the best possible configuration; for example, a 780 Ti with 6GB of VRAM instead of 3GB would make sense (yeah, I know, the original Titan card).
If you can spend a little more, make it count at the present time, not at some point in the future.

We don't know what a 3080 20G is priced at yet. But here you are, in the same post, contradicting your own thoughts between two responses... I think that underlines the weight of this compromise/dilemma quite well. The only reason you feel 'stuck' with that 3080 10G is because you're already looking at a 2080 Ti here. But the actual, truly sound gen-to-gen upgrade would have been some order of a 2070/80(S) or a Pascal card >>> 3070 8G. Much better balance, good perf jump, and yes, you're not on the top end of performance that way... but neither are you with a 3080.

At the same time, a 3090 with 24GB is actually better balanced than the 3080 with 10. If anything it'd make total sense to follow your own advice and double that 3080 to 20GB, as it realistically is the top config. I might be wrong here, but I sense a healthy dose of cognitive dissonance in your specific case? You even provided your own present-day example of software (FS2020) that already surpasses 10GB... you've made your own argument.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
We don't know what a 3080 20G is priced at yet. But here you are, in the same post, contradicting your own thoughts between two responses... I think that underlines the weight of this compromise/dilemma quite well. The only reason you feel 'stuck' with that 3080 10G is because you're already looking at a 2080 Ti here. But the actual, truly sound gen-to-gen upgrade would have been some order of a 2070/80(S) or a Pascal card >>> 3070 8G. Much better balance, good perf jump, and yes, you're not on the top end of performance that way... but neither are you with a 3080.

At the same time, a 3090 with 24GB is actually better balanced than the 3080 with 10. If anything it'd make total sense to follow your own advice and double that 3080 to 20GB, as it realistically is the top config. I might be wrong here, but I sense a healthy dose of cognitive dissonance in your specific case? You even provided your own present-day example of software (FS2020) that already surpasses 10GB... you've made your own argument.

LOL, I included 2 pictures of FS 2020 and you only looked at 1; idk who has cognitive dissonance here. Did you see the Radeon VII lose against the 2070 Super, 2080 and 2080 Super while it beats the 1080 Ti and Titan XP? VRAM allocation makes zero difference there.

All I am saying is that doubling down on VRAM with a second- or third-tier GPU doesn't make any sense when you can spend a little more and get the next tier of GPU. A 3070 16GB at 600 USD? Why not just buy the 3080 10GB? Buying a 3080 20GB? Yeah, just use that extra money for better RAM, where it makes a difference 99% of the time.

I'm fine with the 3090 24GB though, which is the highest-performance GPU out there. Its only weakness is the price :D.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
you can't distinguish unless zooming 4x into a recording LOL.
I recall I've heard this recently. It was in the context of hyping some upscaling tech.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
LOL, I included 2 pictures of FS 2020 and you only looked at 1; idk who has cognitive dissonance here. Did you see the Radeon VII lose against the 2070 Super, 2080 and 2080 Super while it beats the 1080 Ti and Titan XP? VRAM allocation makes zero difference there.

All I am saying is that doubling down on VRAM with a second- or third-tier GPU doesn't make any sense when you can spend a little more and get the next tier of GPU. A 3070 16GB at 600 USD? Why not just buy the 3080 10GB? Buying a 3080 20GB? Yeah, just use that extra money for better RAM, where it makes a difference 99% of the time.

I'm fine with the 3090 24GB though, which is the highest-performance GPU out there. Its only weakness is the price :D.

What struck me in the Radeon VII shot is in fact that if you compare AMD to AMD cards, the higher-VRAM card performs a whole lot better than the 5700 XT, which should have about the same core power. In addition, the VII also surpasses the 1080 Ti where it normally couldn't. I did look at both pics, just from a different angle. I also NEVER said that the higher-VRAM card would provide better FPS; in fact I said the opposite: I would gladly sacrifice a few FPS if that means stutter-free.

Remember that Turing is also endowed with a much bigger cache.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Remember that Turing is also endowed with a much bigger cache.

And Ampere has 2x the cache size of Turing, and also the next generation of lossless memory compression.
That makes 8GB of VRAM on Ampere behave very differently from 8GB on Turing, just saying.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
And Ampere has 2x the cache size of Turing

Correct, so that puts an awful lot of weight on cache now. Add RT... The bottleneck is moving. Maybe Nvidia can magic its way out of it. I hope so. But it's too early to tell.
Note also that stutter is back with Turing - we've had several updates and games showing us that. Not unusual for a new gen, but still. They need to put work into preventing it. Latency modes are more writing on the wall. We never needed them... ;)

I'm counting all those little tweaks they do and the list is getting pretty damn long. That's a lot of stuff that needs to align for good performance.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Correct, so that puts an awful lot of weight on cache now. Add RT... The bottleneck is moving. Maybe Nvidia can magic its way out of it. I hope so. But it's too early to tell.
Note also that stutter is back with Turing - we've had several updates and games showing us that. Not unusual for a new gen, but still. They need to put work into preventing it. Latency modes are more writing on the wall. We never needed them... ;)

I'm counting all those little tweaks they do and the list is getting pretty damn long. That's a lot of stuff that needs to align for good performance.


No one can tell what the future requirements will be; just spend your money where it makes a noticeable difference in the present.
If you care about perf/USD, then extra VRAM should be a no-go zone.
If you don't care about perf/USD, then knock yourself out with the 3090 :D - I assume the people buying them already have the best CPU/RAM combo, otherwise it's a waste...

Edit: I'm playing on a 2070 Super Max-Q laptop (stuck in mandatory quarantine) with the newest driver, HAGS ON, and games are buttery smooth. Which game are you talking about? Perhaps I can check.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
And here we go again ...

Allocated VRAM will be used at some point. All of you seem to think that applications just allocate buffers randomly, for no reason at all, to fill up the VRAM, and then when it gets close to the maximum amount of memory available it all just works out magically such that the memory in use is always less than the amount being allocated.

A buffer isn't allocated now and used an hour later; if an application is allocating a certain quantity of memory, then it's going to be used pretty soon. And if the amount that gets allocated comes close to the maximum amount of VRAM available, then it's totally realistic to expect problems.
What? No. That is not true whatsoever. Games allocate data for many possible future game states, of which only a handful can come true. If a player in a game is in a specific position, looking in a specific direction, then the game streams in data to account for the possible future movements of that player within the relevant pre-loading time window, which is determined by expected transfer speed. They could, for example, turn around rapidly while not moving otherwise, or run forwards, backwards, strafe, or turn around and run in any direction for whatever distance is allowed by movement speed in the game, but they wouldn't suddenly warp to the opposite end of the map or inside of a building far away. If the player is standing next to a building or large object that takes several seconds to move around, what is behind said object doesn't need to be loaded until the possibility of it being seen arises. Depending on level design etc., this can produce wildly varying levels of pre-caching (an open world requires a lot more than a game with small spaces), but the rule of thumb is to tune the engine to pre-load anything that might reasonably happen. I.e. if you're next to a wall, texture data for the room on the other side isn't pre-loaded, but they will be if you are right next to a door.

As time passes and events happen (player movement, etc.), unused data is ejected from memory to allow for pre-caching of the new possibilities afforded by the new situation. If not, then pretty much the entire game would need to live in VRAM. Which of course repeats for as long as the game is running. Some data might be kept and not cleared out as it might still be relevant for a possible future use, but most of it is ejected. The more possible scenarios that are pre-loaded, the less of this data is actually used for anything. Which means that the slower the expected read rate of the storage medium used, the lower the degree of utilization becomes as slower read rates necessitate earlier pre-caching, expanding the range of possible events that need to be accounted for.

Some of this data will of course be needed later, but at that point it will long since have been ejected from memory and pre-loaded once again. Going by Microsoft's data (which ought to be very accurate; they do after all make DirectX, so they should have the means to accurately monitor this), DirectStorage improves the degree of utilization of data in VRAM by 2.5x. Assuming they achieved a 100% utilization rate (which they obviously didn't, as that would require their caching/streaming algorithms to be effectively prescient), that means at the very best their data showed a 40% rate of utilization before DirectStorage - i.e. current DirectX games are at most making use of 40% of the data they are pre-loading before clearing it out. If MS achieved a more realistic rate of utilization - say 80% - that means the games they started from utilized just 32% of pre-loaded data before clearing it out. There will always be some overhead, so going by this data alone it's entirely safe to say that current games cache a lot more data than they use.

And no, this obviously isn't happening on timescales anywhere near an hour - we're talking millisecond time spans here. Pre-caching is done for perhaps the next few seconds, with data ejection rolling continuously as the game state changes - likely frame-by-frame. That's why moving to SSD storage as the default for games is an important step in improving this - the slow seek times and data rates of HDDs necessitate multi-second pre-caching, while adopting even a SATA SSD as the baseline would dramatically reduce the need for pre-caching.

And that is the point here: DS and SSD storage as a baseline will allow for less pre-caching, shortening the future time span for which the game needs possibly necessary data in VRAM, thus significantly reducing VRAM usage. You obviously can't tell which of the pre-loaded data is unnecessary until some other data has been used (if you could, there would be no need to pre-load it!). The needed data might thus just as well live in that ~1GB exceeding a theoretical 8GB VRAM pool for that Skyrim screenshot as in the 8GB that are actually there. But this is exactly why faster transfer rates help alleviate this, as you would then need less time to stream in the necessary data. If a player is in an in-game room moving towards a door four seconds away, with a three-second pre-caching window, data for what is beyond the door will need to start streaming in in one second. If faster storage and DirectStorage (though the latter isn't strictly necessary for this) allows the developers to expect the same amount of data to be streamed in in, say, 1/6th of the time - which is reasonable even for a SATA SSD given HDD seek times and transfer speeds - that would mean data streaming doesn't start until 2.5s later. For that time span, VRAM usage is thus reduced by as much as whatever amount of data was needed for the scene beyond the door. And ejection can also be done more aggressively for the same reason, as once the player has gone through the door the time needed to re-load that area is similarly reduced. Thus, the faster data can be loaded, the less VRAM is needed at the same level of graphical quality.
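Putting rough numbers on that door example and on Microsoft's quoted figure (purely illustrative values, not measurements from any real engine):

```python
# Purely illustrative numbers for the door example and the quoted 2.5x figure -
# not measurements from any engine, just the arithmetic of pre-caching windows.

time_to_door_s = 4.0    # player reaches the door in 4 seconds
slow_window_s  = 3.0    # with slow storage, data must be resident ~3 s before use
speedup        = 6.0    # assume SSD + DirectStorage streams the same data ~6x faster

fast_window_s = slow_window_s / speedup   # 0.5 s
print(f"slow storage: start streaming at t = {time_to_door_s - slow_window_s:.1f} s")
print(f"fast storage: start streaming at t = {time_to_door_s - fast_window_s:.1f} s")
print(f"extra time that data stays out of VRAM: {slow_window_s - fast_window_s:.1f} s")

# Same logic applied to the quoted 2.5x utilization improvement:
for util_after_ds in (1.00, 0.80):        # best case vs. a more realistic case
    print(f"utilization after DS {util_after_ds:.0%} -> at most "
          f"{util_after_ds / 2.5:.0%} before DS")
```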
No, that comparison is perfectly valid in the context of the effect of VRAM size differences on a single GPU spec. A similar comparison could be made for the GTS8800 320MB VS 640MB, GTX9800 512MB VS 1GB, GTX560 1GB VS 2GB or the RX580 4GB vs 8GB. The extra ram is very helpful for the then current gen of software and for future software. As the card life-cycle matures the extra RAM becomes critical to the card's continued viability and usefulness. More VRAM generally means a GPU stays useful for longer period of time. This is a well known fact.
That is only true if the base amount of VRAM becomes an insurmountable obstacle which cannot be circumvented by lowering settings. Which is why this is applicable to something like a 2GB GPU, but won't be in the next decade for a 10GB one. The RX 580 is an excellent example, as the scenarios in which the 4GB cards are limited are nearly all scenarios in which the 8GB one also fails to deliver sufficient performance, necessitating lower settings no matter what. This is of course exacerbated by reviewers always testing at Ultra settings, which typically increase VRAM usage noticeably without necessarily producing a matching increase in visual quality. If the 4GB one produces 20 stuttery/spiky frames per second due to a VRAM limitation but the 8GB one produces 40, the best thing to do (in any game where frame rate is really important) would be to lower settings on both - in which case they are likely to perform near identically, as VRAM use drops as you lower IQ settings.
Except Skyrim stutters when stuff is loaded into VRAM - I even specifically pointed that out earlier. So when you have less than you need maxed out, the experience immediately suffers. This goes for quite a few games using mods. It's not as streamlined as you might think. And yes, I posed it as a dilemma. My crystal ball says something different from yours, and I set the bar a little higher when it comes to 'what needs to be done' in due time to keep games playable on GPU XYZ. If current-day content can already hit its limits... not a good sign.

Any time games need to resort to swapping and they cannot do that within the space of a single frame update, you will suffer stutter or frametime variance. I've gamed too much to ignore this, and I will never get subpar-VRAM GPUs again. The 1080 was perfectly balanced that way: it always had an odd GB to spare no matter what you threw at it. This 3080 most certainly is not balanced the same way. That is all, and everyone can do with that experience-based info whatever they want ;) I'll happily lose 5 FPS average for a stutter-free experience.
Do the stutters kick in immediately once you exceed the size of the framebuffer? Or are you comparing something like an 8GB GPU to an 11GB GPU at settings allocating 9-10GB for those results? If the latter, then that might just as well be indicative of a poor pre-caching system (which is definitely not unlikely in an old and heavily modded game).
Node has everything to do with VRAM setups because it also determines power/performance metrics and those relate directly to the amount of VRAM possible and what power it draws. In addition, the node directly weighs in on the yield/cost/risk/margin balance, as do VRAM chips. Everything is related.
Yes, everything is related, but you presented that as a causal relation, which it largely isn't, as there are multiple steps in between the two which can change the outcome of the relation.
Resorting to alternative technologies like DirectStorage and whatever Nvidia is cooking up itself is all well and good, but that reeks a lot like DirectX 12's mGPU to me. We will see it in big-budget games when devs have the financials to support it. We won't see it in the not-as-big games and... well... those actually happen to be the better games these days - hiding behind the mainstream cesspool of instant-gratification console/MTX crap. Not as beautifully optimized, but ready to push a high-fidelity experience in your face. The likes of Kingdom Come: Deliverance, etc.
From what's been presented, DS is not going to be a complicated API to implement - after all, it's just a system for accelerating streaming and decompression of data compressed with certain algorithms. It will always take time for new tech to trickle down to developers with fewer resources, but the possible gains from this make it a far more likely candidate for adoption than something like DX12 mGPU - after all, reducing VRAM utilization can directly lead to less performance tuning of the game, lowering the workload on developers.

This tech sounds like a classic example of "work smart, not hard", where the classic approach has been a wildly inefficient brute-force scheme but this tech finally seeks to actually load data into VRAM in a smart way that minimizes overhead.
In the same vein, I don't want to be forced to rely on DLSS for playable FPS. It's all proprietary and on a per-game basis, and when it works, cool, but when it doesn't, I still want to have a fully capable GPU that will destroy everything.
I entirely agree on that, but it's hardly comparable. DLSS is proprietary and only works on hardware from one vendor on one platform, and needs significant effort for implementation. DS is cross-platform and vendor-agnostic (and is likely similar enough in how it works to the PS5's system that learning both won't be too much work). Of course a system not supporting it will perform worse and need to fall back to "dumb" pre-caching, but that's where the baseline established by consoles will serve to raise the baseline featureset over the next few years.
 
Joined
Jun 10, 2014
Messages
2,995 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
That isn't really the question though - the question is whether the amount of VRAM will become a noticeable bottleneck in cases where shader performance isn't. And that, even though this card is obviously a beast, is quite unlikely. Compute requirements typically increase faster than VRAM requirements (if for no other reason than that the amount of VRAM on common GPUs increases very slowly, forcing developers to keep VRAM usage somewhat reasonable), so this GPU is far more likely to be bottlenecked by its core and architecture than by having "only" 10GB of VRAM. So you'll be forced to lower settings for reasons other than running out of VRAM in most situations.
With so many in here loving to discuss specs, it's strange that they can't spot the obvious:
With the RTX 3080's 760 GB/s of bandwidth, if you target 144 FPS, that leaves 5.28 GB of bandwidth per frame if you utilize it 100% perfectly. Considering that most games use multiple render passes, reading the same resources multiple times and reading temporary data back again, I seriously doubt a game will use more than ~2 GB of unique texture and mesh data in a frame - so why would the RTX 3080 need 20 GB of VRAM?
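The per-frame arithmetic, spelled out (idealized - it assumes 100% bus utilization, which no real workload achieves):

```python
# Idealized per-frame bandwidth arithmetic - assumes 100% bus utilization,
# which no real workload achieves, so actual per-frame budgets are lower.

bandwidth_gb_s = 760    # RTX 3080 rated memory bandwidth
target_fps     = 144

gb_per_frame = bandwidth_gb_s / target_fps
print(f"{gb_per_frame:.2f} GB of traffic available per frame at {target_fps} FPS")  # ~5.28

# Multiple render passes re-read the same resources, so the *unique* texture and
# mesh data touched per frame is only a fraction of that - the ~2 GB ballpark above.
```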

As you were saying, computational and bandwidth requirements grow with VRAM usage; often they grow even faster.

Any time games need to resort to swapping and they cannot do that within the space of a single frame update, you will suffer stutter or frametime variance.
Any time data needs to be swapped in from system memory etc., there will be a penalty; there's no doubt about that. It's a latency issue, and no amount of PCIe or SSD bandwidth will solve it. So you're right so far.

Games have basically two ways of managing resources:
- No active management - everything is allocated during loading (still fairly common). The driver may swap if needed.
- Resource streaming - assets are streamed in and evicted as needed (see the sketch below).
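A minimal, engine-agnostic sketch of what the streaming approach boils down to (names and sizes are made up for illustration):

```python
# Hypothetical, engine-agnostic sketch of a resource-streaming pool: keep only
# what might be needed within the pre-load window and evict the stalest assets
# when the budget is exceeded. Names and sizes are made up for illustration.
from collections import OrderedDict

class StreamingPool:
    def __init__(self, budget_mib):
        self.budget = budget_mib
        self.resident = OrderedDict()            # asset name -> size in MiB

    def request(self, asset, size_mib):
        """Called for every asset that could become visible soon."""
        if asset in self.resident:
            self.resident.move_to_end(asset)     # still needed, keep it hot
            return
        while sum(self.resident.values()) + size_mib > self.budget:
            stale, _ = self.resident.popitem(last=False)   # evict least-recently-needed
            print(f"evict {stale}")
        print(f"stream in {asset} ({size_mib} MiB)")        # this is where the SSD read happens
        self.resident[asset] = size_mib

pool = StreamingPool(budget_mib=8192)            # pretend 8 GB budget
pool.request("village_terrain", 3000)
pool.request("castle_exterior", 3500)
pool.request("castle_interior", 2500)            # forces the oldest asset out
```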

The 1080 was perfectly balanced that way: it always had an odd GB to spare no matter what you threw at it. This 3080 most certainly is not balanced the same way.
This is where I have a problem with your reasoning: where is the evidence of this GPU being unbalanced?
The 3080 is two generations newer than the 1080; it has 2 GB more VRAM, more advanced compression, more cache and a more advanced design which may utilize the VRAM more efficiently. Where is your technical argument for this being less balanced?
I'll say the truth is in benchmarking, not in anecdotes about how much VRAM "feels right". :rolleyes:
 
Last edited:
Joined
Jan 8, 2017
Messages
9,504 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
What? No. That is not true whatsoever. Games allocate data for many possible future game states, of which only a handful can come true. If a player in a game is in a specific position, looking in a specific direction, then the game streams in data to account for the possible future movements of that player within the relevant pre-loading time window, which is determined by expected transfer speed. They could, for example, turn around rapidly while not moving otherwise, or run forwards, backwards, strafe, or turn around and run in any direction for whatever distance is allowed by movement speed in the game, but they wouldn't suddenly warp to the opposite end of the map or inside of a building far away. If the player is standing next to a building or large object that takes several seconds to move around, what is behind said object doesn't need to be loaded until the possibility of it being seen arises. Depending on level design etc., this can produce wildly varying levels of pre-caching (an open world requires a lot more than a game with small spaces), but the rule of thumb is to tune the engine to pre-load anything that might reasonably happen. I.e. if you're next to a wall, texture data for the room on the other side isn't pre-loaded, but they will be if you are right next to a door.

None of this even remotely makes sense, but I learnt my lesson not to argue with you, because you're never going to admit that, so I won't.

All data has to be there; it doesn't matter that you are rendering only a portion of a scene, all assets that need to be in that scene must already be in memory. They are never loaded based on the possibility of something happening or whatever.
 
Last edited:
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
Lowering texture quality...

No thank you to your "640KB of VRAM is all you'll ever need" attitude...
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
None of this even remotely makes sense, but I learnt my lesson not to argue with you, because you're never going to admit that, so I won't.

All data has to be there; it doesn't matter that you are rendering only a portion of a scene, all assets that need to be in that scene must already be in memory. They are never loaded based on the possibility of something happening or whatever.
So asset streaming doesn't exist? Either you are posting this through a time machine (if so: please share!) or you need to update your thinking.
 
Joined
Feb 3, 2017
Messages
3,822 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Mods are a very bad example. These commonly use data or methods that would not fly at all in any real game but get a pass "because it is just a mod".
 