
NVIDIA Readies RTX 3060 8GB and RTX 3080 20GB Models

Joined
Nov 11, 2016
Messages
3,393 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razer DeathAdder V3
Keyboard Razer Huntsman V3 Pro TKL
Software win11
With every generation of consoles, memory requirements ballooned; it happened with the PS3/360, and it also happened with the PS4/Xbox One. Why people are convinced that it's not going to happen this time around is beyond me.

Nvidia's marketing is top notch; of course they would never admit that VRAM is going to be a limitation in some regards, and they will provide sensible-sounding arguments. But then again, these are the same people who told everyone that unified shaders were not the future for high-performance GPUs back when ATI first introduced them.
It won't do that at 1080p, nice diversion.

The 980 or any other 4GB card isn't even included, and you can guess why. You should have also read what they said:
The evidence that VRAM is a real limiting factor is everywhere, you just have to stop turning a blind eye to it.

And somehow lowering the detail from Ultra to High is too hard for you?
Also, you can see the 2060 6GB being faster than the 1080 8GB there, even at 4K Ultra. Nvidia improves its memory compression algorithm every generation, so 8GB of VRAM on Ampere does not behave the same way as 8GB of VRAM on Turing or Pascal (AMD is even further behind).


Just look at the VRAM usage of the 2080 Ti vs the 3080: the 3080 always uses less VRAM. That's how Nvidia's memory compression works...

I would rather have a hypothetical 3080 Ti with 12GB of VRAM on a 384-bit bus than 20GB on a 320-bit bus; bandwidth over useless capacity any day. At least higher VRAM bandwidth will instantly give higher performance in today's games, not 5 years down the line when these 3080s can be had for $200...
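To put rough numbers on that trade-off, here is a quick sketch assuming 19 Gbps GDDR6X on both configurations (the 3080's launch spec; the memory speed of the hypothetical 3080 Ti is my assumption):

```python
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate."""
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(320, 19))  # RTX 3080, 10GB on a 320-bit bus          -> 760.0 GB/s
print(bandwidth_gbs(384, 19))  # hypothetical 3080 Ti on a 384-bit bus    -> 912.0 GB/s (~20% more)
```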
 
Joined
Jun 29, 2009
Messages
2,012 (0.36/day)
Location
Heart of Eutopia!
System Name ibuytheusedstuff
Processor 5960x
Motherboard x99 sabertooth
Cooling old socket775 cooler
Memory 32 Viper
Video Card(s) 1080ti on morpheus 1
Storage raptors+ssd
Display(s) acer 120hz
Case open bench
Audio Device(s) onb
Power Supply antec 1200 moar power
Mouse mx 518
Keyboard roccat arvo
Would really like to know how a GTX 780 3GB versus 6GB comparison turns out.
I know it's old tech, but I think if I can get a cheap 6GB card .......
 
Joined
Jan 8, 2017
Messages
9,404 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
And somehow lowering the detail from Ultra to High is too hard for you?

On a mid-range card, or a high-end one from six years ago? No, it wouldn't be. Having to do that a year or two from now with a high-end card you bought today would be kind of pathetic.

I would rather have a hypothetical 3080 Ti with 12GB of VRAM on a 384-bit bus than 20GB on a 320-bit bus; bandwidth over useless capacity any day.

Wouldn't you rather stick with the 10GB? I get the sense that all this extra VRAM would be useless, so why would you want something with more memory?

Just look at the VRAM usage of the 2080 Ti vs the 3080: the 3080 always uses less VRAM. That's how Nvidia's memory compression works...

That's not how memory compression works at all; the memory that you see there is allocated memory. The compression takes place internally on the GPU at some level. In other words, 1GB that was allocated has to remain visible and addressable as 1GB at all times, otherwise it would break the application, so the effects of the compression are not visible to the application. The compression is only applied to color data anyway, which is only a portion of what you need to render a scene, so a lot of the data isn't actually compressible.

That being said, the reason more memory is used on the 2080 Ti is that the more memory is available, the more allocations will take place, similar to how Windows reports higher memory usage when more RAM is installed. Memory allocation requests are queued up and may not happen at the exact moment the application issues them.
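If anyone wants to see what monitoring tools are actually reporting, here is a minimal sketch, assuming an Nvidia GPU with nvidia-smi on the PATH and a single-GPU system: the number it returns is allocated VRAM, and (per the argument above) the on-chip compression is transparent and never changes it.

```python
import subprocess

# Query the driver's view of VRAM. "memory.used" is the amount currently
# *allocated* across all processes; lossless color compression happens
# transparently inside the GPU and is not reflected in this number.
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    text=True,
)
first_gpu = out.strip().splitlines()[0]          # single-GPU assumption
used_mib, total_mib = (int(v) for v in first_gpu.split(","))
print(f"Allocated VRAM: {used_mib} MiB of {total_mib} MiB")
```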
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.37/day)
The 3080 doesn't have 5 years before it can be had for $200. The GTX 780 3GB was humiliated by the GTX 970 only 500 days later, not to mention the 780 Ti, which didn't last 300 days before getting slashed in half.

Even if Nvidia released the 3080 with a 384-bit bus I still wouldn't buy it. Save the money for later, when we can have a decent 1,008 GB/s 12GB card on 6nm EUV that clocks 30% higher.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
The Xbox Series X has 16GB of RAM, of which 2.5GB is reserved for the OS and the remaining 13.5GB is available for software. 10GB of those 13.5 are of the full-bandwidth (560 GB/s) variety, with the remaining 3.5GB being slower due to that console's odd two-tiered RAM configuration. That (likely) means that games will at the very most use 10GB of VRAM, though the split between game RAM usage and VRAM is very likely not going to be 3.5:10; those would be edge cases at the very best. Sony hasn't disclosed this kind of data, but given that the PS5 has a less powerful GPU, it certainly isn't going to need more VRAM than the XSX.
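For reference, a small sketch of where those numbers come from, assuming the commonly reported Series X layout of ten 14 Gbps GDDR6 chips on a 320-bit bus, six of them 2GB and four of them 1GB:

```python
# Xbox Series X memory layout (as commonly reported): ten GDDR6 chips,
# one per 32-bit channel, at 14 Gbps; six 2GB chips and four 1GB chips.
CHIPS = [2] * 6 + [1] * 4          # per-chip capacity in GB
GBPS_PER_PIN = 14

total_gb = sum(CHIPS)                               # 16 GB
fast_gb = len(CHIPS) * min(CHIPS)                   # bottom 1 GB of every chip -> 10 GB
slow_gb = total_gb - fast_gb                        # upper half of the 2 GB chips -> 6 GB

fast_bw = len(CHIPS) * 32 * GBPS_PER_PIN / 8        # all 10 channels   -> 560 GB/s
slow_bw = CHIPS.count(2) * 32 * GBPS_PER_PIN / 8    # only 6 channels   -> 336 GB/s

print(f"{fast_gb} GB @ {fast_bw:.0f} GB/s, {slow_gb} GB @ {slow_bw:.0f} GB/s")
print(f"Game-visible: {fast_gb} GB fast + {slow_gb - 2.5} GB slow (2.5 GB reserved for the OS)")
```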
Comparing consoles to PCs is like comparing apples and pears. For one: consoles use software totally differently from PCs; they often come with locked resolutions and frame rates, etc. For two: no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think it's ever gonna change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate hardware specifications on consoles, whereas on PC you have the freedom to use whatever settings you like.

Example: playing The Witcher 3 on an Xbox One with washed out textures at 900p is not the same as playing it on a similarly aged high-end PC with ultra settings at 1080p or 1440p.
 
Joined
Nov 11, 2016
Messages
3,393 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razer DeathAdder V3
Keyboard Razer Huntsman V3 Pro TKL
Software win11
On a mid-range card, or a high-end one from six years ago? No, it wouldn't be. Having to do that a year or two from now with a high-end card you bought today would be kind of pathetic.
Wouldn't you rather stick with the 10GB? I get the sense that all this extra VRAM would be useless, so why would you want something with more memory?

For $100-200 more I would get a hypothetical 3080 Ti with 12GB of VRAM on a 384-bit bus, not the 3080 with 20GB. However, this 3080 Ti would jeopardize 3090 sales, so I'm guessing Nvidia will release it much later.

The 3080 PCB already has the space for 2 extra VRAM modules


I had to lower some details on pretty much top-of-the-line GPUs like the Titan X Maxwell, 1080 Ti and 2080 Ti during the first year of owning them, and not even at 4K. The Ultra detail setting is pretty much only used for benchmarks; IRL people tend to prefer higher FPS over IQ gains you can't distinguish unless you zoom 4x into a recording LOL.


The new norm for 2021 onward should be RT Reflections + High details, let's just hope so :D
 
Joined
Feb 3, 2017
Messages
3,732 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
For one: consoles use software totally differently from PCs; they often come with locked resolutions and frame rates, etc. For two: no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think it's ever gonna change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate hardware specifications on consoles, whereas on PC you have the freedom to use whatever settings you like.
That used to be true maybe even two generations ago, but today consoles practically are PCs: x86 CPUs, GPUs on mainstream architectures, common buses, RAM, etc. The XB1/XBX and the upcoming XBSX/XBSS run on a Windows kernel and DirectX APIs; Sony runs a FreeBSD-based OS with partially custom APIs. If there were no artificial restrictions keeping the console garden walls in place, you and I could run all of that on our hardware. Resolutions and frame rates, as well as the game settings, are up to the developer, and the only benefit for consoles is the predefined spec - optimizing a game for a specific machine or two is much, much easier than creating the low-medium-high-ultra settings that work well across swaths of different hardware.

Console performance has been above a midrange PC of the same era in the past - the PS3/Xbox 360 were at the level of a (very) high-end PC at launch. The PS4/XB1 were (or at least seemed to be) an outlier with decidedly mediocre hardware. The XBSX/PS5 are at the level of high-end hardware as of half a year ago, when their specs were finalized, which is back to normal. The XBSS is the weird red-headed stepchild that gets the bare minimum, and we'll have to see how it fares. The PC hardware performance ceiling has been climbing for a while now. Yes, high-end stuff is getting incredibly expensive and worse and worse in terms of bang-per-buck, but it is also getting relatively more and more powerful.
 
Joined
Sep 1, 2020
Messages
2,315 (1.51/day)
Location
Bulgaria
Consoles...
Unreal Engine 5? The next "super ultimate" DirectX? 8K, Nvidia Hopper, AMD RDNA3/4, GDDR7, HBM3(+)/4... all to get your money!
There are many reasons to consign the RTX 30xx cards, with "small" or "big" VRAM alike, to history, regardless of whether and to what extent those reasons are justified.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Comparing consoles to PCs is like comparing apples and pears. For one: consoles use software totally differently from PCs; they often come with locked resolutions and frame rates, etc. For two: no console's performance has ever risen above that of a mid-range PC from the same era, and I don't think it's ever gonna change (mostly due to price and size/cooling restrictions). Devs tweak game settings to accommodate hardware specifications on consoles, whereas on PC you have the freedom to use whatever settings you like.

Example: playing The Witcher 3 on an Xbox One with washed out textures at 900p is not the same as playing it on a similarly aged high-end PC with ultra settings at 1080p or 1440p.
I wasn't the one starting with the console-to-PC comparisons here; I was just responding. Besides, comparing the upcoming consoles to previous generations is ... tricky. Not only are they architecturally more similar to PCs than ever (true, the current generation are also X86+AMD GPU, but nobody ever gamed on a Jaguar CPU), but unlike last time both console vendors are investing significantly in hardware (the PS4 at launch performed about the same as a lower midrange dGPU, at the time a ~$150 product; upcoming consoles will likely perform similarly to the 2070S (and probably 3060), a $500 product). No previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that. Besides that, did I ever say that playing a game on a console at one resolution was the same as playing the same game on a PC at a different one? Nice straw man you've got there. Stop putting words in my mouth. The point of making console comparisons this time around is that there has never been a closer match between consoles and PCs in terms of actual performance than what we will have this November.
There you have it. Who decided a PC dGPU is developed for a 3-year life cycle? It certainly wasn't us. They last double that time without any hiccups whatsoever, and even then hold resale value, especially the high end. I'll take 4-5 years if you don't mind. The 1080 I'm running now makes 4-5 just fine, and then some. The 780 Ti I had prior did similarly, and they both had life in them still.

Two GPU upgrades per console gen is utterly ridiculous and unnecessary, since we all know the real jumps happen with real console gen updates.
Well, don't blame me, I didn't make GPU vendors think like this. Heck, I'm still happily using my Fury X (though it is starting to struggle enough at 1440p that it's time to retire it, but I'm very happy with its five-year lifespan). I would say this stems from being in a highly competitive market that has historically had dramatic shifts in performance in relatively short time spans, combined with relatively open software ecosystems (the latter of which consoles have avoided, thus avoiding the short term one-upmanship of the GPU market). That makes software a constantly moving target, and thus we get the chicken-and-egg-like race of more demanding games requiring faster GPUs allowing for even more demanding games requiring even faster GPUs, etc., etc. Of course, as the industry matures those time spans are growing, and given slowdowns in both architectural improvements and manufacturing nodes any high end GPU from today is likely to remain relevant for at least five years, though likely longer, and at a far higher quality of life for users than previous ones.

That being said, future-proofing by adding more VRAM is a poor solution. Sure, the GPU needs to have enough VRAM, there is no question about that. It needs an amount of VRAM and a bandwidth that both complement the computational power of the GPU, otherwise it will quickly become bottlenecked. The issue is that VRAM density and bandwidth both scale (very!) poorly with time - heck, base DRAM die clocks haven't really moved for a decade or more, with only more advanced ways of packing more data into the same signal increasing transfer speeds. But adding more RAM is very expensive too - heck, the huge framebuffers of current GPUs are a lot of the reason for high end GPUs today being $700-1500 rather than the $200-400 of a decade ago - die density has just barely moved, bandwidth is still an issue, requiring more complex and expensive PCBs, etc. I would be quite surprised if the BOM cost of the VRAM on the 3080 is below $200. Which then begs the question: would it really be worth it to pay $1000 for a 3080 20GB rather than $700 for a 3080 10GB, when there would be no perceptible performance difference for the vast majority of its useful lifetime?

The last question is particularly relevant when you start to look at VRAM usage scaling and comparing the memory sizes in question to previous generations where buying the highest VRAM SKU has been smart. Remember, scaling on the same bus width is either 1x or 2x (or potentially 4x I guess), so like we're discussing here, the only possible step is to 20GB - which brings with it a very significant cost increase.

The base SKU has 10GB, which is the second highest memory count of any consumer-facing GPU in history. Even if it's comparable to the likely GPU-allocated amount of RAM on upcoming consoles, it's still a very decent chunk. On the other hand, previous GPUs with different memory capacities have started out much lower - 3/6GB for the 1060, 4/8GB for a whole host of others, and 2/4GB for quite a few if you look far enough back. The thing here is: while the percentage increases are always the same, the absolute amount of VRAM now is massively higher than in those cases - the baseline we're currently talking about is higher than the high end option of the previous comparisons. What does that mean? For one, you're already operating at such a high level of performance that there's a lot of leeway for tuning and optimization. If a game requires 1GB more VRAM than what's available, lowering settings to fit that within a 10GB framebuffer will be trivial. Doing the same on a 3GB card? Pretty much impossible. A 2GB reduction in VRAM needs is likely more easily done on a 10GB framebuffer than a .5GB reduction on a 3GB framebuffer. After all, there is a baseline requirement that is necessary for the game to run, onto which additional quality options add more. Raising the ceiling for maximum VRAM doesn't as much shift the baseline requirement upwards (though that too creeps upwards over time) as it expands the range of possible working configurations.

Sure, 2GB is largely insufficient for 1080p today, but 3GB is still fine, and 4GB is plenty (at settings where GPUs with these amounts of VRAM would actually be able to deliver playable framerates). So you previously had a scale from, say, .5-4GB, then 2-8GB, and in the future maybe 4-12GB. Again, looking at percentages is misleading, as it takes a lot of work to fill those last few GB. And the higher you go, the easier it is to ease off on a setting or two without perceptibly losing quality. I.e. your experience will not change whatsoever, except that the game will (likely automatically) lower a couple of settings a single notch.
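A quick sketch of why the steps on a fixed bus are 1x or 2x: GDDR6/6X chips come in 1GB (8Gb) and, eventually, 2GB (16Gb) densities, and each chip occupies one 32-bit channel, so the capacity options follow directly from bus width (clamshell configurations ignored here):

```python
# Possible VRAM capacities for a given bus width, assuming one GDDR6/6X chip
# per 32-bit channel and chip densities of 1GB (8Gb) or 2GB (16Gb).
def capacity_options(bus_width_bits: int) -> list[int]:
    chips = bus_width_bits // 32           # one chip per 32-bit channel
    return [chips * 1, chips * 2]          # in GB, for 1GB and 2GB chips

for bus in (192, 256, 320, 384):
    print(bus, "bit ->", capacity_options(bus), "GB")
# 320 bit -> [10, 20] GB (RTX 3080), 384 bit -> [12, 24] GB (RTX 3090)
```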

Of course, in the time it will take for 10GB to become a real limitation at 4k - I would say at minimum three years - the 3080 will likely not have the shader performance to keep up anyhow, making the entire question moot. Lowering settings will thus become a necessity no matter the VRAM amount.

So, what will you then be paying for with a 3080 20GB? Likely 8GB of VRAM that will never see practical use (again, it will more than likely have stuff allocated to it, but it won't be used in gameplay), and the luxury of keeping a couple of settings pegged to the max rather than lowering them imperceptibly. That might be worth it to you, but it certainly isn't for me. In fact, I'd say it's a complete waste of money.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
That used to be true maybe even two generations ago, but today consoles practically are PCs: x86 CPUs, GPUs on mainstream architectures, common buses, RAM, etc. The XB1/XBX and the upcoming XBSX/XBSS run on a Windows kernel and DirectX APIs; Sony runs a FreeBSD-based OS with partially custom APIs. If there were no artificial restrictions keeping the console garden walls in place, you and I could run all of that on our hardware. Resolutions and frame rates, as well as the game settings, are up to the developer, and the only benefit for consoles is the predefined spec - optimizing a game for a specific machine or two is much, much easier than creating the low-medium-high-ultra settings that work well across swaths of different hardware.
This is true.
Console performance has been above a midrange PC of the same era in the past - the PS3/Xbox 360 were at the level of a (very) high-end PC at launch. The PS4/XB1 were (or at least seemed to be) an outlier with decidedly mediocre hardware. The XBSX/PS5 are at the level of high-end hardware as of half a year ago, when their specs were finalized, which is back to normal. The XBSS is the weird red-headed stepchild that gets the bare minimum, and we'll have to see how it fares. The PC hardware performance ceiling has been climbing for a while now. Yes, high-end stuff is getting incredibly expensive and worse and worse in terms of bang-per-buck, but it is also getting relatively more and more powerful.
I disagree. To stay with my original example, The Witcher 3 (a game from 2015) ran on the Xbox One (a late 2013 machine) with reduced quality settings at 900p to keep frame rates acceptable. Back then, I had an AMD FX-8150 (no need to mention how bad that CPU was for gaming) and a Radeon HD 7970, which was AMD's flagship card in early 2012. I could still run the game at 1080p with ultra settings (except for HairWorks) at 40-60 FPS depending on the scene.

It's true that consoles get more and more powerful with every generation, but so do PCs, and I honestly can't ever see a $500 machine beating a $2,000 one. And thus, game devs will always have to impose limitations to make games run as smoothly on consoles as they do on high-end PCs, making the two basically incomparable, even in terms of VRAM requirements.
 
Joined
Feb 3, 2017
Messages
3,732 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
No previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that.
Like I said, PS4/XB1 generation seems to be a fluke. We only need to look at the generation before that:
- PS3's (Nov 2006) RSX is basically a hybrid of the 7800 GTX (June 2005, $600) and 7900 GTX (March 2006, $500).
- Xbox 360's (Nov 2005) Xenos is an X1800/X1900 hybrid, with the X1800 XL (Oct 2005, $450) probably the closest match.

While more difficult to compare, CPUs were high-end as well. Athlon 64 X2s came out in mid-2005. PowerPC-based CPUs in both consoles were pretty nicely multithreaded before that was a mainstream thing for PCs.
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.37/day)
The 3080 20GB has 1,536 more shaders and possibly SLI, so it's not the same card at all. 10GB is just perfect if you find the right balance, which varies from game to game. But you can't undervolt it to 0.95V at the stock 1960MHz, and overclocking headroom is nonexistent. Twice as fast as a 2070 at twice the power: a power hog. It's the fastest card, but if a 2070 can barely do 1440p/60, there's no way the 3080 does 4K/120; that's only possible in low-texture esports games.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I wasn't the one starting with the console-to-PC comparisons here; I was just responding. Besides, comparing the upcoming consoles to previous generations is ... tricky. Not only are they architecturally more similar to PCs than ever (true, the current generation are also X86+AMD GPU, but nobody ever gamed on a Jaguar CPU), but unlike last time both console vendors are investing significantly in hardware (the PS4 at launch performed about the same as a lower midrange dGPU, at the time a ~$150 product; upcoming consoles will likely perform similarly to the 2070S (and probably 3060), a $500 product). No previous console's performance has risen above a mid-range PC from the same era, but the upcoming ones are definitely trying to change that. Besides that, did I ever say that playing a game on a console at one resolution was the same as playing the same game on a PC at a different one? Nice straw man you've got there. Stop putting words in my mouth. The point of making console comparisons this time around is that there has never been a closer match between consoles and PCs in terms of actual performance than what we will have this November.
I'm not putting words into your mouth, I know you weren't the one starting this train of thought. ;)

All I tried to say is, there's no point in drawing conclusions regarding VRAM requirements on PC based on how much RAM the newest Xbox and Playstation have (regardless of who started the conversation). Game devs can always tweak settings to make games playable on consoles, while on PC you have the freedom to choose from an array of graphics settings to suit your needs.
 
Joined
Dec 14, 2011
Messages
1,010 (0.21/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Corsair iCUE H115i Elite Capellix 280mm
Memory 32GB G.Skill DDR4 3600Mhz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage Sabrent Rocket 1TB M.2
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Redragon K618 RGB PRO
Software Microsoft Windows 11 Pro (64-bit)
I hope the 3060Ti will have at least 12GB VRAM. :)
 
Joined
Feb 3, 2017
Messages
3,732 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
All I tried to say is, there's no point in drawing conclusions regarding VRAM requirements on PC based on how much RAM the newest Xbox and Playstation have (regardless of who started the conversation). Game devs can always tweak settings to make games playable on consoles, while on PC you have the freedom to choose from an array of graphics settings to suit your needs.
VRAM requirements today depend most notably on one thing - the texture pool, usually exposed as the texture quality setting. Game assets - such as textures - are normally the same (or close enough) across different platforms. The texture pool nowadays is almost always dynamic: there is an allocated chunk of VRAM into which textures are constantly loaded and unloaded based on whatever scheme the developer deemed good enough.
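As a toy illustration of such a scheme (not any particular engine's implementation; all texture names and sizes below are made up), here is a fixed-budget pool that evicts the least recently used textures when a new request doesn't fit:

```python
from collections import OrderedDict

class TexturePool:
    """Fixed-budget streaming pool: evict least recently used textures."""
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident: "OrderedDict[str, int]" = OrderedDict()  # name -> size in MB
        self.used_mb = 0

    def request(self, name: str, size_mb: int) -> None:
        if name in self.resident:              # already resident: mark as recently used
            self.resident.move_to_end(name)
            return
        # Evict the oldest entries until the new texture fits in the budget.
        while self.used_mb + size_mb > self.budget_mb and self.resident:
            evicted, evicted_size = self.resident.popitem(last=False)
            self.used_mb -= evicted_size
            print(f"evict {evicted} ({evicted_size} MB)")
        self.resident[name] = size_mb          # "upload" the texture
        self.used_mb += size_mb

pool = TexturePool(budget_mb=512)              # e.g. the pool behind a quality setting
for tex, size in [("rock_albedo", 170), ("rock_normal", 170), ("grass", 128), ("sky", 200)]:
    pool.request(tex, size)
print("resident:", list(pool.resident), pool.used_mb, "MB")
```

A higher texture quality setting essentially just raises `budget_mb`; a card with less VRAM forces a smaller budget and therefore more eviction and streaming traffic.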
 
Joined
Sep 17, 2014
Messages
22,341 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
VRAM requirements today depend most notably on one thing - the texture pool, usually exposed as the texture quality setting. Game assets - such as textures - are normally the same (or close enough) across different platforms. The texture pool nowadays is almost always dynamic: there is an allocated chunk of VRAM into which textures are constantly loaded and unloaded based on whatever scheme the developer deemed good enough.

It's a bit like how World of Warcraft evolved. No matter how silly you play, everyone can feel like their stuff runs like a boss and call it the real thing.
 
Joined
Jun 10, 2014
Messages
2,978 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Damnit all this talk of VRAM compression makes me wanna ask w1zzard to do a generational testing with the biggest VRAM card of each gen, once we have big navi and the 3090 out to see

1. How much gen on gen improvement there is in each camp
2. how much VRAM he can eat out of a 24GB card
3. what it takes to finally make him cry
Be aware that the VRAM usage reported by the driver doesn't tell you everything, as the actual memory usage can vary a lot during a single frame. Temporary buffers are used a lot during multiple render passes, and are usually allocated once, but flushed and compressed during a single frame. Also, be aware that some games may allocate a lot of extra VRAM without it being strictly necessary.

The best way to find out whether you have enough VRAM is to look for measurable symptoms of insufficient VRAM, primarily by measuring frame time consistency. Insufficient VRAM will usually cause significant stutter and sometimes even glitching.
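One way to quantify that consistency, sketched here under the assumption that you have a PresentMon-style frame time log with an MsBetweenPresents column (the file name is hypothetical):

```python
import csv
import statistics

# Load per-frame times in milliseconds from a PresentMon-style CSV log.
frame_times_ms = []
with open("presentmon_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        frame_times_ms.append(float(row["MsBetweenPresents"]))

frame_times_ms.sort()
avg_fps = 1000 / statistics.mean(frame_times_ms)

# "1% low" FPS: average FPS over the worst 1% of frames. A large gap between
# this and the average FPS is the stutter signature of (among other things)
# running out of VRAM.
worst_1pct = frame_times_ms[int(len(frame_times_ms) * 0.99):]
low_1pct_fps = 1000 / statistics.mean(worst_1pct)
print(f"avg {avg_fps:.1f} fps, 1% low {low_1pct_fps:.1f} fps")
```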
 
Joined
Jul 5, 2013
Messages
27,473 (6.63/day)
Having to do that a year or two from now with a high-end card you bought today would be kind of pathetic.
No, it wouldn't. Software devs often create titles that push the performance envelope further than is currently viable. Crysis, anyone? At a time when 1080p was becoming the standard, they released a game that would only run well at 720p with settings turned down, and that was on brand new, top-shelf hardware. Even the very next generation of hardware struggled to run Crysis at 1080p@60Hz. Things have not changed. The simple and universal rule is: if you're not getting the performance you desire, adjust your settings down until performance is acceptable to you. Everyone has a different idea of what "acceptable" actually is.

As for the debate that keeps repeating itself with every new generation of hardware: RAM. Folks, just because @W1zzard did not encounter any issues with 10GB of VRAM in the testing done at this time does not mean that 10GB will remain enough in the future (near or otherwise). Technology always progresses, and software often precedes hardware. Therefore, if you buy a card now that has a certain amount of RAM on it and you just go with the bog standard, you may find yourself coming up short in performance later. 10GB seems like a lot now. But then so did 2GB just a few years ago. Shortsighted planning always ends poorly. Just ask the people who famously said "No one will ever need more than 640K of RAM."

A 3060, 3070 or 3080 with 12GB, 16GB or 20GB of VRAM is not a waste. It is an option that offers a level of future proofing. For most people that is an important factor because they only ever buy a new GPU every 3 or 4 years. They need all the future-proofing they can get. And before anyone says "Future-proofing is a pipe-dream and a waste of time.", put a cork in that pie-hole. Future-proofing a PC purchase by making strategic choices of hardware is a long practiced and well honored buying methodology.
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.37/day)
12, 16, and 20GB are not going to make sense.
20GB is $300 more expensive, for what? So that at some unclear point in the future, when the card would be obsolete anyway, you could be granted one more game that runs a little better?
16GB price is very close to the 10GB card, so you have to trade 30% less performance for 6GB more. I mean, you are deliberately going to make a worse choice because of some future uncertainty.
As for 12GB, only AMD will make a 192-bit card at this point. The 3060 will be as weak as a 2070, so why put 12GB on it?
 
Joined
Jul 5, 2013
Messages
27,473 (6.63/day)
20GB is $300 more expensive
16GB price is very close to the 10GB card
Rubbish! You don't and can't know any of that. Exact card models and prices have not been announced. Additionally, history shows those conclusions have no merit.

Example? The GTX 770. The 2GB version is pretty much unusable for modern games, but the 4GB versions are still relevant, as they are still playable because of the additional VRAM. Even though the GPU dies are the same, the extra VRAM makes all the difference. And it made a big difference then too. The cost? A mere $35 extra over the 2GB version. The same has been true throughout the history of GPUs, regardless of who made them.
 

Nkd

Joined
Sep 15, 2007
Messages
364 (0.06/day)
Of course they will. But people seem to forget that it's already a known fact that the 2GB memory chips won't be available until 2021. So we probably won't see these cards until January, or December at the earliest if they rush them.
 
Joined
Mar 10, 2015
Messages
3,984 (1.13/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Didn't that same game exist when the 2080 Ti was released?

How much did the 2080, which this is a direct replacement for, have though? Really gained VRAM this gen.
 
Joined
May 3, 2018
Messages
2,881 (1.21/day)
Big Navi isn't even released and they are panicking already. I would have thought this would be part of the mid-life update. Anyway, 10GB or 20GB, you won't be getting one soon.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A 3060, 3070 or 3080 with 12GB, 16GB or 20GB of VRAM is not a waste. It is an option that offers a level of future proofing. For most people that is an important factor because they only ever buy a new GPU every 3 or 4 years. They need all the future-proofing they can get. And before anyone says "Future-proofing is a pipe-dream and a waste of time.", put a cork in that pie-hole. Future-proofing a PC purchase by making strategic choices of hardware is a long practiced and well honored buying methodology.
That isn't really the question though - the question is whether the amount of VRAM will become a noticeable bottleneck in cases where shader performance isn't. And that, even though this card is obviously a beast, is quite unlikely. Compute requirements typically increase faster than VRAM requirements (if for no other reason than that the amount of VRAM on common GPUs increases very slowly, forcing developers to keep VRAM usage somewhat reasonable), so this GPU is far more likely to be bottlenecked by its core and architecture rather than having "only" 10GB of VRAM. So you'll be forced to lower settings for reasons other than running out of VRAM in most situations. And, as I said above, with a VRAM pool that large, you have massive room for adjusting a couple of settings down to stay within the limitations of the framebuffer should such a situation occur.

Your comparison to the 770 is as such a false equivalency: that comparison must then also assume that GPUs in the (relatively near) future will have >10GB of VRAM as a minimum, as that is what would be required for this amount to truly become a bottleneck. The modern titles you speak of need >2/<4 GB of VRAM to run smoothly at 1080p. Even the lowest end GPUs today come in 4GB SKUs, and two generations back, while you did have 2 and 3GB low-end options, nearly everything even then was 4GB or more. For your comparison to be valid, the same situation must then be true in the relatively near future, only 2GB gets replaced with 10GB. And that isn't happening. Baseline requirements for games are not going to exceed 10GB of VRAM in any reasonably relevant future. VRAM is simply too expensive for that - it would make the cheapest GPUs around cost $500 or more - DRAM bit pricing isn't budging. Not to mention that the VRAM creep past 2GB has taken years. To expect a sudden jump of, say, >3x (assuming ~3GB today) in a few years? That would be an extremely dramatic change compared to the only relevant history we have to compare to.

Besides, you (and many others here) seem to be mixing up two similar but still different questions:
a) Will 10GB be enough to not bottleneck the rest of this GPU during its usable lifespan? (i.e. "can it run Ultra settings until I upgrade?") and
b) Will 10GB be enough to not make this GPU unusable in 2-3 generations? (i.e. "will this be a dud in X years?")

Question a) is at least worth discussing, and I would say "maybe not, but it's a limitation that can be easily overcome by changing a few settings (and gaming at Ultra is kind of silly anyhow), and at those settings you'll likely encounter other limitations beyond VRAM". Question b), which is what you are alluding to with your GTX 770 reference, is pretty much out of the question, as baseline requirements (i.e. "can it run games at all?") aren't going to exceed 10GB in the next decade no matter what. Will you be able to play at 4K medium-high settings at reasonable frame rates with 10GB of VRAM in a decade? No - that would be unprecedented. But will you be able to play 1080p or 1440p at those types of settings with 10GB of VRAM? Almost undoubtedly (though shader performance is likely to force you to lower settings - but not the VRAM). And if you're expecting future-proofing to keep your GPU relevant at that kind of performance level for that kind of time, your expectations of what is possible are fundamentally flawed. The other parts of the GPU will be holding you back far more than the VRAM in that scenario. If the size of your framebuffer is making your five-year-old high-end GPU run at a spiky 10fps instead of, say, 34, does that matter at all? Unless the game in question is extremely slow-paced, you'd need to lower settings anyhow to get a reasonable framerate, which will then in all likelihood bring you below the 10GB limitation.

I'm all for future-proofing, and I absolutely hate the shortsighted hypermaterialism of the PC building scene - there's a reason I've kept my current GPU for five years - but adding $200-300 (high-end GDDR chips cost somewhere in the realm of $20/GB) to the cost of a part to add something that in all likelihood won't add to its longevity at all is not smart future-proofing. If you're paying that to avoid one bottleneck just to be held back by another, you've overpaid for an unbalanced product.
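Taking the ~$20/GB figure above at face value, the back-of-the-envelope math for a 20GB variant looks something like this (the retail markup factor is purely an assumption for illustration):

```python
# Rough cost delta for a 20GB vs 10GB card, using the ~$20/GB GDDR estimate above.
extra_gb = 20 - 10
cost_per_gb = 20                  # USD, rough estimate quoted in the post
bom_delta = extra_gb * cost_per_gb
retail_delta = bom_delta * 1.5    # assumed markup, for illustration only
print(f"~${bom_delta} extra in memory alone, plausibly ${retail_delta:.0f}+ at retail")
```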
 
Joined
Sep 17, 2014
Messages
22,341 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000

I'm just going to leave this 1440p, 60 FPS Skyrim SE video here and leave you all to your thoughts on this dilemma. Enjoy that 10GB "4K" card at 1440p, in 'day before yesterday's' content, at a lower res than what the card is marketed for.

It's not exactly the most niche game either... I guess 'Enthusiast' only goes as far as the limits Nvidia PR sets for you with each release? Curious ;) Alongside this, I want to stress again that TWO previous generations had weaker cards with 11GB. They'd most likely run this game better than a new release.

Since it's a well-known game, we also know how the engine responds when VRAM runs short: you get stutter, and it's not pretty. Been there, done that way back on a 770 with a measly 2GB. Same shit, same game, different day.

So sure, if all you care about is playing every console port and you never mod anything, 10GB will do you fine. But a 3070 will then probably do the same thing for you, won't it? Nvidia's cost-cutting measure works both ways, really. We can do it too. Between a 3080 with 20GB (at whatever price point it gets) and a 3070 with 8GB, the 3080 10GB is in a very weird place. Add consoles and a Big Navi chip with 16GB and it gets utterly strange, if not 'the odd one out'. A bit like the place where SLI is now. The writing's on the wall.


How much did the 2080, which this is a direct replacement for, have though? Really gained VRAM this gen.

The 2080 is also 40-50% slower. 20% more VRAM for 40-50% more core power? Sounds like a good balance, after the 2080 Super was already rivalling the 1080 Ti's core power while being 3GB short.

The balance is shifting, and it's well known that there is tremendous pressure on production lines for high-end RAM. We're paying the price with reduced life cycles on expensive products. That is the real thing happening here.
 