
How much does VRAM matter?

Good video...
 
When I run GTA V my video card uses 4100~4200 MB of VRAM at 1080p/60fps.

We all know that system RAM and VRAM matter btw, especially with new games coming out... year after year... :)
 
When I run GTA V my video card uses 4100~4200 MB of VRAM at 1080p/60fps.
Same here. Sometimes it hits 4.5GB. GTA 5 loves VRAM.
 
VRAM usage has been inflating since mid-2014, beginning with Watch Dogs.
 
When I run GTA V my video card uses 4100~4200 MB of VRAM at 1080p/60fps.

We all know that system RAM and VRAM matter btw, especially with new games coming out... year after year... :)
Same here. Sometimes it hits 4.5GB. GTA 5 loves VRAM.
Probably ROTTR syndrome, where the engine is coded to use "up" to the maximum VRAM available but still runs on a 2GB card.

i.e.: Rise of the Tomb Raider (ROTTR)
on my "R.I.P." 980 4GB (4096 MB): 4016 MB at 1080p Ultra
on my GT860M 2GB (2048 MB): 2018 MB at 1080p High (still 2018 MB on Ultra, but waaayyy slower :laugh: )

4GB of VRAM and 16GB of system RAM is the minimum now, but ... still the "standard" for smooth gameplay and a fair amount of eye candy.

(btw, the video could have run on a 4GB card no issues; 4030 MB max is not 4GB ;) it's a little, little, little under 4GB :roll: )
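
That "uses up to whatever it finds" behavior is easy to picture as a budget fit: keep dropping mip levels until the texture set fits the VRAM the engine detected. Here is a minimal Python sketch of the idea; the workload, budget logic, and mip-drop cap are all invented for illustration and not taken from any real engine:

```python
# Hypothetical sketch: an engine fills whatever VRAM budget it detects
# (4 GB card -> ~4 GB used, 2 GB card -> ~2 GB used) by dropping mip
# levels. Texture list and sizes are made up for illustration.

def mip_chain_bytes(width, height, bytes_per_pixel=4):
    """Total size of a texture plus its full mip chain (~4/3 of base)."""
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_pixel
        width //= 2
        height //= 2
    return total

def fit_to_budget(textures, budget_bytes):
    """Drop the top mip of every texture until the set fits the budget."""
    mip_drop = 0
    while True:
        used = sum(mip_chain_bytes(w >> mip_drop, h >> mip_drop)
                   for (w, h) in textures)
        if used <= budget_bytes or mip_drop > 4:
            return mip_drop, used
        mip_drop += 1

scene = [(4096, 4096)] * 50 + [(2048, 2048)] * 200   # invented workload
for gb in (4, 2):
    drop, used = fit_to_budget(scene, gb * 1024**3)
    print(f"{gb} GB card: drop {drop} mip level(s), ~{used / 1024**2:.0f} MB used")
```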
 
Just finished Batman: Arkham Knight on my 1GB 7750; first time I've seen a noticeable speed-up from overclocking the system RAM.
 
VRAM capacity is one of those things that doesn't matter until you don't have enough of it.
 
Textures are a big determinant of VRAM consumption. They're the images that cover all those polygons in the game world. Your graphics card stores them in its memory because those files have to be accessed quickly (and their file size scales quadratically with resolution; compare, say, a 2000x2000 texture with a 4000x4000 one). Most of the other settings, like anti-aliasing, shadows, and lights, are more straight-up GPU computations.

Notice how in the GTX 580 (comes in 1.5GB and 3GB versions) video, the Texture Quality is set to "Normal". So yeah, no pretty textures for you if you can't supply the VRAM.
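
To put numbers on that quadratic-scaling point, a quick back-of-envelope calculation for uncompressed 32-bit RGBA textures (4 bytes per pixel assumed; mip chains and compression ignored):

```python
# Back-of-envelope check of the quadratic scaling claim above:
# an uncompressed 32-bit RGBA texture costs width * height * 4 bytes,
# so doubling the resolution quadruples the memory.

for side in (1000, 2000, 4000, 8000):
    size_mb = side * side * 4 / 1024**2
    print(f"{side}x{side}: {size_mb:8.1f} MB")

# 2000x2000 -> ~15.3 MB, 4000x4000 -> ~61.0 MB: four times the VRAM
# for one step up in texture resolution.
```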
 
Textures are a big determinant of VRAM consumption. They're the images that cover all those polygons in the game world. Your graphics card stores them in its memory because those files have to be accessed quickly (and their file size scales quadratically with resolution; compare, say, a 2000x2000 texture with a 4000x4000 one). Most of the other settings, like anti-aliasing, shadows, and lights, are more straight-up GPU computations.

Notice how in the GTX 580 (comes in 1.5GB and 3GB versions) video, the Texture Quality is set to "Normal". So yeah, no pretty textures for you if you can't supply the VRAM.

I run a GTX 670, but it's there because everyone likes to overblow the VRAM argument ;)
 
Notice how in the GTX 580 (comes in 1.5GB and 3GB versions) video, the Texture Quality is set to "Normal". So yeah, no pretty textures for you if you can't supply the VRAM.
I encountered that first hand. Just before I got my 390, I was starting to hit low frame rates because streaming textures were running rampant on both of my old 6870s in CFX. Now, with the 3820, having quad-channel memory and 16 PCI-E 2.0 lanes for each card mitigated the problem for a while, so much so that I was running as much as 1.8GB with two 1GB cards before it started to cause issues. But when I ran just one of them with the same amount of streaming textures, it ran silky smooth, because it was streaming half as much as with both in CFX.

So, with that said, my general observation has been that VRAM doesn't matter nearly as much if you're running a single GPU, since you can usually get away with streaming textures out of main memory. But if you run out with 2 or more GPUs, you're looking at a lot more data getting streamed from system memory to your GPUs (proportional to the number of GPUs you're running), and things like system memory bandwidth and the PCI-E lanes available to each GPU mean a lot more than they did before.
 
Why use VerticalRAM when you could use cheap ol' RegularRAM and use it more efficiently?:banghead:
 
VRAM capacity is one of those things that doesn't matter until you don't have enough of it.

Also, a lot of games take the VRAM without even needing to; they just do anyway.

Meanwhile:

The video that the OP posted had 8x AA, and from what I noticed in yours, it was disabled.
 
Also, a lot of games take the VRAM without even needing to; they just do anyway.

The video that the OP posted had 8x AA, and from what I noticed in yours, it was disabled.

Indeed, since the 580 only has 1.5GB of VRAM, unless you are dumb, you leave MSAA off.

Still, the game does not look much better between MSAA and FXAA, nor is it magically turned into Crysis 3 by putting textures one step above Normal, lol.
 
At least nowadays VRAM matters. It matters a lot, especially if you play in 4K. I'm speaking of modern demanding games like GTA V.

Simply put................... VRAM is much, much faster than regular DRAM, and it's also closer to the GPU. The game loads textures into VRAM so that the GPU can process and show them on your monitor. If you run out of VRAM, your system probably won't crash, but your game will run like a slideshow, because the game will start loading textures into DRAM, which is much slower than VRAM. Any form of anti-aliasing also consumes a lot of VRAM.
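
A rough sketch of why that spillover becomes a slideshow: the spilled textures cross the PCIe bus instead of the local memory bus. The bandwidth figures below are ballpark, era-appropriate assumptions, not measurements:

```python
# Rough illustration of why running out of VRAM hurts: once textures
# spill over, the GPU fetches them across PCIe instead of from local
# GDDR5. Bandwidth figures below are ballpark assumptions.

GDDR5_BW = 224e9      # GTX 970-class local memory, bytes/s (assumed)
PCIE3_X16_BW = 16e9   # PCIe 3.0 x16 practical ceiling, bytes/s (assumed)

spilled = 512 * 1024**2   # pretend 512 MB of textures didn't fit in VRAM

print(f"fetch from VRAM: {spilled / GDDR5_BW * 1000:6.2f} ms")
print(f"fetch over PCIe: {spilled / PCIE3_X16_BW * 1000:6.2f} ms")
# ~2.4 ms vs ~33.6 ms -- at 60 fps you only have ~16.7 ms per frame,
# so touching spilled textures every frame is an instant slideshow.
```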
 
Depends greatly on the games we play & what kind of settings we run. The safest route for me is to run most settings at Very High or Ultra (if the game is optimized) without any AA. That should keep me above 60fps, since AA only makes lines blurry & a little distorted, which kinda annoys me sometimes IMO...
 
Instead of doing things cleverly, like utilizing efficient texture compression algorithms, we just cram more VRAM onto graphics cards and call it a day.

Back in the day, they utilized advanced compression and made textures sharper (anyone remember the crazy sharp textures from the UT99 second CD with S3TC textures?) on graphics cards with a freaking 8MB of VRAM than we have now using a freaking 8GB. It seems like they don't even use texture compression anymore. They just seem to downscale textures for the lower detail levels, and for Ultra they just use uncompressed textures.
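
The savings from S3TC are easy to check, since the format has fixed block sizes: DXT1 packs every 4x4 pixel block into 8 bytes and DXT5 into 16. Here's the math for a hypothetical 2048x2048 texture:

```python
# S3TC (the format those UT99 textures used) has fixed compression
# ratios: DXT1 packs every 4x4 pixel block into 8 bytes, DXT5 into 16.

def texture_bytes(side, bytes_per_block, block_pixels=16):
    blocks = (side * side) / block_pixels
    return blocks * bytes_per_block

side = 2048
raw = side * side * 4                      # uncompressed RGBA8888
dxt1 = texture_bytes(side, 8)              # opaque / 1-bit alpha
dxt5 = texture_bytes(side, 16)             # full alpha channel

print(f"raw RGBA: {raw  / 1024**2:5.1f} MB")
print(f"DXT1:     {dxt1 / 1024**2:5.1f} MB  ({raw / dxt1:.0f}:1)")
print(f"DXT5:     {dxt5 / 1024**2:5.1f} MB  ({raw / dxt5:.0f}:1)")
# 16 MB of raw texture drops to 2 MB with DXT1 -- that's how an 8 MB
# card could show textures sharper than many modern games bother with.
```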
 
GTA 5 is nothing. Even Doom 4 uses more, but it's still nothing to cry about.

So far only ROTTR and the newest CoD use as much as they can grab; that's about it. The rest runs fine even with 3GB of VRAM - I had a GTX 780 and saw that.
 
GTA 5 is nothing. Even Doom 4 uses more, but it's still nothing to cry about.

So far only ROTTR and the newest CoD use as much as they can grab; that's about it. The rest runs fine even with 3GB of VRAM - I had a GTX 780 and saw that.

Well, I'm glad I have 8GB of VRAM and 16GB of system memory :)
I shouldn't be running out of either anytime soon, even with eye candy and MSAA. :D
 
anyone remember the crazy sharp textures from the UT99 second CD with S3TC textures?
Oh yeah, UT99 also had two sets of textures for each surface: the regular texture and a detail texture that was tiled, monochromatic, and full of high-frequency noise; if you came close to the surface, the detail texture would gradually become more visible. IMO that gave the most eye candy back in '99, given how low-res textures were.
Today, too often they slap a single 4K texture onto every channel (diffuse/albedo, specular/gloss, roughness/normal, reflectiveness/emission), all captured by photogrammetry and imported as such ... that could all be done more optimally for VRAM with one set of 2K textures and another set of 512x512 tileable detail textures that show in close-ups.
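
For the curious, a minimal numpy sketch of that detail-texture trick: a small, tiled, monochrome high-frequency layer blended over the base texture with a weight that rises as the camera closes in. All textures, sizes, and fade parameters below are invented stand-ins:

```python
import numpy as np

# Sketch of the UT99-style detail-texture trick described above:
# a base texture plus a tiled, monochrome, high-frequency detail layer
# that fades in as the camera gets close. Inputs are random stand-ins.

rng = np.random.default_rng(0)
base = rng.uniform(0.2, 0.8, size=(512, 512, 3))     # base color texture
detail = rng.uniform(-0.5, 0.5, size=(64, 64))       # small tileable noise

def shade(base, detail, camera_distance, fade_start=2.0, strength=0.3):
    """Blend tiled detail into the base; weight -> 1 as the camera nears."""
    h, w, _ = base.shape
    tiled = np.tile(detail, (h // detail.shape[0], w // detail.shape[1]))
    weight = np.clip(1.0 - camera_distance / fade_start, 0.0, 1.0)
    return np.clip(base + strength * weight * tiled[..., None], 0.0, 1.0)

far = shade(base, detail, camera_distance=5.0)   # detail fully faded out
near = shade(base, detail, camera_distance=0.5)  # detail 75% visible
print(np.abs(far - base).max(), np.abs(near - base).max())
```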
 
Oh yeah, UT99 also had two sets of textures for each surface: the regular texture and a detail texture that was tiled, monochromatic, and full of high-frequency noise; if you came close to the surface, the detail texture would gradually become more visible. IMO that gave the most eye candy back in '99, given how low-res textures were.
Today, too often they slap a single 4K texture onto every channel (diffuse/albedo, specular/gloss, roughness/normal, reflectiveness/emission), all captured by photogrammetry and imported as such ... that could all be done more optimally for VRAM with one set of 2K textures and another set of 512x512 tileable detail textures that show in close-ups.

Yep, they just let the LoD do the work for them, giving that nasty texture pop-in.
 
Yep, they just let the LoD do the work for them, giving that nasty texture pop-in.
LoD is great if done properly, like in Witcher 3, where you never notice a LoD transition ... that's what happens when you put some effort into each of the many levels of detail for each 3D model.
Pop-in happens when you batch-process all 3D meshes in one go with an automated tool to create the levels of detail.
Much faster and cheaper, but alas, it also looks cheap.
Additionally (for me to stay on topic), LoD does wonders to reduce the amount of geometry (polygons) in the pipeline, but nothing at all for VRAM usage - after all, the extra levels of geometry detail also reside in VRAM.
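
A toy sketch of that trade-off, with invented thresholds and mesh sizes: the distance check decides which level gets drawn each frame, but every level stays resident in VRAM regardless:

```python
# Toy sketch of distance-based LoD selection, plus the point made above:
# every level stays resident, so LoD saves polygons per frame, not VRAM.
# Thresholds and mesh sizes are invented.

LODS = [  # (max_distance, triangle_count, vram_bytes) per level
    (10.0, 50_000, 3_000_000),
    (30.0, 12_000, 750_000),
    (float("inf"), 2_000, 120_000),
]

def pick_lod(distance):
    for max_dist, tris, _ in LODS:
        if distance <= max_dist:
            return tris
    raise AssertionError("unreachable: last level covers all distances")

print("triangles drawn at 5m / 20m / 100m:",
      pick_lod(5.0), pick_lod(20.0), pick_lod(100.0))
print("VRAM resident regardless of distance:",
      sum(v for _, _, v in LODS), "bytes")
```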
 