# GPUz vRAM use. What is it really showing us?



## EarthDog (Dec 18, 2018)

W1z,

We've seen a couple of people (@John Naylor in particular) mention that applications such as GPUz and MSI Afterburner do not read actual memory used in GPUs, but the allocation (it was compared to a credit limit versus credit used). After multiple attempts to have the guy clarify, it has never happened.  Can you clarify that point? At least for GPUz?

It seems odd, if these applications are reading allocated rather than actually-used memory, that the allocated amount updates so frequently. If it were an allocation, like a page file for example, you'd expect the size not to vary much, and in particular not to DROP between refreshes.

If it is an allocation, how far off do you imagine actual use to be? 

Anyway, just looking for clarity on what exactly the memory used in GPUz is reading and some details.

Much appreciated.


----------



## moproblems99 (Dec 18, 2018)

EarthDog said:


> W1z,
> 
> We've seen a couple of people (@John Naylor in particular) mention that applications such as GPUz and MSI Afterburner do not read actual memory used in GPUs, but the allocation (it was compared to a credit limit versus credit used). After multiple attempts to have the guy clarify, it has never happened.  Can you clarify that point? At least for GPUz?
> 
> ...



Excellent question!  Subbed because this is the part of tech that is interesting.

Honestly, to me, allocation == used.  GPUz can't tell whether an allocated memory cell is storing a null value or just sitting unused (and I doubt the GPU reports the difference).  This would prove interesting on something like the 970 when 3.8GB is allocated.  Does just allocating into that last .5GB cause slowdowns?

Enough of my blabbering as I don't want to clutter your thread.


----------



## LFaWolf (Dec 18, 2018)

My guess is he probably doesn’t know, as he is probably reading from the video card BIOS what is available and not available (or allocated). It is entirely up to the app/game to decide how much to ask for and how much to actually use. As a software developer, I can go to the OS and ask for 1GB of memory. If it is available, the OS will give it to me. It is probably not good practice to ask for more than you need, as your app may crash or fail to run. However, if it is a game, the developer may assume anyone will only run one game at a time and ask for as much as possible. Just my thoughts.
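The ask-versus-touch distinction works the same way for system RAM, and it's easy to see for yourself. A minimal Python sketch (an illustration only; it assumes Linux semantics, where `ru_maxrss` is reported in kilobytes): an anonymous `mmap` hands back 512 MB of address space without committing physical pages, and resident memory only climbs once the pages are actually written.

```python
import mmap
import resource

MB = 1024 * 1024

def peak_rss_mb():
    # Assumption: Linux, where ru_maxrss is reported in kilobytes
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss // 1024

before = peak_rss_mb()

# "Ask the OS" for 512 MB: address space is reserved, no pages committed yet
buf = mmap.mmap(-1, 512 * MB)
after_alloc = peak_rss_mb()

# Now actually touch every page, 1 MB at a time
chunk = b"\xff" * MB
for _ in range(512):
    buf.write(chunk)
after_write = peak_rss_mb()

print(f"after reserving: +{after_alloc - before} MB resident")
print(f"after writing:   +{after_write - before} MB resident")
```

Typically the first delta is near zero and the second is close to the full 512 MB: asking for memory and using it are different events, which is exactly the credit-limit-versus-credit-used analogy.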


----------



## EarthDog (Dec 18, 2018)

If the author doesn't know... I wonder how anyone knows (and keeps posting that info repeatedly)...LOL!

I just want to get to the bottom of it. When it was said, it didn't make sense, and like I said, the user who said it has been summoned over this multiple times (once in this thread, and he has posted since!) and hasn't taken the time to respond.


----------



## W1zzard (Dec 18, 2018)

Did some quick testing.


```
#define size (1024*1024*1024/8) // number of doubles in a 1 GB buffer
1)		cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY, sizeof(double) * size, NULL, &res);
2)		res = clEnqueueWriteBuffer(commandQueue, buffer_A, CL_TRUE, 0, sizeof(double) * size, hA, 0, NULL, NULL);
```

Reported VRAM usage before starting my program: 1296 MB
Executed line 1 - no change in memory usage
Executed line 2 - memory usage went up instantly, to 2319 MB, up by the expected 1 GB


----------



## EarthDog (Dec 18, 2018)

So that is telling us it is reading what is in use, not what is 'allocated'... did I read that correctly?


----------



## W1zzard (Dec 18, 2018)

EarthDog said:


> So that is telling us it is reading what is in use, not what is 'allocated'... did I read that correctly?


Not sure how clCreateBuffer is implemented internally, whether it "allocates" by your definition or does something else. But yes, it looks like it


----------



## Gorstak (Dec 18, 2018)

Well, I don't know about VRAM, but if you create a fixed-size swap file, the selected size is just that: an upper border being set. How much of it actually gets used and written/rewritten depends on a million small details.


----------



## EarthDog (Dec 18, 2018)

W1zzard said:


> not sure how clCreateBuffer is implemented internally, whether it "allocates" by your definition or does something else. But yes, it looks like it


I won't lie... I am not sure either. I was just using the analogies the poster mentioned when trying to describe how GPUz reads memory. I would have to dig up and quote the passage to be more accurate.



Gorstak said:


> Well, I don't know about VRAM, but if you create a fixed-size swap file, the selected size is just that: an upper border being set. How much of it actually gets used and written/rewritten depends on a million small details.


This is about vRAM though. A static page file is a different beast entirely...that said.....

When I mentioned the page file earlier, it was in reference to how it manages itself when set to dynamic. Basically, if it pushes past any line in the sand, it stays at that size until the line is crossed again. Now, these are two different things, but IIRC he mentioned the page file as sort of an analogy.


----------



## MrGenius (Dec 18, 2018)

It fills up your VRAM as full as it thinks it needs to, with things it thinks it _might_ need in the near future. At any given time it's calling on a much smaller amount of those things it needs to use _right now_. At the same time, it's deciding to drop some things from VRAM that it no longer expects to need in the near future, and filling the space back up with other things it thinks it might soon need. Or not, if it doesn't think it needs anything beyond what's in VRAM currently. Keeping as much space free as possible is a good idea too: it wants to use as much as it thinks is sensible, but using too much is a waste (or serves no purpose). This is how it keeps your 3D apps running smoothly, and why the "allocated" amount of VRAM is ever-changing. It's being used in a highly efficient and highly effective manner.

Short answer: What's allocated is being used. It's not all being used at once, because that's not even possible (in a typical scenario the allocated amount is way too big to be used in full at any given moment), and it would not be an efficient and effective way to use the VRAM.
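That keep-what-might-be-needed behavior is essentially a capacity-bounded cache with eviction. A toy Python model (purely illustrative; the `ToyVram` class, the asset names, and the LRU policy are assumptions, not how any real driver works) shows how the "allocated" footprint can sit near the budget while the set actually read each frame stays small:

```python
from collections import OrderedDict

class ToyVram:
    """Illustrative LRU cache standing in for a VRAM budget (hypothetical model)."""
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.resident = OrderedDict()  # asset name -> size in MB

    def touch(self, asset, size_mb):
        # Using an asset marks it most-recently-used, loading it if absent
        if asset in self.resident:
            self.resident.move_to_end(asset)
            return
        self.resident[asset] = size_mb
        # Evict least-recently-used assets once over budget
        while sum(self.resident.values()) > self.budget:
            self.resident.popitem(last=False)

    def allocated_mb(self):
        return sum(self.resident.values())

vram = ToyVram(budget_mb=4096)
# Preload a level's worth of textures: this is the "allocated" number
for i in range(40):
    vram.touch(f"texture_{i}", 100)
# Each frame, only a handful of those are actually sampled: the "in use right now" part
frame_working_set = [f"texture_{i}" for i in range(5)]
for asset in frame_working_set:
    vram.touch(asset, 100)

print(vram.allocated_mb())      # near the 4 GB budget
print(len(frame_working_set))   # the small slice touched this frame
```

In this toy, touching one more 100 MB asset would push the total over budget and evict the least-recently-used texture, which is the ever-changing allocation the monitoring tools show.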


----------



## moproblems99 (Dec 18, 2018)

EarthDog said:


> So that is telling us it is reading what is in use, not what is 'allocated'... did I read that correctly?



It tells me that allocated == used.  Granted, I am relating everything back to CPU memory.  When you allocate memory, it is not usable by another application.  In that regard, it doesn't matter whether it holds a one or a zero, because another application cannot use the space.


----------



## EarthDog (Dec 18, 2018)

Right... and I think overall I either misunderstood the point or the analogies the guy was trying to make... here is the support....




> https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x
> 
> These tools don't _actually report how much VRAM the GPU is actually using — instead, it reports the amount of VRAM that a game has requested_. We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly, ..... They all report the amount of memory requested by the GPU, not the actual memory usage. Cards will larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available.”



I can refute the statement that 'cards with larger memory will request more memory' as a blanket statement. It seems to depend on the title and the amount of VRAM on the card. But I've seen 4/8/11 GB cards allocate the same amount of memory in most titles... does everyone else share that sentiment?


----------



## Vya Domus (Dec 18, 2018)

W1zzard said:


> ```
> #define size 1024*1024*1024/8 // 1 GB buffer size
> 1)        cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY, sizeof(double) * size, 0, &res);
> 2)        res = clEnqueueWriteBuffer(commandQueue, buffer_A, CL_TRUE, 0, sizeof(double) * size, hA, 0, NULL, NULL);
> ```



It seems that for OpenCL, allocation and memory copies are distinct operations. From what I can gather, clCreateBuffer with the CL_MEM_READ_ONLY flag basically only checks whether the requested amount of memory is available and does nothing else: no copying, no initialization, nothing. It would make sense that you would not see any change in memory usage, because by that point (first line) you haven't touched the memory at all.

This line should result in an immediate increase in memory usage in one go (or at least that's what the CL_MEM_COPY_HOST_PTR flag should do):

```
cl_mem buffer_A = clCreateBuffer(m_context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(double) * size, hA, NULL);
```


----------



## MrGenius (Dec 19, 2018)

Here's something I was reading the other day that speaks to my point.


> If a game has *allocated 3GB* of graphics memory it might be *using only 500MB* *on a regular basis* with much of *the rest only there for periodic, on-demand use.* Things like compressed textures that are not as time sensitive as other material require much less bandwidth and can be moved around to other memory locations with less performance penalty. Not all allocated graphics memory is the same and inevitably *there are large sections of this storage that is reserved but rarely used at any given point in time.*


https://www.pcper.com/reviews/Graph...Full-Memory-Structure-and-Limitations-GTX-970


----------



## moproblems99 (Dec 19, 2018)

EarthDog said:


> Right... and I think overall I either misunderstood the point or the analogies the guy was trying to make... here is the support....
> 
> https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x
> 
> ...



Your quote is a surefire way to identify a lazy developer.  At the very least, it identifies a developer who doesn't follow the guideline of 'memory is precious: take only what you need and use what you take.'  Or the developer eats at Chinese buffets.

Edited for poor typing.

Also, I don't like to call out developers, as I am a developer myself and we constantly run into time crunches.  But it takes more time to figure out how much RAM is available and grab more than you need than it does to request just what you need.

Edited again.  Also, it doesn't matter if the application is 'using' the allocated memory or not.  Once you allocate it, it is yours.  You are using it.  This is why we need 12GB GPUs and 32GB of system RAM.  Wonder if those devs happen to work at Bethesda....


----------



## EarthDog (Dec 19, 2018)

MrGenius said:


> Here's something I was reading the other day that speaks to my point.


I can see the difference...and maybe it's just me... but rarely used is still used at some point, yeah?

It feels safe to say that these applications give a pretty good idea, even if only high-level info, of what is going on with VRAM consumption. I'd have to imagine there would be some form of penalty if this data weren't already there for "periodic" or even "rare" recalls of it. I did expect some fat to be left on the bone in the allocation versus what it's actually crunching every clock cycle.


----------



## W1zzard (Dec 19, 2018)

I think the issue is with your definition of "use"


----------



## _UV_ (Dec 19, 2018)

Two years ago, with a 2GB GTX 680, I had issues and hard lags (up to 10 seconds) in online games if the "usage" reported by monitoring programs got past 1980MB and some software (a browser, a game, or something else) tried to allocate a bit more. So it's definitely used VRAM: allocated on request for that particular software and not available to others, no matter how rarely that software actually touches it.

If you run a game you will see how the amount of VRAM usage changes during different activity, like loading new levels. The same goes for regular software: open an additional tab or window and you will see the size increase; close it and less VRAM is used.


----------



## EarthDog (Dec 19, 2018)

W1zzard said:


> I think the issue is with your definition of "use"


Elaborate. 

I think I have a handle on it now.  The bottom line, to me, is that these show memory 'use' more accurately than not. Surely there is some static data in there that may be rarely used, but use is use.


----------



## Vya Domus (Dec 19, 2018)

I don't even know why it would matter whether the data loaded into memory is used frequently or not. It's there, that's all that matters. What's the point of this debate?


----------



## EarthDog (Dec 19, 2018)

Well, the first post explains exactly why this thread was created... as a refresher, a user made repeated claims about the inaccuracy of these applications in measuring VRAM use, so I made a thread to see what the actual story was.

Hopefully now, misinformation about the application's inaccuracies will stop being shared. 

Capeesh? 

Example... someone says "You can confirm by looking in MSI AB or GPUz, how much memory you are using...... "

and the reply is, "Actually you can not. I hesitate to say that people are wrong when this statement is made .... they have simply been miinformed, not because they aren't useful tools but because these utilities do not actually do all that folks think they do. These utilities measure VRAM allocation, not usage." or "but there is no way to measure actual usage as no tool is available which does this. "

In the end it seems like this is splitting hairs a bit. Since use and allocation don't seem to differ much ("rare" is still used, yes?), saying users can't get a good idea of their VRAM use/allocation from these applications seems incorrect.


----------



## phanbuey (Dec 19, 2018)

The question is not whether usage == allocation.  The question is whether applications "FILL UP" or "USE" more RAM simply because they can, as a 'best practice' to speed up performance.  Examples: Windows Superfetch, Microsoft SQL Server... if you have 32 GB, it will use all 32 GB and preload the most-accessed data into it.

What I understood Naylor to say was "you don't need as much ram as your applications say you do because programs just load extra stuff in there and use it because they can"

There is no evidence to suggest that games behave this way.  It's not about use, it's about game/engine/driver behavior when more VRAM is given to them; and I don't see differences between VRAM sizes on cards.  Differences come from resolution, texture quality, etc.

While this kind of caching is common for databases and database engines, if it were the case for games, then after several hours of gaming your VRAM would be pegged completely (as can be seen by looking at the RAM usage of any database server), and GPU-Z would always report 90-100% utilization after several hours of play.  It seems that engines only put into VRAM what they will need in the immediate future, and only start nitpicking and paging to RAM when they run out of VRAM.
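The contrast drawn above can be sketched as two toy cache policies in Python (both hypothetical models for illustration, not real engine or database code): a database-style cache keeps everything it has ever touched until the budget is pegged, while a working-set policy keeps only what was used recently.

```python
BUDGET = 8  # units of VRAM, a made-up budget for the sketch

def greedy_fill(requests, budget=BUDGET):
    """Database-style policy: cache everything ever touched, up to the budget."""
    cache = set()
    for asset in requests:
        if asset not in cache and len(cache) < budget:
            cache.add(asset)
    return len(cache)

def working_set(requests, window=3):
    """Engine-style policy: only the last few distinct assets stay resident."""
    recent = []
    for asset in requests:
        if asset in recent:
            recent.remove(asset)
        recent.append(asset)
        recent = recent[-window:]  # drop whatever fell out of the working set
    return len(recent)

# Hours of play, touching many distinct assets over time
requests = [f"asset_{i}" for i in range(100)]
print(greedy_fill(requests))   # pegged at the budget
print(working_set(requests))   # small, steady working set
```

Over a long run of requests the greedy cache sits flat at its budget (the "pegged after hours" behavior games don't show), while the working set stays small, which matches the observation that VRAM readouts move up and down during play.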


----------



## EarthDog (Dec 19, 2018)

Some games will put more memory to use depending on the available memory, though. This isn't a terribly common thing, but it does happen. 

We already know there is fat on the bone. How much will vary.


----------



## phanbuey (Dec 19, 2018)

EarthDog said:


> Some games will put more memory to use depending on the available memory though. This isnt a terribly common thing, but does happen.
> 
> We already know there is fat on the bone. How much will vary.



More system ram possibly, but more vram?  What game does that?

I know games like GTA 5 will auto-configure draw distances and some quality settings based on how much VRAM you have, but they always behave the same given the same settings, regardless of your card's memory size.


----------



## EarthDog (Dec 19, 2018)

vRAM. I don't recall the titles, but I've seen it before in reviews. Again, it isn't common, but it happens.

The whole point here is the viability of GPUz and MSI AB for reading RAM use/allocation. And it looks like they are viable, contrary to what has been posted a couple of times around TPU.


----------



## W1zzard (Dec 19, 2018)

EarthDog said:


> Elaborate.


Some people argue that "use" means "actively reading/writing at a specific instant in time" (which has no meaning in this context, I'd claim)


----------



## EarthDog (Dec 19, 2018)

W1zzard said:


> Some people argue that "use" means "actively reading/writing at a specific instant in time" (which has no meaning in this context, I'd claim)


kk... we're on the same page.

Seems wrong to say GPUz and MSI AB aren't really viable for VRAM use/allocation... which is what I'm trying to get to the bottom of. 

Do you agree with the sentiment that GPUz and MSI AB are fine, or at least 'good enough', to display RAM use/allocation?


----------



## Vayra86 (Dec 19, 2018)

EarthDog said:


> vRAM. I dont recall the titles, but I've seen it before in reviews. Again, it isnt common, but happens.
> 
> The whole point here is the viability of gpuz and msi ab in reading ram use/allocation. And it looks like it is viable, contrary what has been posted a couple of times around tpu.



It's viable, but not to the point of being able to state "it uses X on card A, so you need X on card B". In that, @John Naylor and others got the right idea.

But at the same time, the bigger the gap between the allocated maximum and the VRAM capacity of a card, the higher the chance and frequency of swaps, and, depending on game and engine, the greater the chance of a performance hit.

So yes, the readout may be accurate to some degree, but its usefulness is really case by case, I think.


----------



## W1zzard (Dec 19, 2018)

Vayra86 said:


> It's viable, but not to the point of being able to state "it uses X on card A, so you need X on card B".


True, good point. Because games detect GPU / GPU arch / VRAM size and run different code paths. This is similar to comparing memory usage of Photoshop 32-bit with Photoshop 64-bit



EarthDog said:


> Seems wrong to say GPUz and MSI AB aren't really viable


If they weren't viable such a feature wouldn't exist.


----------



## phanbuey (Dec 19, 2018)

a 1060 3/6 GB vs 580 4/8 GB would be a perfect guinea pig test for this.


----------



## EarthDog (Dec 19, 2018)

Thank you. Hopefully moving forward we don't see this again as a reply to suggesting GPUz for checking memory use/allocation. 



> ...Actually you can not. I hesitate to say that people are wrong when this statement is made .... they have simply been miinformed...


----------



## John Naylor (Dec 19, 2018)

EarthDog said:


> Right... and I think overall I either misunderstood the point or the analogies the guy was trying to make... here is the support....
> 
> I can refute the statement that cards with larger memory will request more memory' as a blanket statement. It seems to depend in title and amount of vram on the card. But I've seen 4/8/11 GB cards allocate the same amount of memory in most titles... does everyone else share that sentiment?



Thank you! For confirming exactly what I am saying.   It most certainly, as you said, will "depend on title and *amount of VRAM on the card*".   So if the game sees 4 GB, it might allocate 2.5 GB ... if it sees 8 GB, it might allocate 4.5 GB.

I have posted multiple references multiple times; perhaps reading them would have helped.  Let's start first with the RAM amount issue:

http://alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/



> There is one last thing to note with _Max Payne 3_:  It would not normally allow one to set 4xAA at 5760×1080 with any 2GB card as it ***claims to require 2750MB***.  However, when we replaced the 4GB GTX 770 with the 2GB version, the game allowed the setting.  And there were no slowdowns, stuttering, nor any performance differences that we could find between the two GTX 770s.



Now, the entire review contradicts what everyone was saying back then: "make sure you get the 4 GB model".  Those tests show, without any shadow of a doubt, that 2 vs. 4 GB was irrelevant.  The only time VRAM mattered was when settings were high enough to make the game unplayable anyway.  Whoo hoo, the 4 GB model got me 11.8% more fps in Sleeping Dogs at 2560 x 1600 ... who cared, you went from 13.5 fps to 15.9 ... unplayable with 2 or 4 GB.    Same results with the 6xx series by Puget Sound, same results at Guru3D with the 9xx, same results from ExtremeTech, same results in every review of this type I have ever read ... same results with the 10xx here on TPU.  Most games are not affected at 1080p, and most of those not at 1440p either.   No doubt if you search long and hard enough you can find a game which bucks the results, hard to explain why, but a poor console port is one known cause, as the porting process gives little concern to such things.

Clearly, in the case of Max Payne, the GPU was requesting 2.75 GB and therefore would not allow the setting with only 2 GB, and yet it played with no discernible difference ... no significant difference in performance, image quality, or user experience between them.  The same was found to be true in the other links previously posted for Puget Sound, Guru3D, ExtremeTech and TechPowerUp.  The simple fact is most games are not affected at 1080p with 3 GB; the ones that are are anomalies.  Look at it this way: if 3 GB isn't enough for 1080p, then no card exists that has enough VRAM for 2160p ... aka 4 x 1080p.  If 3 GB is no good at 1080p, then with 4 times the pixels, 4 x 3 GB is no good for 2160p.    The data is there, you just have to read it.

In W1zzard's MSI 1060 3GB review, he noted unexpected performance drops in Hitman and Tomb Raider; no explanation was presented, nor do we have a means (that I can see) of determining why that is so, given there are multiple differences between the cards.    Is it the shaders? Is it the RAM? No way to tell.   But it is very clearly stated that _"Other games seem completely unaffected by having 3 GB less VRAM at their disposal, especially at 1080p"_.    I can't see any ambiguity in W1zzard's statement.   As was shown, repeatedly, if 3 GB is an issue at 1080p, then it absolutely must follow that it will be a bigger issue at 1440p.   W1zzard's test results show no significant difference in performance over the 18-game test suite between 1080p and 1440p, both showing 6%.   Even when we get to 2160p, where VRAM does have a performance impact, most of the games are unplayable and the others barely so, as the GPU cannot keep up.

With regard to "VRAM usage" as used in available utilities, Nvidia is paraphrasing Inigo Montoya (Princess Bride): "that word doesn't mean what you think it means".   The link to Nvidia's statement has been posted multiple times and no one has refuted it.   It provides the only explanation I have heard that "works" with the results above.   If not, the other thread which birthed this one has some issues, as the OP saw 2.0 to 2.5 GB IIRC; I'd like to see how it could possibly show the same if installed with a 2 GB card.  I'll check ... I have a pair of 560 2GB cards laying around, I will see what happens.

https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x



> We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. *They all report the amount of memory requested by the GPU, not the actual memory usage. Cards will larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available.*”



Note the following sentence in the article quoting Mr. Bell: *"Our own testing backed up this claim; VRAM monitoring is subject to a number of constraints."*  Call it what you want ... allocation or usage ... but clearly, just because you ***see*** 2.75 GB of RAM usage in a utility on a 4GB card does not mean the game won't run at the same fps, quality, etc. on a 2 GB card.  Regardless of what name you give it, it clearly is having no impact.  Let's change the semantics and focus on the point: the allegation is that seeing usage of 2.X GB in GPUz proves that the game is being impacted.  Clearly, GPUz is not going to show 2.X GB of VRAM usage if run on a 2 GB card.   The number in any utility proves nothing of the sort.  Max Payne showed this definitively and conclusively ... the other 40 or so games at Alienbabeltech (7xx) showed no significant hit.  Same with Guru3D (9xx), same with Puget Sound (6xx), and TPU's tests showed this in 16 of 18 games.   Because of the difference in shaders, I have not eliminated all other impacts on those two. 

If there's evidence to the contrary, I'd be anxious to read it.  The Max Payne experience clearly shows that the GPU requests more than it actually **needs**, as it assigns 2.75 GB and won't allow the game to run at those settings if it's not there.   Play the semantic game if you will, but on a 2 GB card it clearly is not impacted by not having the 2.75 GB the GPU insists on allocating for the game ... so strictly that it will not allow you to run it.  If Mr. Bell is wrong about allocation versus usage, then explain Max Payne.   Until that's done, I will have to take it that Nvidia knows their product.   Of all those commenting on the issue, Mr. Bell has the best resume.  In the end, the semantics do not matter; whether it's allocation, usage, or Keebler elves, the numbers reported by any utility are no indication whatsoever that the game's performance is impacted when a utility reads 2.5 GB on a 3 GB card or even a 2 GB card.   Whether it's using it, allocating it, or whatever, is irrelevant ... if it shows X.Y GB in the utility, that is no indication as to whether it [pick a word you prefer] it because it needs that as a minimum or is just grabbing it "just in case" ... or whatever.   It's not indicating that performance is in any way impacted if less is available ... and THAT is the only point of relevance.




phanbuey said:


> a 1060 3/6 GB vs 580 4/8 GB would be a perfect guinea pig test for this.



I'm gonna see if any of the kids has COD3 in the library.  Will come back w/ results if it's there.


----------



## EarthDog (Dec 19, 2018)

John Naylor said:


> So if the the game sees 4 GB, it might allocate 2.5 GB ... if it sees 8 GB it might allocate 4.5 GB.


This varies by title and is not accurate as a blanket statement. Some titles exhibit this behavior, some do not. 

I've followed up multiple times with additional questions which were not answered... multiple @John Naylor mentions... etc... I had to clear it up. Please understand I am not the only one whose follow-up questions to your dissertations went unanswered. Perhaps kindly respond to the follow-ups as well... maybe you have notifications off?  


There is no reason to correct a statement such as "check GPUz/MSI AB for VRAM use". I'm not trying to split hairs over what is going on under the covers. Sure, in some titles it will take more than it needs AT THAT MOMENT. But that doesn't mean it won't use it eventually, as was referenced earlier in this thread. In the end, we know these applications show VRAM used/allocated accurately... at least accurately enough for horseshoes and hand grenades. I think the idea behind the NVIDIA statement may be getting lost in the words.


W1zzard said:


> Some people argue that "use" means "actively reading/writing at a specific instant in time" (which has no meaning in this context, I'd claim)


Nobody gives a darn about what it is crunching this second. What it has to crunch later is stored already; if it isn't there, it incurs some kind of latency. So it is expected that both 'pools' of memory (currently active and ready-when-needed, lol) are included. The level of penalty that may or may not occur from swapping out or not having that data will vary by title. But again, these applications give users an idea of VRAM use/allocation.


Thanks for poking your head in.


----------



## Vya Domus (Dec 19, 2018)

John Naylor said:


> It most certainly,as you said, will "depend on title and _*a*_*mount of vram on the card*".   So if the the game sees 4 GB, it might allocate 2.5 GB ... if it sees 8 GB it might allocate 4.5 GB.



Just stop with this nonsensical argument. A game does not allocate memory based on what it "sees"; it uses the memory that it needs. Beyond that, it may or may not cache data in whatever is left over.


----------



## W1zzard (Dec 19, 2018)

John Naylor said:


> Clearly, in the case of Max Payne, the GPU was requesting


The VRAM requirement numbers you see in the settings for games are always estimated, and usually completely wrong



John Naylor said:


> I can't see any ambiguity in Wizzard's statement.


I don't remember what I wrote, but I wasn't intentionally vague. It's easy to figure out whether the perf difference is because of shaders or memory: if it's significantly bigger than what the other benchmarks show, then it's due to memory.



EarthDog said:


> Nobody gives a darn about what it is crunching this second. What it has to crunch later is stored already; if it isn't there, it incurs some kind of latency. So it is expected that both 'pools' of memory (currently active and ready-when-needed, lol) are included. The level of penalty that may or may not occur from swapping out or not having that data will vary by title. But again, these applications give users an idea of VRAM use/allocation.


I know what you're saying, but this "later" argument would mean that the game has to load every single asset into VRAM, every time, because it'll be used at some point in your start-to-finish playthrough



John Naylor said:


> They all report the amount of memory requested by the GPU.


Is that the actual quote? The GPU does not request anything.



> Cards will larger memory will request more memory


Same as before, and completely wrong: the card has no control over what memory my application requests


----------



## EarthDog (Dec 19, 2018)

W1zzard said:


> the game has to load every single asset into VRAM, every time, because it'll be used at some point in your start-to-finish playthrough


Does it though?

I'm not trying to be difficult here... but why would it have to do that?


----------



## rtwjunkie (Dec 19, 2018)

EarthDog said:


> But I've seen 4/8/11 GB cards allocate the same amount of memory in most titles... does everyone else share that sentiment?


What I have seen between 4, 6, 8 and 11 GB cards is that it is on a per-title basis.  Just because a game fills 3.9GB of VRAM on a 4GB card doesn’t mean it will fill 10.9 on an 11GB one.  It will allocate more, for example it might reserve/use 6.5.  That tells me the game devs are programming some games to use more if available, but not necessarily the entire amount as the VRAM size increases.

Again, my observations are not scientific, but are good enough anecdotal evidence to suit me.


----------



## W1zzard (Dec 19, 2018)

EarthDog said:


> Does it though?


Of course not



rtwjunkie said:


> It will allocate more, for example it might reserve/use 6.5.


will -> might


----------



## EarthDog (Dec 19, 2018)

W1zzard said:


> will -> might


Right. It's title dependent. 

In my experience, most titles stay at about the same VRAM use/allocation regardless of GPU capacity. I wish I could recall the couple of titles where I saw VRAM 'scale' with available capacity... but it was a minority.


----------



## W1zzard (Dec 19, 2018)

EarthDog said:


> I wish I could recall the couple titles I saw vram 'scale' with available capacity...but it was a minority.


Call of Duty does that


----------



## crazyeyesreaper (Dec 19, 2018)

EarthDog said:


> vRAM. I dont recall the titles, but I've seen it before in reviews. Again, it isnt common, but happens.
> 
> The whole point here is the viability of gpuz and msi ab in reading ram use/allocation. And it looks like it is viable, contrary what has been posted a couple of times around tpu.


One of those games is the Total War series. Since around Shogun 2 or so, the game would silently lower settings if the VRAM limit was reached. It's why they have an unlimited VRAM checkbox.


----------



## W1zzard (Dec 19, 2018)

crazyeyesreaper said:


> would silently lower settings if the VRAM limit was reached. It's why they have an unlimited VRAM checkbox.


Ah yes, I've seen that in very few games, too, but all I cared about had a way to "unlock" the settings


----------



## crazyeyesreaper (Dec 19, 2018)

Yeah, in the Total War games it was added after people got pissed off wondering why Ultra settings looked like shit, when in reality the game was just auto-lowering in-game quality settings per battle based on VRAM allocation, lol.


----------

