# Memory Clock Speed incorrect on Sensor tab



## Nicholas Steel (Jan 2, 2022)

With *VRAM* clocks at default, the following is observed:
*1901*MHz in GPU-Z Sensor tab (incorrect value)
*2002*MHz in GPU-Z Graphics Card tab (correct value)
*1901*MHz in AIDA 64 Overclocking section
*3802*MHz in MSI Afterburner

With *VRAM* clocks increased by *202*MHz in MSI Afterburner, the following is observed:
*1952*MHz in GPU-Z Sensor tab
*2103*MHz in GPU-Z Graphics Card tab
*1952*MHz in AIDA 64 Overclocking section
*4006*MHz in MSI Afterburner

I get the feeling MSI Afterburner is applying a theoretical 202MHz, which is why only +101MHz is observed in GPU-Z's Graphics Card tab. In other words, this part of GPU-Z is showing the correct value, and increases to clock speed applied via MSI Afterburner are visually doubled relative to the actual increase being applied (MSI Afterburner seems to be showing the theoretical doubling of speed attributed to DDR).
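A minimal sketch of that doubling hypothesis (assuming, as speculated here, that Afterburner's offset field works in effective/DDR MHz while GPU-Z's Graphics Card tab shows the actual memory clock):

```python
# Sketch of the speculated behaviour: GDDR5 transfers data on both clock
# edges, so the effective (DDR) rate is 2x the actual memory clock. If
# Afterburner's offset is entered in effective MHz, a +202 entry only
# moves the actual clock by half that amount.

def actual_from_effective(effective_mhz):
    """Convert an effective (DDR) GDDR5 rate to the actual clock."""
    return effective_mhz / 2

stock_actual = 2002        # MHz, GPU-Z Graphics Card tab at stock
offset_effective = 202     # MHz, value entered in Afterburner

actual_increase = actual_from_effective(offset_effective)
print(actual_increase)                   # 101.0
print(stock_actual + actual_increase)    # 2103.0, matching the Graphics Card tab
```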

GPU-Z 2.41.0
Nvidia Driver 472.12
Windows 10 21H2








MSI GTX 1070 Ti GAMING Specs (www.techpowerup.com):
NVIDIA GP104, 1683 MHz, 2432 Cores, 152 TMUs, 64 ROPs, 8192 MB GDDR5, 2002 MHz, 256 bit


----------



## Caring1 (Jan 2, 2022)

Screenshots would help.


----------



## Nicholas Steel (Jan 2, 2022)

*Top image compilation is without an overclock. Bottom image compilation is with a 202MHz VRAM overclock.*











*202MHz VRAM overclock applied.*

Per my original post, GPU-Z's Graphics Card tab seems to be correct while everything else is wrong.

Additionally, nothing seems to be able to tell if the GPU is _*currently*_ boosting. Everything reports 1607MHz for the GPU Core regardless of what activity I'm doing, instead of the clock speed increasing up to 1683MHz. If I manually overclock the core via MSI Afterburner by 16MHz to make it 1630MHz, neither MSI Afterburner, GPU-Z's Sensors tab nor AIDA64's Overclocking section will report the increase.

It oddly increases *both* the GPU Clock and Boost values on the Graphics Card tab of GPU-Z. I'm not sure why the Boost value would be affected, and I'm still unsure if the card is ever boosting:




Hmmm, Special_K reports a variable GPU Core clock speed, topping out at 1.82GHz in Assassin's Creed Valhalla, which is around 137MHz higher than the supposed 1683MHz boost clock that the TechPowerUp GPU database and GPU-Z report as what the card would boost up to...


----------



## Nicholas Steel (Jan 5, 2022)

So... uh, I can't reproduce the issues anymore and I have no fricken clue why. I'm running the same driver version and the same version of all the monitoring apps and they all report:

Either 2002MHz, 2003MHz or 4006MHz for VRAM*
Up to 1850MHz for the GPU Core clock speed while gaming*^

* In the GPU-Z Sensor tab, the clock speed tachometer around the numerical display in MSI Afterburner and AIDA 64's Overclocking section.
^ I learned GPU Boost 3.0 lets the GPU boost beyond the advertised boost value.

Maybe a cumulative update to Windows fixed something?


----------



## Mussels (Jan 5, 2022)

Here's mine: stock VRAM with a GPU curve limited to 1600MHz.
1219 vs 1187.7
Testing with Unigine Heaven, both Heaven and Afterburner will show 9501MHz - divided by 8, that matches the second (Sensors) result - while the first would math out to 9752MHz.





I can add +1400 stable to my VRAM.
Now it's 1394 vs 1362.8 - the same 31MHz difference, with the Sensors result being more accurate.
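The divide-by-8 relationship above can be checked directly (assuming, per these numbers, that Heaven and Afterburner report the effective GDDR6X data rate, which is 8x the actual memory clock on an RTX 3090):

```python
# GDDR6X moves 8 bits per clock per pin, so the effective data rate is
# 8x the actual memory clock; tools showing ~9500MHz are showing the
# effective rate, while GPU-Z's Sensors tab shows the actual clock.

def actual_from_effective(effective_mhz, factor=8):
    return effective_mhz / factor

heaven_reading = 9501      # MHz, shown by Heaven and Afterburner
sensors_reading = 1187.7   # MHz, GPU-Z Sensors tab

print(actual_from_effective(heaven_reading))  # 1187.625, matching Sensors
print(1219 * 8)                               # 9752, what the main tab's 1219 would imply
```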


----------



## AusWolf (Jan 5, 2022)

Nicholas Steel said:


> So... uh, I can't reproduce the issues anymore and I have no fricken clue why. I'm running the same driver version and the same version of all the monitoring apps and they all report:
> 
> Either 2002MHz, 2003MHz or 4006MHz for VRAM*
> Up to 1850MHz for the GPU Core clock speed while gaming*^
> ...


Your screenshot shows a 4% load on your GPU. Some graphics cards have multiple load states for VRAM (as well as GPU). I'm guessing your particular model is one of those. As soon as the load is high enough, the driver puts both the GPU and VRAM into their highest available load state.

Also, in the nvidia Control Panel in the 3D settings menu, under the power management options, what option have you selected?


----------



## Nicholas Steel (Jan 5, 2022)

AusWolf said:


> Your screenshot shows a 4% load on your GPU. Some graphics cards have multiple load states for VRAM (as well as GPU). I'm guessing your particular model is one of those. As soon as the load is high enough, the driver puts both the GPU and VRAM into their highest available load state.
> 
> Also, in the nvidia Control Panel in the 3D settings menu, under the power management options, what option have you selected?


I was sure I was testing while a game was running, but it's plausible that I wasn't. As for the Nvidia Control Panel I had it set to Prefer Maximum Performance.

By the way, did you know the "Power Management Mode" setting in the Global Profile is constantly overridden by the profiles for Desktop Window Manager (dwm.exe), Windows Explorer (explorer.exe) and Microsoft Shell Experience Host, rendering the Global Profile's Power Management Mode setting redundant the moment you log in to your Windows account? If not, now you know why clock speeds still decrease below stock clock speeds when idle despite setting the Global Profile to Prefer Maximum Performance (those 3 profiles are set to Adaptive by default and are _always_ in effect).


----------



## AusWolf (Jan 5, 2022)

Nicholas Steel said:


> I was sure I was testing while a game was running, but it's plausible that I wasn't.


That's it. 



Nicholas Steel said:


> As for the Nvidia Control Panel I had it set to Prefer Maximum Performance.
> 
> By the way, did you know the "Power Management Mode" setting in the Global Profile is constantly overridden by the profiles for Desktop Window Manager (dwm.exe), Windows Explorer (explorer.exe) and Microsoft Shell Experience Host, rendering the Global Profile's Power Management Mode setting redundant the moment you log in to your Windows account? If not, now you know why clock speeds still decrease below stock clock speeds when idle despite setting the Global Profile to Prefer Maximum Performance (those 3 profiles are set to Adaptive by default and are _always_ in effect).


Why would you not want your clocks below stock load clocks at idle? What's the point of 30-40+ W of power consumption for nothing? Idle clocks are meant to reduce power consumption, hardware wear and heat. I personally prefer "Adaptive" in the Nvidia settings and letting Windows do what it does.


----------



## Mussels (Jan 6, 2022)

Well guys, if you look at what I posted together with his, we're seeing some sort of offset that doesn't belong there - and isn't always there, either.

@W1zzard any ideas?


----------



## AusWolf (Jan 6, 2022)

Mussels said:


> Well guys if you look at what i posted together with his, we're seeing some sort of offset that doesnt belong there - and isn't always there, either.
> 
> @W1zzard any ideas?


Your screenshot shows that the VRAM is clocked in several steps between idle and full load. I still think that the same screenshot with a constant 100% load would show the correct values, and nothing is wrong.


----------



## Mussels (Jan 6, 2022)

AusWolf said:


> Your screenshot shows that the VRAM is clocked in several steps between idle and full load. I still think that the same screenshot with a constant 100% load would show the correct values, and nothing is wrong.


It was at load - I had Heaven running windowed beside it - but taking the screenshots caused the usage to dip.

I could try to re-do it, but the main thing is I was seeing that flat offset between the numbers.


----------



## W1zzard (Jan 6, 2022)

What's the output of nvidia-smi?
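For reference, one way to pull just the clock readings from the driver (a hedged example; field names can be confirmed against `nvidia-smi --help-query-gpu`, and output depends on the installed GPU and driver):

```shell
# Query the graphics and memory clocks as the NVIDIA driver reports them
nvidia-smi --query-gpu=clocks.gr,clocks.mem --format=csv
```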


----------



## Mussels (Jan 6, 2022)

Power saving was screwing with me; the moment I clicked out of a game, everything dropped.
Never used nvidia-smi before, here's what I got


Unsure if having two GPU-Z instances open is part of the problem, but it seems when you open the main page it grabs the current clock and goes with that? 500MHz is definitely not my base clock

Sensors tab matches the smi output once you math it out


----------



## AusWolf (Jan 6, 2022)

Mussels said:


> power saving was screwing with me, the moment i clicked out of a game everything dropped
> Never used nvidia smi before, heres what i got
> 
> 
> ...


Strange. Here's what my card looks like during a Superposition run (main window screenshot taken at idle):






Also, I've noticed that both yours and OP's card run at PCI-e x8. Why is that? Your base/boost clock detection seems to be off too.


----------



## W1zzard (Jan 6, 2022)

Mussels said:


> Unsure if having two GPU-Z's open is part of the problem but it seems when you open to the main page it grabs the current clock and goes with that? 500Mhz is definitely not my base clock


This is an NVIDIA driver bug on older GPUs, but should go away after a couple of refreshes (the 1st tab refreshes things periodically)



Mussels said:


> Sensors tab matches the smi output once you math it out


Good, so GPU-Z seems to be correct


----------



## Mussels (Jan 6, 2022)

W1zzard said:


> This is an NVIDIA *driver bug on older GPUs*, but should go away after a couple of refreshes (the 1st tab refreshes things periodically)


I'm on an RTX3090 with the latest drivers 
(The moment the 3090Ti is announced, BAM it's old now...)


Shouldn't the front page show the 'max' clocks and the Sensors tab show the current/possibly lower ones? Because the problem I'm seeing is that the main page is reporting lower, which is what throws me (and others) off


----------



## AusWolf (Jan 7, 2022)

Mussels said:


> I'm on an RTX3090 with the latest drivers
> (The moment the 3090Ti is announced, BAM it's old now...)


I think he meant an old driver bug that still hasn't been fixed.  



Mussels said:


> Shouldnt the front page show the 'max' clocks and the sensors tab show the current/possibly lower ones? cause the problem i'm seeing is that the main page is reporting lower, which is what throws me (and others) off


That, and also that you're on PCI-e x8 for some reason.


----------



## W1zzard (Jan 7, 2022)

Mussels said:


> I'm on an RTX3090 with the latest drivers





AusWolf said:


> I think he meant an old driver bug that still hasn't been fixed.


Oh, I thought you had the 1070 Ti from the start of the thread. This difference on an RTX 3090 is strange indeed, and it's the first time I'm hearing about it. Can you check a few older drivers?


----------



## Nicholas Steel (Jan 7, 2022)

AusWolf said:


> Also, I've noticed that both yours and OP's card run at PCI-e x8. Why is that? Your base/boost clock detection seems to be off too.


For me, laziness. My previous computer was built around an original Asus P6T motherboard with an Intel i7 920 CPU. That motherboard featured a multiplexer, which meant either of the 2 slots closest to the CPU would operate at x16 speed so long as the other was unoccupied. I didn't realize multiplexers had fallen out of fashion when assembling my current computer and assumed the card would also operate in x16 mode in the 2nd slot closest to the CPU, but that's not the case and I've been too lazy to move it (plus the extra distance from the CPU helps keep the CPU cooler).

There are various reviews showing that the performance difference between PCI-E 3.0 x16 and x8 is marginal at best, even with a higher-tier video card than the 1070 Ti.


----------



## AusWolf (Jan 8, 2022)

Nicholas Steel said:


> For me, laziness. My previous computer was built around an original Asus P6T motherboard with an Intel i7 920 CPU. That motherboard featured a multiplexer, which meant either of the 2 slots closest to the CPU would operate at x16 speed so long as the other was unoccupied. I didn't realize multiplexers had fallen out of fashion when assembling my current computer and assumed the card would also operate in x16 mode in the 2nd slot closest to the CPU, but that's not the case and I've been too lazy to move it (plus the extra distance from the CPU helps keep the CPU cooler).
> 
> There are various reviews showing that the performance difference between PCI-E 3.0 x16 and x8 is marginal at best, even with a higher-tier video card than the 1070 Ti.


I see. I'm wondering whether that could be a reason for your weird clocks. My gut says it's doubtful, but it may still be worth trying to relocate your card to the first slot.


----------



## Mussels (Jan 9, 2022)

AusWolf said:


> I think he meant an old driver bug that still hasn't been fixed.
> 
> 
> That, and also that you're on PCI-e x8 for some reason.


My WD PCI-E SSD was in the second GPU slot, so that was expected


----------

