Monday, September 17th 2018

TechPowerUp GPU-Z v2.11.0 Released

TechPowerUp today released the latest version of TechPowerUp GPU-Z, the popular graphics subsystem information and diagnostics utility. Version 2.11.0 introduces support for NVIDIA GeForce RTX 20-series "Turing" graphics cards, including the RTX 2080 Ti, RTX 2080, and RTX 2070. Support is also added for a few exotic OEM variants we discovered over the past months, including the GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, and GeForce 9200. From the AMD stable, we add support for "Vega 20," the "Fenghuang" semi-custom SoC for Zhongshan Subor, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, and Embedded V1807B. Intel UHD 610, UHD P630 (Xeon), and Coffee Lake GT3e (i5-8259U) are now supported.

Among the new features are a system RAM usage sensor, temperature monitoring offsets for AMD Ryzen Threadripper 2000-series processors, and the ability to identify USB-C display outputs, the GDDR6 memory standard, and 16 Gbit-density memory chips. Several under-the-hood improvements were made, including WDDM-based memory monitoring for AMD GPUs, which replaces the ADL sensors that tend to be buggy. GPU-Z now also cleans up old QueryExternal files from your Temp folder. Grab GPU-Z from the link below.
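As an aside, the Temp-folder cleanup amounts to deleting stale files left behind by earlier sessions. A minimal sketch of the idea in Python; the exact `QueryExternal*` file-name pattern is an assumption, since GPU-Z's actual matching rule isn't documented:

```python
import glob
import os
import tempfile

# Illustrative sketch only: remove leftover QueryExternal files from the
# user's temp directory. The "QueryExternal*" pattern is an assumption.
for path in glob.glob(os.path.join(tempfile.gettempdir(), "QueryExternal*")):
    try:
        os.remove(path)
    except OSError:
        pass  # file may still be locked by a running GPU-Z instance
```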
DOWNLOAD: TechPowerUp GPU-Z v2.11.0

The change-log follows.

  • Added NVIDIA GeForce RTX Turing support
  • Added option to minimize GPU-Z on close
  • Added system RAM usage sensor
  • Added temperature monitoring offset for Threadripper 2nd gen
  • Fixed typo in NVIDIA Perf Cap Reason tooltip
  • GPU-Z will no longer use AMD ADL memory sensors because they are buggy; WDDM monitoring is used instead
  • GPU Lookup feature improved by taking boost clock into account
  • Added ability to clean up old QueryExternal files in temp directory
  • Added support to BIOS parser for USB-C output, GDDR6 memory, 16 Gbit memory chips
  • Added support for NVIDIA RTX 2080 Ti, RTX 2080, RTX 2070, GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, GeForce 9200
  • Added support for AMD Vega 20, Fenghuang, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, Embedded V1807B
  • Added support for Intel UHD 610, UHD P630 (Xeon), Coffee Lake GT3e (i5-8259U)

23 Comments on TechPowerUp GPU-Z v2.11.0 Released

#1
EarthDog
Thanks for keeping this application updated. :)

Will GPUz eventually have fields for the RT and Tensor core counts for NVIDIA GPUs?
#2
Imsochobo
EarthDog: Thanks for keeping this application updated. :)

Will GPUz eventually have fields for the RT and Tensor core counts for NVIDIA GPUs?
I'd say 100% wait on the RT stuff, as AMD GPUs can do ray tracing too and we do not know how the two stack up, how it should be measured, or how important it is. From the demos I've seen, ray tracing is really a joke so far (from both camps).
How much do the Tensor cores matter? Should we add FP16 and FP32 speed capabilities too?

I think we really should wait for this stuff to catch on and become something we care about, instead of adding things while they're just hype. I really like the lean, easy application rather than having info in my face about everything, most of which I don't care about.

Edit: I love the application, keep up the good work.
#3
W1zzard
EarthDog: Thanks for keeping this application updated. :)

Will GPUz eventually have fields for the RT and Tensor core counts for NVIDIA GPUs?
I don't know yet. It could be something for the Advanced tab until more GPUs support it
#4
DRDNA
Working well on my laptop with a GTX 1070 and 7700HQ on Windows 10... Thank you and nice job!
#5
EarthDog
W1zzard: I don't know yet. It could be something for the Advanced tab until more GPUs support it.
Cool.

Seeing as how it is hardware and part of the specs of the cards, I would like to see it listed there somewhere for sure.
Imsochobo: I'd say 100% wait on the RT stuff, as AMD GPUs can do ray tracing too and we do not know how the two stack up, how it should be measured, or how important it is. From the demos I've seen, ray tracing is really a joke so far (from both camps).
How much do the Tensor cores matter? Should we add FP16 and FP32 speed capabilities too?

I think we really should wait for this stuff to catch on and become something we care about, instead of adding things while they're just hype. I really like the lean, easy application rather than having info in my face about everything, most of which I don't care about.
I am only looking for a count, not a performance metric/benchmark as you have described, and not waiting for the software to catch up to the hardware. The relevance of the technology doesn't play a role in my mind here. These cards have new hardware, and GPU-Z is a multi-function utility that lists hardware specifications... it makes sense (to me) that it should be there in some form and not ignored. Those who have the cards would care about seeing it. I can understand not putting it on the first page, though.


Cheers... back to my hole. :)
#6
DeathtoGnomes
So where is the Press Release tag?

/snicker :p

Used Lookup and saw this:

Both say 8192 MB.
#7
W1zzard
Yeah, the second one is 11 Gbps memory _clock_

Lookup doesn't submit memory clock, which is why it can't tell those two cards apart
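A hypothetical sketch of why that matters: two GTX 1080 variants differ only in memory clock (10 vs. 11 Gbps GDDR5X), so any lookup key that omits it cannot separate them. The key structure and clock figures below are illustrative, not GPU-Z's actual submission format:

```python
# Illustrative only: a lookup key without the memory clock collides for two
# cards that differ only in memory speed (GTX 1080 10 Gbps vs. 11 Gbps).
def lookup_key(name, vram_mb, gpu_mhz, mem_mhz, submit_mem_clock=False):
    key = (name, vram_mb, gpu_mhz)
    return key + ((mem_mhz,) if submit_mem_clock else ())

gtx1080_10g = ("GeForce GTX 1080", 8192, 1607, 1251)  # 10 Gbps GDDR5X
gtx1080_11g = ("GeForce GTX 1080", 8192, 1607, 1376)  # 11 Gbps GDDR5X

# Without the memory clock the keys are identical, so Lookup can't tell them apart.
print(lookup_key(*gtx1080_10g) == lookup_key(*gtx1080_11g))   # True
print(lookup_key(*gtx1080_10g, submit_mem_clock=True) ==
      lookup_key(*gtx1080_11g, submit_mem_clock=True))        # False
```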
#8
HD64G
Thanks @W1zzard! Nice update! Any info you can let out about Vega 20? Wondering now if it is coming for consumers in late 2018, since you've included it in GPU-Z...
#9
W1zzard
HD64G: Thanks @W1zzard! Nice update! Any info you can let out about Vega 20? Wondering now if it is coming for consumers in late 2018, since you've included it in GPU-Z...
It has been said many times: there is no Vega 20 for consumers this year, and I doubt there ever will be.

The card that has been added is a workstation card
#10
librin.so.1
W1zzard: I don't know yet. It could be something for the Advanced tab until more GPUs support it.
How about showing FLOPS (for the most common 32-bit float operations, I guess), along with the pixel and texture fillrates? This has been applicable to GPUs for pretty much a decade now, so I'm continuously confused why it hasn't been added yet.
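For what it's worth, those theoretical numbers fall straight out of specs GPU-Z already reports; a minimal sketch using RTX 2080 reference figures (2944 shaders, 184 TMUs, 64 ROPs, 1710 MHz boost):

```python
# Theoretical throughput from published specs (RTX 2080 reference values).
cuda_cores, tmus, rops = 2944, 184, 64
boost_mhz = 1710

# FP32: each CUDA core can retire one FMA (2 floating-point ops) per clock.
tflops_fp32 = 2 * cuda_cores * boost_mhz / 1e6   # ~10.07 TFLOPS
texel_rate  = tmus * boost_mhz / 1e3             # ~314.6 GT/s
pixel_rate  = rops * boost_mhz / 1e3             # ~109.4 GP/s
print(f"{tflops_fp32:.2f} TFLOPS, {texel_rate:.1f} GT/s, {pixel_rate:.1f} GP/s")
```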
#11
irazer
Can I save the BIOS of an RTX 2080 8 GB? Found the FE BIOS in the TPU database and I'm curious. :)
#12
EarthDog
@W1zzard - Quick question on the texture fill rates on the RTX 2080... The FE 2080 shows a 371.2 GT/s texture fill rate in 2.10, but 2.11 (with an MSI RTX 2080 Gaming X Trio) shows 278.8 GT/s (with a slightly higher boost clock; all other clocks remain the same). I am pretty sure the former (the higher value in 2.10) matches the specs from the review whitepapers... this is with the 411.70 driver (and 411.63).

EDIT: The former DOESN'T match the NVIDIA whitepapers... what am I missing here? :)

EDIT2: The whitepaper (fresh download today, just in case I missed an update) shows the 2080 at 314.6 / 331.2. GPU-Z shows it as 278.x. I have attached a pic with both running and neither matching the NV whitepaper.

EDIT3: Looks like 2.10 had 245 TMUs listed, while 2.11 is reporting the correct # of TMUs for the 2080... but the GT/s is still off compared to the NV whitepaper specs.

EDIT4: Yes, I'm aware 2.10 didn't support these GPUs at the time, but it seems either 2.11 is reporting GT/s incorrectly or the NV whitepapers are incorrect.

EDIT5: I also just noticed memory bandwidth isn't reporting correctly either.

Any help would be appreciated. :)
#13
W1zzard
EarthDog: ...or the NV whitepapers are incorrect.
NVIDIA uses the Boost Clock for their calculation; GPU-Z uses the Base Clock.
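To make the arithmetic concrete (texture fill rate = TMUs × clock), here is a quick sketch using the RTX 2080's published figures; it reproduces every number quoted in this thread:

```python
# RTX 2080: 184 TMUs; 1515 MHz base clock, 1710 MHz boost, 1800 MHz boost on the FE.
tmus = 184
print(tmus * 1515 / 1000)  # 278.76 GT/s -> the 278.8 GPU-Z 2.11 shows (base clock)
print(tmus * 1710 / 1000)  # 314.64 GT/s -> the whitepaper's 314.6 (boost clock)
print(tmus * 1800 / 1000)  # 331.2  GT/s -> the whitepaper's FE figure
# GPU-Z 2.10 mis-detected 245 TMUs, which is where the 371.2 GT/s came from:
print(245 * 1515 / 1000)   # 371.175 -> ~371.2 GT/s
```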
#14
EarthDog
Thank you for the reply. :)

Interesting... considering that running the base boost clock is a given in all but the most difficult environments - the cards almost never run at base clocks unless they're throttling heavily or some knuckle nuts is running Furmark against them (and even there they stay above the base boost clock).

Did NV change the way they do it recently, or has GPU-Z always used the base clock since boost was implemented? I don't recall those values being different in the past (but I never paid too much attention to it). Why go against NVIDIA's papers, seeing as how it takes an exception to not run base boost clocks...?

EDIT: I personally feel it should match what NVIDIA says, unless it's an unrealistic situation and users never hit the base boost clocks.
#15
Diverge
@W1zzard

GPU-Z v2.11 isn't allowing me to dump the BIOS of my FE 2080 Ti. It says it's not supported. Is it because NVIDIA changed something on release cards, or am I using the wrong version?

edit: I just looked at my BIOS version in GPU-Z and it's the same version listed in the database. So I guess there weren't any changes there... but for whatever reason, it says dumping my BIOS isn't supported on my device. Would drivers prevent it?
#16
W1zzard
Diverge: @W1zzard

GPU-Z v2.11 isn't allowing me to dump the BIOS of my FE 2080 Ti. It says it's not supported. Is it because NVIDIA changed something on release cards, or am I using the wrong version?

edit: I just looked at my BIOS version in GPU-Z and it's the same version listed in the database. So I guess there weren't any changes there... but for whatever reason, it says dumping my BIOS isn't supported on my device. Would drivers prevent it?
This version of GPU-Z doesn't have BIOS saving support for Turing yet; a newer build with that functionality is available in our test builds forum. We'll also release a new version of GPU-Z very soon.
#17
EarthDog
W1z,

Did NV change the way they do it recently, or has GPU-Z always used the base clock since boost was implemented? I don't recall those values being different in the past (but I never paid too much attention to it). Why go against NVIDIA's papers, seeing as how it takes an exception to not run base boost clocks...?
#18
W1zzard
EarthDog: W1z,

Did NV change the way they do it recently, or has GPU-Z always used the base clock since boost was implemented? I don't recall those values being different in the past (but I never paid too much attention to it). Why go against NVIDIA's papers, seeing as how it takes an exception to not run base boost clocks...?
GPU-Z has always used the base clock.
#19
EarthDog
So, I guess you are saying NVIDIA changed the way they do things, now basing it off their boost clock?

I still don't understand why the choice was made to use the base clock, considering it's never hit and goes against the NV whitepapers... shouldn't those align, unless it's a dubious value?

EDIT: Disregard here. Two threads on similar things... moving to the other thread. :)
#20
R0H1T
Have a bunch of dumps to submit for the latest version 2.12.0, but where?
#21
W1zzard
R0H1T: Have a bunch of dumps to submit for the latest version 2.12.0, but where?
What dumps?
#22
R0H1T
W1zzard: What dumps?
Crash dumps, assuming they're useful?
#23
W1zzard
R0H1T: Crash dumps, assuming they're useful?
When does it crash? Does the previous version crash too?