Monday, September 17th 2018
TechPowerUp GPU-Z v2.11.0 Released
TechPowerUp today released the latest version of TechPowerUp GPU-Z, the popular graphics subsystem information and diagnostics utility. Version 2.11.0 introduces support for NVIDIA GeForce RTX 20-series "Turing" graphics cards, including the RTX 2080 Ti, RTX 2080, and RTX 2070. Support is also added for a few exotic OEM variants we discovered over the past months, including the GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, and GeForce 9200. From the AMD stable, we add support for "Vega 20," the "Fenghuang" semi-custom SoC for Zhongshan Subor, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, and Embedded V1807B. Intel UHD 610, UHD P630 (Xeon), and Coffee Lake GT3e (i5-8259U) are now supported.
Among the new features are system RAM usage sensors, temperature monitoring offsets for AMD Ryzen Threadripper 2000 series processors, and the ability to identify USB-C display output, GDDR6 memory standard, and 16 Gbit density memory chips. Several under-the-hood improvements were made, including WDDM-based memory monitoring for AMD GPUs, replacing ADL sensors that tend to be buggy. GPU-Z also cleans up QueryExternal files from your Temp folder. Grab GPU-Z from the link below.

DOWNLOAD: TechPowerUp GPU-Z v2.11.0
The change-log follows.
- Added NVIDIA GeForce RTX Turing support
- Added option to minimize GPU-Z on close
- Added system RAM memory usage sensor
- Added temperature monitoring offset for Threadripper 2nd gen
- Fixed typo in NVIDIA Perf Cap Reason tooltip
- GPU-Z no longer uses AMD ADL memory sensors because they are buggy; switched back to WDDM monitoring
- GPU Lookup feature improved by taking boost clock into account
- Added ability to clean up old QueryExternal files in temp directory
- Added support to BIOS parser for USB-C output, GDDR6 memory, 16 Gbit memory chips
- Added support for NVIDIA RTX 2080 Ti, RTX 2080, RTX 2070, GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, GeForce 9200
- Added support for AMD Vega 20, Fenghuang, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, Embedded V1807B
- Added support for Intel UHD 610, UHD P630 (Xeon), Coffee Lake GT3e (i5-8259U)
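The QueryExternal cleanup item in the changelog above can be sketched roughly as follows. This is a hypothetical illustration, not GPU-Z's actual code, and the `*QueryExternal*` file name pattern is an assumption for the sake of the example:

```python
import os
import tempfile
from pathlib import Path

def clean_query_external(temp_dir=None):
    """Delete leftover QueryExternal files from the temp directory.

    Returns the number of files removed. Errors on individual files
    (e.g. a file still locked by another process) are silently skipped.
    """
    base = Path(temp_dir or tempfile.gettempdir())
    removed = 0
    # The name pattern below is assumed for illustration; the real
    # files GPU-Z writes may be named differently.
    for path in base.glob("*QueryExternal*"):
        try:
            path.unlink()
            removed += 1
        except OSError:
            pass  # in use or permission denied; leave it for next run
    return removed
```

Skipping locked files rather than failing matters here because the utility may still be running a second instance that holds one of these files open.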
23 Comments on TechPowerUp GPU-Z v2.11.0 Released
Will GPUz eventually have fields for the RT and Tensor core counts for NVIDIA GPUs?
How much do the Tensor cores matter? Should we add FP16/FP32 speed capabilities?
I think we should wait for this stuff to really catch on and become something we care about, rather than adding features that are just pre-release hype. I really like the lean, easy application instead of having info about everything in my face, most of which I don't care about.
Edit: I love the application, keep up the good work.
Seeing as how it is hardware and part of the specs of the cards, I would like to see it listed there somewhere for sure. I am only looking for a count, not a performance metric/benchmark as you have described or waiting for the software to catch up to the hardware. The relevance of the technology doesn't play a role in my mind here. These cards have new hardware and GPUz is a multi-function utility that lists hardware specifications... it makes sense (to me) it should be there in some form and not ignored. Those that have the cards would care about seeing it. I can understand not putting it on the first page though.
Cheers... back to my hole. :)
/snicker :p
used Lookup and saw this
both say 8192 MB
Lookup doesn't submit memory clock, which is why it can't tell those two cards apart
The card that has been added is a workstation card
EDIT: The former DOESN'T match the NVIDIA whitepapers... what am I missing here? :)
EDIT2: The whitepaper (fresh download today, just in case I missed an update) shows the 2080 with 314.6 / 331.2. GPU-Z shows it as 278.x. I have attached a pic with both running and neither matching the NV whitepaper.
EDIT3: Looks like 2.10 had 245 TMUs listed and 2.11 is reporting the correct # of TMUs for the 2080... but the GT/s is still off compared to the NV whitepaper specs.
EDIT4: Yes, I'm aware 2.10 didn't support these GPUs at the time, but it seems 2.11 is reporting GT/s incorrectly, or the NV whitepapers are.
EDIT5: I also just noticed memory bandwidth isn't reporting correctly either.
Any help would be appreciated. :)
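The discrepancy described above can be reproduced arithmetically: theoretical texture fill rate is just TMU count times clock speed, so the answer depends entirely on which clock you plug in. A minimal sketch, using NVIDIA's published RTX 2080 figures (184 TMUs, 1515 MHz base, 1710 MHz reference boost, 1800 MHz Founders Edition boost, 14 Gbps GDDR6 on a 256-bit bus) as assumptions pulled from the spec sheets rather than anything read out of GPU-Z:

```python
def fill_rate_gtexels(tmus, clock_mhz):
    """Theoretical texture fill rate: texture units x clock, in GTexel/s."""
    return tmus * clock_mhz / 1000.0

def mem_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Theoretical memory bandwidth: per-pin data rate x bus width, in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

tmus = 184  # RTX 2080
print(fill_rate_gtexels(tmus, 1515))  # ~278.8 GTexel/s -- the "278.x" GPU-Z shows (base clock)
print(fill_rate_gtexels(tmus, 1710))  # ~314.6 GTexel/s -- whitepaper reference (boost clock)
print(fill_rate_gtexels(tmus, 1800))  # ~331.2 GTexel/s -- whitepaper Founders Edition

print(mem_bandwidth_gbs(14, 256))     # 448.0 GB/s for 14 Gbps GDDR6 on a 256-bit bus
```

Multiplying 184 TMUs by the 1515 MHz base clock yields the ~278 GT/s figure GPU-Z reports, while the 1710 MHz and 1800 MHz boost clocks reproduce the whitepaper's 314.6 and 331.2 numbers exactly, so the gap appears to be a base-versus-boost reporting choice rather than a bug in either source.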
Interesting... considering running the base boost clock is a given in all but the most difficult environments - the cards almost never run base clocks unless they're throttling heavily or some knuckle nuts is running Furmark against them (and even then they're still over the base boost clock).
Did NV change the way they did it recently, or did GPUz always use the base clock after boost was implemented? I don't recall those values being different in the past (but I never paid too much attention to it). Why go against NVIDIA's papers, seeing as it takes an exceptional situation to not run the base boost clocks...?
EDIT: I personally feel it should match what NVIDIA says unless it's an unrealistic situation and users never hit the base boost clocks.
GPU-Z v2.11 isn't allowing me to dump the BIOS of my FE 2080 Ti. It says it's not supported. Is it because NVIDIA changed something on release cards, or am I using the wrong version?
edit: I just looked at my BIOS version in GPU-Z and it's the same version listed in the database. So I guess there weren't any changes there... but for whatever reason, it says dumping my BIOS isn't supported by my device. Would drivers prevent it?
Did NV change the way they did it recently, or did GPUz always use the base clock after boost was implemented? I don't recall those values being different in the past (but I never paid too much attention to it). Why go against NVIDIA's papers, seeing as it takes an exceptional situation to not run the base boost clocks...?
I still don't understand why the choice was made to use the base clock, considering it's never hit and goes against the NV whitepapers... shouldn't those align unless it's a dubious value?
EDIT: Disregard here. Two threads on similar things... moving to the other thread. :)