
[GPU-Z] Power reading name fix needed: GPU Chip Power on NV vs. GPU Chip Power on AMD

Joined
May 8, 2016
Messages
1,919 (0.61/day)
System Name BOX
Processor Core i7 6950X @ 4,26GHz (1,28V)
Motherboard X99 SOC Champion (BIOS F23c + bifurcation mod)
Cooling Thermalright Venomous-X + 2x Delta 38mm PWM (Push-Pull)
Memory Patriot Viper Steel 4000MHz CL16 4x8GB (@3240MHz CL12.12.12.24 CR2T @ 1,48V)
Video Card(s) Titan V (~1650MHz @ 0.77V, HBM2 1GHz, Forced P2 state [OFF])
Storage WD SN850X 2TB + Samsung EVO 2TB (SATA) + Seagate Exos X20 20TB (4Kn mode)
Display(s) LG 27GP950-B
Case Fractal Design Meshify 2 XL
Audio Device(s) Motu M4 (audio interface) + ATH-A900Z + Behringer C-1
Power Supply Seasonic X-760 (760W)
Mouse Logitech RX-250
Keyboard HP KB-9970
Software Windows 10 Pro x64
There is a disconnect between what GPU-Z shows as "GPU Chip Power Draw" on NVIDIA and AMD GPUs.
Examples:
1) RX 6700 XT:
[Attachment: TGP.png]

2) RTX 3070 Ti:
[Attachment: Board power.png]

In short: "GPU Chip Power Draw" on AMD means [TGP] (or the maximum TDP allowed for the GPU), while on NVIDIA it means "GPU only".
To get a proper comparison between the two power draws in GPU-Z, you have to compare AMD's "GPU Chip Power Draw" to NVIDIA's "Board Power Draw", which is counterintuitive naming and confusing.
I would like to point out that HWiNFO64 shows [GPU ASIC Power], [TGP] and [GPU PPT] as the same value as GPU-Z's "GPU Chip Power", while [GPU Core Power] in HWiNFO64 is more in line with what I think "GPU Chip Power" means on NVIDIA cards in GPU-Z.
Thank you for any kind of fix (an easy name change, or changing the sensor behind "GPU Chip Power Draw" and adding a proper separate [TGP] reading?).
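
For reference, NVIDIA exposes its measured total board power through NVML, so the value GPU-Z labels "Board Power Draw" can be read directly. A minimal sketch using the pynvml bindings (an illustration of where NVIDIA's number comes from, not GPU-Z's actual code):

```python
# Minimal sketch: read NVIDIA's board power via NVML (pip install nvidia-ml-py).
# On boards with measurement circuitry, nvmlDeviceGetPowerUsage() reports
# total board power in milliwatts -- the value behind "Board Power Draw".
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)               # first GPU
power_mw = pynvml.nvmlDeviceGetPowerUsage(handle)           # milliwatts
limit_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)   # milliwatts
print(f"Board power: {power_mw / 1000:.1f} W (limit {limit_mw / 1000:.1f} W)")
pynvml.nvmlShutdown()
```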
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,963 (3.72/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
To get a proper comparison between
You simply cannot compare them. AMD uses a formula to estimate a value whose mechanics are unknown. NVIDIA actually measures the real current and voltage using dedicated circuitry.
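
As a toy illustration of what such a formula-based estimate might look like (purely hypothetical block names and weights, not AMD's actual model): activity counters from each block are scaled by pre-characterized per-unit costs and summed:

```python
# Toy sketch of a formula-based power estimate -- hypothetical weights,
# not AMD's actual algorithm. Each block reports an activity level (0..1)
# that is scaled by a pre-characterized cost and summed with idle power.

IDLE_WATTS = 15.0
COST_TABLE = {"shaders": 150.0, "memory": 40.0, "media": 12.0}  # made-up values

def estimate_power(activity: dict[str, float]) -> float:
    return IDLE_WATTS + sum(activity[b] * COST_TABLE[b] for b in activity)

# Heavy gaming load: the estimate is only as good as the model's fit to reality
print(estimate_power({"shaders": 0.95, "memory": 0.70, "media": 0.0}))
# -> 15 + 142.5 + 28 + 0 = 185.5
```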

People don't understand "ASIC"; they barely understand that "GPU" does not mean "the card", so I added "chip". I have about 50 threads where people were confused in the past; this stopped when I renamed the sensor to "GPU Chip Power".
 
Joined May 8, 2016
If that's the case, how come AMD cards know whether they are over or under their power spec (if the reading is THAT inaccurate vs. NVIDIA's)?
I mean, there is no point in power limiting a card down to single-digit watts if the card can't tell whether it's pulling 280 W or 180 W from the 12 V rail.

From my point of view: both NVIDIA and AMD cards have a set power limit, which is monitored by at least one sensor, and that sensor's power draw value will be higher than the value from ANY other sensor built into the card.
THAT highest value is what gets power limited and decides the TDP of the card.
I find it odd that such sensors on NVIDIA and AMD are named differently, but if AMD's needs to be "GPU Chip Power Draw", fine.
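
That mental model is essentially a control loop. A toy sketch of it (not either vendor's actual firmware): whichever reading the limit is tied to gets compared against the budget each tick, and clocks step down while it is exceeded:

```python
# Toy power-limiter loop -- a sketch of the mental model above, not real
# GPU firmware. The sensor reading tied to the limit is compared against
# the budget every tick; clocks step down while the budget is exceeded.

POWER_LIMIT_W = 290.0
CLOCK_STEP_MHZ = 15

def limiter_tick(reported_power_w: float, core_clock_mhz: int) -> int:
    if reported_power_w > POWER_LIMIT_W:
        return core_clock_mhz - CLOCK_STEP_MHZ   # throttle
    return core_clock_mhz + CLOCK_STEP_MHZ       # headroom, allow boost

clock = 2600
for power in (250.0, 295.0, 310.0, 280.0):       # simulated readings
    clock = limiter_tick(power, clock)
    print(f"{power:6.1f} W -> {clock} MHz")
```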

Here's a zoomed-in composite screenshot of the GPU-Z sensor tab from my RTX 3070 Ti (top left) and RX 6700 XT (bottom left).
I added HWiNFO64 for the RX 6700 XT on the right (to show the other sensors the Radeon card has):
[Attachment: AMDvNV.png]

Switch around "Board Power Draw" and "GPU Chip Power Draw" on the NVIDIA card, and that also fixes the naming inconsistency.

Still, why can't GPU-Z just name it "TDP sensor" or "Power" (for all NVIDIA, AMD, and Intel cards), and simply not care how accurate it is?
I think it's up to the user to decide what is correct/true/useful to him or her, and only he or she should decide what to do with the data provided.
Shouldn't having more ways to tell different power draws apart (regardless of accuracy) be more important from a diagnostic-tool standpoint?

Unless you are correct and AMD plainly lies, and there is a huge difference between what the built-in sensors show and what the card actually pulls from all outside sources (PCIe slot + 8-pin/6-pin connectors). At which point, why bother with anything?
Just provide a tip to Steve from GN/HUB, or to ElmorLabs. I bet they would gladly validate AMD's TDP sensor claims vs. reality and check for you whether the cards' sensors really are that inaccurate (to the point of not being comparable between NVIDIA and AMD cards).
 

W1zzard

know whether they are over or under their power spec (if the reading is THAT inaccurate vs. NVIDIA's)?
That's exactly what AMD invented the algorithm for: to estimate when the card goes above a theoretical power limit, without spending money on actual sensor circuitry. It was never designed to report "x W" of power with serious accuracy. NVIDIA has been using measurement circuits on all but their cheapest cards for a decade+; this adds something like $5 to the board cost.
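
For context on what that circuitry does, here is a generic sketch of shunt-based sensing (illustrative values, not NVIDIA's actual part choices): a monitor chip reads the small voltage drop across a known shunt resistor in series with each input rail and converts it to current and power:

```python
# Generic shunt-based power sensing math -- illustrative values, not
# NVIDIA's actual parts. A current monitor reads the voltage drop across
# a small known resistor placed in series with each input rail.

def rail_power(bus_voltage: float, shunt_drop_v: float, shunt_ohms: float) -> float:
    current = shunt_drop_v / shunt_ohms   # Ohm's law: I = V / R
    return bus_voltage * current          # P = V * I

# 12 V rail, 25 mV drop across a 5 mOhm shunt -> 5 A -> 60 W
print(rail_power(12.0, 0.025, 0.005))     # 60.0

# Board power = sum over all input rails (PCIe slot + each 8-pin/6-pin)
rails = [(12.0, 0.020, 0.005), (12.0, 0.055, 0.005), (3.3, 0.004, 0.005)]
print(sum(rail_power(*r) for r in rails)) # 182.64
```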

Unless you are correct and AMD plainly lies, and there is a huge difference between what the built-in sensors show and what the card actually pulls from all outside sources (PCIe slot + 8-pin/6-pin connectors). At which point, why bother with anything?
Just provide a tip to Steve from GN/HUB, or to ElmorLabs. I bet they would gladly validate AMD's TDP sensor claims vs. reality and check for you whether the cards' sensors really are that inaccurate (to the point of not being comparable between NVIDIA and AMD cards).
It is actually common knowledge. Not sure why I would ask someone with worse equipment and test methods to validate something that I've personally seen in every RX 6000 review? Also, just look at the PCB: AMD RX 7000 has proper measurement circuitry on some models.
 
Joined
Nov 18, 2010
Messages
7,595 (1.48/day)
Location
Rīga, Latvia
System Name HELLSTAR
Processor AMD RYZEN 9 5950X
Motherboard ASUS Strix X570-E
Cooling 2x 360 + 280 rads. 3x Gentle Typhoons, 3x Phanteks T30, 2x TT T140 . EK-Quantum Momentum Monoblock.
Memory 4x8GB G.SKILL Trident Z RGB F4-4133C19D-16GTZR 14-16-12-30-44
Video Card(s) Sapphire Pulse RX 7900XTX. Water block. Crossflashed.
Storage Optane 900P[Fedora] + WD BLACK SN850X 4TB + 750 EVO 500GB + 1TB 980PRO+SN560 1TB(W11)
Display(s) Philips PHL BDM3270 + Acer XV242Y
Case Lian Li O11 Dynamic EVO
Audio Device(s) SMSL RAW-MDA1 DAC
Power Supply Fractal Design Newton R3 1000W
Mouse Razer Basilisk
Keyboard Razer BlackWidow V3 - Yellow Switch
Software FEDORA 41
To measure TDP, I can suggest the power meter adapter Roman sells, for peace of mind. WireView?... I don't know the polling rate of the device or how it calculates the average and pushes it to the screen, though. It is a matter of taste. He dropped the ball by not making it with an optional Bluetooth module and a simple app with a known protocol that can log and output graphs... he went only for the bling. The added cost would be minimal, but it would turn it into a useful debug tool, like finding real undervolting gains: you would have a normal plot and could then calculate them. Actually, it would be a nice GPU-Z idea: add a time-based consumption tab that measures consumed energy over a time frame, then make certified GPU-Z devices that pair with it :D. Money, BOSS.
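
The time-based consumption idea boils down to integrating power samples over the polling interval. A minimal sketch (assuming one reading per fixed interval, rectangle-rule integration):

```python
# Minimal sketch of "consumed energy in a time frame": integrate power
# samples over the polling interval (rectangle rule). Readings are made up.

def energy_wh(samples_w: list[float], interval_s: float) -> float:
    joules = sum(samples_w) * interval_s   # W * s = J
    return joules / 3600.0                 # 1 Wh = 3600 J

log = [284.0, 291.5, 288.2, 276.9, 290.1]  # e.g. one sample per second
print(f"{energy_wh(log, 1.0):.4f} Wh")     # ~0.3974 Wh over these 5 s
```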
 
Joined May 8, 2016
It is actually common knowledge. Not sure why I would ask someone with worse equipment and test methods to validate something that I've personally seen in every RX 6000 review? Also, just look at the PCB: AMD RX 7000 has proper measurement circuitry on some models.
OK, then don't show the power readings of AMD cards the same way as NVIDIA's, if you think they are inaccurate to that level.
I did W/FPS graphs based on values provided by GPU-Z, and now you are telling me it's inaccurate on AMD to the point that ALL the GPU-Z data for it is irrelevant/wrong and cannot be compared to NVIDIA.
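
For anyone reproducing such W/FPS numbers: GPU-Z can log its sensors to a comma-separated text file, and a short script can average the power column. A sketch only; the column name and log file name below are assumptions and vary by card and version:

```python
# Sketch: average the power column from a GPU-Z sensor log for a W/FPS
# calculation. GPU-Z writes a comma-separated text log with padded column
# names; the column and file names here are assumptions and vary by card
# ("GPU Chip Power Draw [W]" on the AMD cards discussed in this thread).
import csv

COLUMN = "Board Power Draw [W]"            # assumed column name

def average_power(log_path: str) -> float:
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        reader = csv.reader(f)
        header = [name.strip() for name in next(reader)]
        idx = header.index(COLUMN)
        watts = [float(row[idx]) for row in reader
                 if len(row) > idx and row[idx].strip()]
    return sum(watts) / len(watts)

avg = average_power("GPU-Z Sensor Log.txt")  # assumed default log name
print(f"average power: {avg:.1f} W")
```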

I don't want anyone else to assume it is the same (as I did) when the author of GPU-Z says it is not.
Thank you for the clarification.

To measure TDP, I can suggest the power meter adapter Roman sells, for peace of mind
Not everyone has the time and energy to put a power meter on all their cards, when the same data could be pulled from GPU-Z or HWiNFO64 in a far simpler (and cheaper) manner. It sucks that the data isn't accurate, though (for AMD cards).
 

W1zzard

to put a power meter on all their cards, when the same data could be pulled from GPU-Z or HWiNFO64 in a far simpler (and cheaper) manner
That is simply not true. On NVIDIA (and on RDNA 3 cards with board power measurement circuits) I'd say it is a reasonable compromise, but not for older Radeon cards.

OK, then don't show the power readings of AMD cards the same way as NVIDIA's
That is what I'm doing. That's why the sensors are named "Board Power Draw" (NVIDIA) vs. "GPU Chip Power Draw" (AMD). Only AMD cards without power measurement hardware use "GPU Chip Power Draw". The tooltip text will be improved in GPU-Z v3, which has a big UI rewrite.
 
Joined May 8, 2016
I meant not showing ANY power draw numbers for AMD cards... as in, not even %.
In that case, I will just point from my FPS/W graphs to this thread to explain why AMD numbers can't be trusted.
Thank you.
 