lol that does nothing. You don't seem to understand how power limits work. They aren't exe name based driver profiles. Heck, they aren't driver profiles at all.
Those are all reasons to care.
EVGA's warranty sucks less in my experience. They do tend to put effort into that image at least.
Of course, at times, it's needed, because their hardware can be hit and miss. Case in point? This thread.
Did you really just say I have no idea how power limits work?
I have a TDP modded GTX 1070, flashed with a HW programmer.
My 3090 FE is shunt modded.
Yikes man. Just yikes.
I know full well how power limits work. Hell, I'm on Elmor's Discord talking about this stuff with the LN2 boys quite a bit.
Maybe I'll educate you on how power limits work.
There is a TDP limit, which is total board power. When board power gets close to this limit (about 20W or so away, maybe?), the card reduces its clocks gracefully by moving down the V/F curve (the GPU VID, basically) to a lower voltage and the frequency step that corresponds to that voltage. Each frequency tier usually has at most three voltage points linked to it.
If the card still gets too close to the max TDP, it will continue to drop the clocks and VID until it gracefully stays below TDP.
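The graceful step-down described above can be sketched roughly like this. This is a toy model, not NVIDIA's actual boost algorithm; the curve values, the ~20W headroom margin, and the power-scaling approximation are all made up for illustration:

```python
# Toy model of graceful power-limit throttling down a V/F curve.
# Each entry is (frequency_mhz, voltage_v); real curves have many more
# points, often with up to ~3 voltage points per frequency tier.
VF_CURVE = [
    (1905, 0.900),
    (1860, 0.875),
    (1815, 0.850),
    (1770, 0.825),
    (1725, 0.800),
]

HEADROOM_W = 20  # rough margin before the limiter reacts (assumption)

def throttle_step(board_power_w, tdp_w, current_idx):
    """Step down the V/F curve while board power is within HEADROOM_W of TDP."""
    idx = current_idx
    while board_power_w >= tdp_w - HEADROOM_W and idx < len(VF_CURVE) - 1:
        idx += 1  # move to the next lower voltage/frequency point
        f0, v0 = VF_CURVE[idx - 1]
        f1, v1 = VF_CURVE[idx]
        # toy approximation: power scales roughly with V^2 * f
        board_power_w *= (v1 ** 2 * f1) / (v0 ** 2 * f0)
    return idx, board_power_w
```

So a card drawing 395W against a 400W TDP would drop one V/F point and land comfortably under the limit, while a card at 300W wouldn't move at all.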
TDP is the sum of the 8-pin power limit values in the vbios and the PCIe slot power limit value. However, the 8-pin limit values can be exceeded, as these values don't actually limit the 8 pins themselves.
There are also sub power rails, each measured by shunt resistors of its own. The sub rails are not directly tied to TDP, but the TDP slider can affect some of their values in undocumented ways. The sub rails are GPU Chip Power, MVDDC memory power, and SRC (power plane chip) power.
Each of these rails has its own power limit, with a 'default' and a 'max' value. The limit that triggers throttling never drops below the default value, while the max value is normalized with respect to the TDP slider itself if the slider is pushed past 100%.
It's also worth noting that some power rails are *sums* of auxiliary rails, which are controlled by the SRC chip and measured by their own shunts. If one shunt reports values that are out of whack, a composite rail can report a way-too-high power value, or in some cases 0 watts (massive under-reporting) that gets compensated by massive over-reporting on another rail (usually because of improper shunt mods). For example, GPU Chip Power on 2x8-pin cards is a *sum* of Misc0 Input Power, Misc2 Input Power and NVVDD1 Input Power. Yes, NVVDD1 is itself a summed rail, and it doesn't seem to be exposed in HWinfo64, much like the main NVVDD and MSVDD power rails (linked to the internal, not-VID, MSVDD and NVVDD voltages) aren't exposed either.
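To make the summing concrete, here's how a composite rail like GPU Chip Power gets assembled from its input rails, and how one misbehaving shunt skews the total. The rail names follow the post; the wattages are invented for illustration:

```python
# GPU Chip Power on 2x8-pin cards = Misc0 + Misc2 + NVVDD1 input power
# (per the description above; all wattages below are invented).
def gpu_chip_power(misc0_w, misc2_w, nvvdd1_w):
    return misc0_w + misc2_w + nvvdd1_w

# Healthy card: every shunt reports sanely.
healthy = gpu_chip_power(30, 25, 200)   # 255 W total

# Botched shunt mod: one shunt reads 0 W while another massively
# over-reports, so the composite total is garbage even if it looks
# superficially plausible.
botched = gpu_chip_power(0, 25, 320)    # 345 W, with Misc0 "invisible"
```

The point being: the board can't tell a real 345W draw from a shunt lying, which is exactly why bad shunt mods produce nonsense power readings.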
The SRC power rail limits are what control the max power draw of the individual 8 pins. The SRC chip has its own master power limit, but this is broken up into SRC1 and SRC2 (or SRC3 for 3x8-pin cards), which each have their own power limit and control what each 8 pin can draw. This is usually 150W default and 175W maximum on most cards that aren't running an XOC bios.
TDP Normalized % is not TDP % itself: it's the single worst rail across all of them, reported as its current draw versus its maximum allowed value.
For example, if your default memory power limit were 100W, your max MVDDC power limit were 125W, and your memory was drawing 150W, this would show a TDP Normalized % of 150% **if and only if no other power rail or sub power rail, including the AUX rails, exceeded 150% of its max value**, and it would trigger a power limit throttle via TDP Normalized even if total board power were far below its TDP limit. How far a normalized rail can exceed 100% without setting a throttle flag depends on the vbios limits and how far right the TDP slider can go.
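As a sanity check on the arithmetic, here's that normalized-% logic in code. I'm assuming the baseline is the rail's default limit, since that's what makes 150W against a 100W default come out to 150%; the rail names and every other wattage are invented:

```python
# TDP Normalized % = the single worst rail's draw vs. its limit, taken
# across all rails. Baseline here is the default limit (assumption:
# that's what makes 150 W / 100 W = 150% in the example above).
RAILS = {
    # name: (current_draw_w, default_limit_w) -- all values invented
    "MVDDC": (150, 100),  # memory rail, well over its limit
    "GPU":   (250, 300),
    "SRC1":  (140, 150),
    "SRC2":  (135, 150),
}

def tdp_normalized_pct(rails):
    """Highest draw-to-limit ratio across all rails, as a percentage."""
    return max(draw / limit * 100 for draw, limit in rails.values())

pct = tdp_normalized_pct(RAILS)  # 150.0 -> the memory rail is the offender
throttle = pct > 100             # power-limit flag trips even though total
                                 # board power (675 W of limits) isn't near TDP
```

Note that total board power never enters the calculation at all, which is the whole point: one rail over its own limit throttles the card regardless of TDP.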
XOC Bioses often come with massively increased rail limits.
NVVDD, MSVDD and PLL have their own internal power limits that also feed into TDP Normalized but are not exposed in HWinfo64. MSVDD drawing more power than is allowed at the current MSVDD voltage will cause "effective" core clocks to drop slowly relative to requested clocks, without triggering an actual power limit flag. NVVDD drawing more power than is allowed at the current NVVDD voltage will cause an instant power limit throttle, without effective clocks dropping first.
The Asus Strix has higher internal MSVDD and NVVDD power limits, similar to if the Kingpin cards were running with the MSVDD and NVVDD dip switches set to "on".
So yes, I know quite a bit about power limits.
How long have you been around?
Go check the 10-15 year old Rage3D archives (I didn't read the Nvidia forums back then; I'd switched from a Ti 4600 to AMD for years).
There was plenty of discussion about furmark back in the day. Back when Furmark destroyed cards with any sort of substandard VRM cooling or out of spec amp limits on the phases. AMD (Ati) and Nvidia started adding app detection to limit the power draw and massively throttle the GPU core clocks *IN THE DRIVERS*. That was back when you could do stuff like use a "prerender limit" (Flip queue size) of 0 in the registry, rather than having a value of "0" turn into default, like it does now.
People were able to find out by renaming the furmark exe to Quake3.exe to restore the original power draw and clocks (on cards that were beefy enough and had good enough cooling to handle it).
This was in the Windows XP days, so yeah, sometime around 15 years ago. Back then, "app detection" was done by checking the name of the executable file. People constantly renamed exes to get huge performance boosts or to remove graphical glitches in games.
I had absolutely NO idea if that still worked on Windows 10 or not. Clearly it doesn't. I was wrong to assume stuff that worked in XP would still work, considering how locked down cards are these days. Thank you for checking.
BTW, just to let you know: Furmark at a 400W TDP throttles my 3090 to *BELOW* its base clocks, at a 0.725v VID. The GPU runs at 1185-1200 MHz at 400W.
That's low-level throttling. That's below the actual base clock (1395 MHz), never mind boost clocks. Normal power limit throttling will never disable boost clocks like that.
At a 450W TDP, I got 1550 MHz core clocks (clock offset was +150 MHz in both cases).