
RTX 3080 Crash to Desktop Problems Likely Connected to AIB-Designed Capacitor Choice

Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases.
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) modded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
Right, like drivers that need further refinement.

I simply wonder how NVIDIA will prioritize drivers that need further refinement.

A few months ago the GTX 1660 Super was introduced as a fresh product option, and as a fresh one, its drivers should have gained further performance and compatibility.
Now the RTX 3000 issue will shift the focus of NVIDIA's driver developers in that direction.
In plain English, the expectations of thousands of people are put on hold.
 
Joined
Jun 13, 2020
Messages
12 (0.01/day)
Well, Asus went the whole hog and implemented 6 MLCCs in their design, which simply suggests Nvidia partners *knew* about possible weaknesses in this area...
What is the only logical conclusion here? I leave it to you...
 
Joined
Sep 27, 2020
Messages
80 (0.05/day)
Well, Asus went the whole hog and implemented 6 MLCCs in their design, which simply suggests Nvidia partners *knew* about possible weaknesses in this area...
What is the only logical conclusion here? I leave it to you...
Asus TUF and FE also have problems...


I hope that tomorrow Nvidia releases a statement on this issue and adds some information about it, because today it is all rumour and noise... If the FE cards are having the same problems, I'm inclined to doubt that this is a HW problem...
 
Joined
Apr 14, 2017
Messages
25 (0.01/day)
Asus TUF and FE also have problems...


I hope that tomorrow Nvidia releases a statement on this issue and adds some information about it, because today it is all rumour and noise... If the FE cards are having the same problems, I'm inclined to doubt that this is a HW problem...

A source to give us?
 
Joined
May 15, 2020
Messages
697 (0.41/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
A source to give us?

It is not. I'm actually attributing it to Samsung. I think you remember ChipGate?
Whatever it is, it seems to be related to the quality of the node. Whether downclocking or overvolting will be the result, it seems to come from the fact that Nvidia is asking more from the silicon than it is capable of giving. The correction will most likely result in either somewhat increased TDP or somewhat decreased performance, or a bit of both. There will be no recall for this, because the clock speeds at which these problems arise are way above what is advertised on the box.
 
Joined
Sep 27, 2020
Messages
7 (0.00/day)


Whatever it is, it seems to be related to the quality of the node. Whether downclocking or overvolting will be the result, it seems to come from the fact that Nvidia is asking more from the silicon than it is capable of giving. The correction will most likely result in either somewhat increased TDP or somewhat decreased performance, or a bit of both. There will be no recall for this, because the clock speeds at which these problems arise are way above what is advertised on the box.

I agree with you. They are only required to deliver what the box promises, and GPU Boost is designed just to make your purchase more valuable. So their solution will surely be a downclock. It looks like someone tested his own overclock and was still able to get an increase of 4 to 6 FPS in games without crashing. They probably did juice these cards from the start to make the generational leap as large as it is.
 
Joined
Sep 27, 2020
Messages
2 (0.00/day)
Why are you guys complaining? You chose to be beta testers for Nvidia. Even when you download a new driver, there is a box you can check if you wish to send them data about your crashes. Of course it was rushed to market, like any other product from any manufacturer, to increase the stock price and satisfy the board and the shareholders. We saw the result of that in the crashes. They didn't do enough alpha and beta testing. The story repeats itself.
 
Joined
May 15, 2020
Messages
697 (0.41/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,668 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
Why are you guys complaining? You chose to be beta testers for Nvidia. Even when you download a new driver, there is a box you can check if you wish to send them data about your crashes. Of course it was rushed to market, like any other product from any manufacturer, to increase the stock price and satisfy the board and the shareholders. We saw the result of that in the crashes. They didn't do enough alpha and beta testing. The story repeats itself.

GPUs have been released without these issues for a very long time. Sometimes there have been problems (remember the 8800 GT, even though those problems didn't arise when the cards were new), but in general GPUs have been pretty solid things, as they should be. Power delivery electronics is a solved problem.

Nobody signed up to be a beta tester; the advertising was "It just works!"

Was it though? Because that would be silly. It's a finished product. If it doesn't work, it's defective; saying it works is like saying it isn't defective.
 
Joined
May 15, 2020
Messages
697 (0.41/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Was it though? Because that would be silly. It's a finished product. If it doesn't work, it's defective; saying it works is like saying it isn't defective.
Hey, if you don't like the jingle, take it up with Jensen :p.
 
Joined
Nov 21, 2010
Messages
2,355 (0.46/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
Well, Asus went the whole hog and implemented 6 MLCCs in their design, which simply suggests Nvidia partners *knew* about possible weaknesses in this area...
What is the only logical conclusion here? I leave it to you...

From the little info that was leaked: the partners get a reference design with recommended and minimum specs from Nvidia, tweak it according to their design goals, then make the card and test it. Some found out they were having issues, but by that point it was past the point of no return. Some implemented band-aid fixes (Zotac) and others shipped as-is or were unaware. Since they seem to be under a gag order, there is no telling who knew what.
 
Joined
Sep 27, 2020
Messages
7 (0.00/day)
Why are you guys complaining? You chose to be beta testers for Nvidia. Even when you download a new driver, there is a box you can check if you wish to send them data about your crashes. Of course it was rushed to market, like any other product from any manufacturer, to increase the stock price and satisfy the board and the shareholders. We saw the result of that in the crashes. They didn't do enough alpha and beta testing. The story repeats itself.

If anything, Turing was the beta test for this whole thing. And it didn't have problems; it just wasn't good enough. If a new architecture makes you a beta tester for it, then we have been beta testers for the past 6 years at least, from as far back as I have been watching Nvidia's cards. When are we going to be out of beta?
 
Joined
Jun 13, 2020
Messages
12 (0.01/day)
Asus TUF and FE also have problems...


I hope that tomorrow Nvidia releases a statement on this issue and adds some information about it, because today it is all rumour and noise... If the FE cards are having the same problems, I'm inclined to doubt that this is a HW problem...

Ok then, I just misread or misunderstood something... If that is true, then this might only be a part of the problem, and a big embarrassment for card manufacturers!
Did they push the silicon too far from the start? Then they will have to release updated firmware with some downclocking or something like that...
Sticking with my 1080 for now; I'll wait till the 20 series sells dirt cheap on eBay before pulling the trigger!
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
I get the feeling AMD could help Nvidia out a little bit. Though SenseMI is proprietary, it does work to curtail power. Threadripper exists for the sole reason that PBO works as intended. Sudden spikes are very damaging to the customer base, as noted here.
 

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,800 (1.64/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Thermalright Phantom Spirit SE
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage Nextorage NE1N 2TB ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard SteelSeries Apex 7
Software Windows 11 +startisallback
People are having a hard time grasping how many corners AIBs cut with their boards these days.

Basically, every corner they can cut, they cut, and it's just enough to push the stability envelope past the limit when you are trying to hit that magical 2 GHz marketing number.

Ever since Nvidia started making their own boards, the AIBs have been cutting every corner possible. If you want a reliable card, you buy the Nvidia-made one (unless you want to spend the money for the absolute top-tier cards like a Strix, Hall of Fame, or Kingpin).

This is a complete reversal from how it used to be.

It used to be that AIB cards offered more bang for the buck, with better overclocking and better cooling. That is frankly no longer the case, and the short of it is: unless Nvidia relaxes some of the restrictions, the AIBs' only remaining reason to exist is to make cards for Nvidia.
 
Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases.
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) modded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
I get the feeling AMD could help Nvidia out a little bit. Though SenseMI is proprietary, it does work to curtail power. Threadripper exists for the sole reason that PBO works as intended. Sudden spikes are very damaging to the customer base, as noted here.

I would prefer this simply not to happen; AMD should focus its efforts on its own products, which waste more on-board memory than an NVIDIA card uses in the same game at the same resolution.

We are not shooting at NVIDIA's legs here; we are simply trying to get a bit of encyclopedic understanding of what went wrong.
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
I would prefer this simply not to happen; AMD should focus its efforts on its own products, which waste more on-board memory than an NVIDIA card uses in the same game at the same resolution.

We are not shooting at NVIDIA's legs here; we are simply trying to get a bit of encyclopedic understanding of what went wrong.
A few years back, Linus made a comment that Nvidia's polling method was precise but less frequent than AMD Radeon's faster guesswork. Nvidia's number was 33 ms of latency, afaik.
This could be related to slow responses to monitored events, wouldn't you say? We are talking about 2.5 GHz chips that were previously impossible when this monitoring software first took over.

PS: I indeed think it is as simple as that: vdroop that occurs quicker than 30 times a second, which is beyond the monitoring resolution. As with CPU overclocking, a higher base voltage or LLC would further complicate the power requirements. The solution is definitely good, but it has to stay inside the frame of parametrization. Something is voiding the algorithm.
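
To make the timing argument concrete, here is a toy simulation (Python, with invented numbers; nothing below reflects Nvidia's actual telemetry or polling rates) of a 0.5 ms droop slipping between ~33 ms sensor polls:

```python
# Toy model: a ~30 Hz monitoring loop vs. a 0.5 ms voltage droop.
# All values are invented for illustration; this is not real telemetry.

SIM_STEP = 1e-4        # model the rail at 0.1 ms resolution
POLL_PERIOD = 33e-3    # sensor read every ~33 ms (about 30 Hz)
DURATION = 0.2         # simulate 200 ms of wall time

def rail_voltage(t: float) -> float:
    """Nominal 1.05 V rail with one 0.5 ms droop starting at t = 100 ms."""
    return 0.85 if 0.100 <= t < 0.1005 else 1.05

# What actually happens on the rail:
true_min = min(rail_voltage(i * SIM_STEP) for i in range(int(DURATION / SIM_STEP)))

# What a 30 Hz poller reports:
seen_min = min(rail_voltage(i * POLL_PERIOD) for i in range(int(DURATION / POLL_PERIOD)))

print(f"real minimum on the rail : {true_min:.2f} V")  # 0.85 V
print(f"minimum the poller saw   : {seen_min:.2f} V")  # 1.05 V - the droop fell between polls
```

A 30 Hz log shows a perfectly healthy rail even though the GPU briefly sagged well below a stable voltage, which is exactly the kind of event that could crash the card without ever showing up in monitoring.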
 

Deleted member 24505

Guest
Ok then, I just misread or misunderstood something... If that is true, then this might only be a part of the problem, and a big embarrassment for card manufacturers!
Did they push the silicon too far from the start? Then they will have to release updated firmware with some downclocking or something like that...
Sticking with my 1080 for now; I'll wait till the 20 series sells dirt cheap on eBay before pulling the trigger!

Sticking with my 1080 for now, me too. Mine does 2140/5670 fine in my custom loop.
 
Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases.
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) modded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
People are having a hard time grasping how many corners AIBs cut with their boards these days.

Basically, every corner they can cut, they cut, and it's just enough to push the stability envelope past the limit when you are trying to hit that magical 2 GHz marketing number.

This text does not make any sense. From now on, everyone, please use the AIB acronym in its full form so confusion is avoided.
a) AIB refers to 'non-reference' graphics card designs.
b) An AIB supplier or AIB partner is a company that buys the AMD (or Nvidia) graphics processing unit, puts it on a board, and brings a complete and usable graphics card, or AIB, to market.
 
Joined
Jul 19, 2016
Messages
484 (0.16/day)
Well, Asus went the whole hog and implemented 6 MLCCs in their design, which simply suggests Nvidia partners *knew* about possible weaknesses in this area...
What is the only logical conclusion here? I leave it to you...

There is evidence that it's not just the MLCCs that are to blame. It could be a couple of hardware faults, plus Nvidia driver problems on top.
 
Joined
Sep 27, 2020
Messages
7 (0.00/day)
People are having a hard time grasping how many corners AIBs cut with their boards these days.

Basically, every corner they can cut, they cut, and it's just enough to push the stability envelope past the limit when you are trying to hit that magical 2 GHz marketing number.

Ever since Nvidia started making their own boards, the AIBs have been cutting every corner possible. If you want a reliable card, you buy the Nvidia-made one (unless you want to spend the money for the absolute top-tier cards like a Strix, Hall of Fame, or Kingpin).

This is a complete reversal from how it used to be.

It used to be that AIB cards offered more bang for the buck, with better overclocking and better cooling. That is frankly no longer the case, and the short of it is: unless Nvidia relaxes some of the restrictions, the AIBs' only remaining reason to exist is to make cards for Nvidia.

Do you have any evidence of this? I'm trying to look this up and I can't find any information for or against it. I know companies will cut corners wherever possible, but in my experience the AIB cards have always performed better, and I buy a new graphics card every year. I have had a Gigabyte Windforce, an Asus Strix, and an MSI Gaming X over the last 5 years. Are those cards normally done better? I usually get good temperatures and overclocks with them.
 
Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases.
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) modded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
A few years back, Linus made a comment that Nvidia's polling method was precise but less frequent than AMD Radeon's faster guesswork. Nvidia's number was 33 ms of latency, afaik.
This could be related to slow responses to monitored events, wouldn't you say? We are talking about 2.5 GHz chips that were previously impossible when this monitoring software first took over.

In 1996 my first graphics card was able to do 2D and 25 fps of video, and no 3D, as that came later.
20 years ago we complained (me too) that NVIDIA was flooding the market with VGA card releases when the positive performance scaling was just 12%.
The TNT series, then TNT2, and on and on... more money spent without real benefit.
Since 2012 I stopped being a close follower of 3D card development; I used the storage capacity of my brain for other, far more productive thoughts.

Software development is always a second, supporting step: if NVIDIA had not added power usage monitoring sensors, no one would be able to see power-related information (electrical measurements).
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
Software development is always a second, supporting step: if NVIDIA had not added power usage monitoring sensors, no one would be able to see power-related information (electrical measurements).
The sensors have temporal resolution, which is what I'm saying. Props to Nvidia again; I used to go heavy on overclocking methods. This one, throttling near the voltage threshold limit, is the best (nothing else saves power while still at maximum performance), but the drawback is that you have to act quickly.
If only we still had @The Stilt around.

The method I would suggest on a big-die GPU is still the same: try incrementally in 50 MHz steps and see if there is a cutoff point where this behaviour starts. 1600 MHz, 1650 MHz, 1700 MHz... I'm not a metrologist (a science I highly respect), but I can at least go down to the minimum resolution (1 MHz) until the problem begins.
I used to bring in ATi Tray Tools, since most software did not come with its own error counter. I would monitor the GPU frequency time log in its overclock test and watch for the card to spit out errors in ATT (you had to select OSD error check to monitor it live in the lower corner).
It was great fun, but such old software has a habit of damaging your card when continuously running at 5000 fps, lol.
I cannot be of much other help beyond pointing out which software I used, as a frame of reference.
I hope they fix it, because it rekindles the good old times I spent dialing in just single digits in MSI Afterburner.
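
For what it's worth, the coarse-then-fine search described above could be sketched like this (Python; run_stress_test() is a hypothetical placeholder for whatever stability check you use, e.g. an ATT-style error counter, and the clock ranges are just examples):

```python
# Sketch of an incremental clock search: 50 MHz coarse steps, then 1 MHz
# fine steps inside the first failing window. Hook up your own checker.

def run_stress_test(clock_mhz: int) -> bool:
    """Placeholder: return True if no errors/crashes were seen at this clock."""
    raise NotImplementedError("wire this to your stress tool of choice")

def find_cutoff(start_mhz: int = 1600, stop_mhz: int = 2100,
                coarse: int = 50, fine: int = 1) -> int:
    """Return the highest clock (in MHz) that passed the stability check."""
    last_good = start_mhz          # assumes the starting clock is known-good
    clock = start_mhz + coarse
    # Coarse pass: 1650, 1700, 1750, ... until the first failure or the cap.
    while clock <= stop_mhz and run_stress_test(clock):
        last_good = clock
        clock += coarse
    # Fine pass: creep up 1 MHz at a time inside the failing 50 MHz window.
    window_end = last_good + coarse
    clock = last_good + fine
    while clock < window_end and run_stress_test(clock):
        last_good = clock
        clock += fine
    return last_good
```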
 
Joined
Dec 24, 2008
Messages
2,062 (0.35/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, Dual Intel LAN, 10x SATA, 16x power phases.
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) modded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
The sensors have temporal resolution, which is what I'm saying. Props to Nvidia again; I used to go heavy on overclocking methods. This one, throttling near the voltage threshold limit, is the best (nothing else saves power while still at maximum performance), but the drawback is that you have to act quickly.
If only we still had @The Stilt around.

The method I would suggest on a big-die GPU is still the same: try incrementally in 50 MHz steps and see if there is a cutoff point where this behaviour starts. 1600 MHz, 1650 MHz, 1700 MHz... I'm not a metrologist (a science I highly respect), but I can at least go down to the minimum resolution (1 MHz) until the problem begins.
I used to bring in ATi Tray Tools, since most software did not come with its own error counter. I would monitor the GPU frequency time log in its overclock test and watch for the card to spit out errors in ATT (you had to select OSD error check to monitor it live in the lower corner).
It was great fun, but such old software has a habit of damaging your card when continuously running at 5000 fps, lol.
I cannot be of much other help beyond pointing out which software I used, as a frame of reference.
I hope they fix it, because it rekindles the good old times I spent dialing in just single digits in MSI Afterburner.

Well, I am not an electrical metrologist either; they respect the rules of science and never ever do overclocking. :D
My current AMD HD 5770 has an internal scanning method to determine its max OC limits all by itself.
I never cared to learn the scaling in megahertz steps.

The RTX 3000 series, and whatever follows after it, is a different animal; I sense that major OC software and utilities will not be required any more.
This new hardware is made to restrict careless handling on the users' side.
It's a new car with no gear stick and with a limiter at the allowed top speed.
Anyone who disagrees with the new reality should never get one.
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
My current AMD HD 5770 has an internal scanning method to determine its max OC limits all by itself.
I never cared to learn the scaling in megahertz steps.
Thanks for making it easier for me to give an example.
Since this is about power delivery, it has to "match" the power requirement of the normal operating behaviour.
The testing utility is good, but it doesn't test at the same temperature ranges an overclocked card can rise to, so we'll have to settle for more moderate speeds than what the utility would have us believe.
This is why I mentioned ATT: it follows the same fan and temperature curve as the normal operating behaviour.
Beyond that, this is mainly about vdroop and temperature. The way I used it was: I would start the 3D renderer, let go of the tuning a little, wait until the card reached previously noted temperature points where it destabilized, and switch to 'manual' from there (never expected ATT to sound this cool).
The leakiness brought about by temperature, and the rising power requirements due to faster shader operation, would get you to a sweet spot where this cutoff was easy to pinpoint. From there, I would play either with the fan curve or the voltage, or, on the CPU, with LLC (you couldn't pick its temperature gradient if you didn't log everything up to here). Basically, though, I find it more exciting to bust cards using this method than to use them daily, lol.
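
A loose outline of that heat-soak-first routine (Python; read_gpu_temp() and errors_detected() are hypothetical hooks into whatever monitoring tool you use, and the 78 °C threshold is invented):

```python
# Sketch: let the card heat-soak under a 3D load before judging stability,
# since leakage and shader power rise with temperature and move the cutoff.

import time

def read_gpu_temp() -> float:
    raise NotImplementedError("read from your monitoring tool of choice")

def errors_detected() -> bool:
    raise NotImplementedError("e.g. an ATT-style artifact/error counter")

def wait_for_hot_state(target_c: float = 78.0, timeout_s: float = 600.0) -> bool:
    """Poll until the card reaches the previously noted trouble temperature."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_gpu_temp() >= target_c:
            return True
        time.sleep(5)          # the 3D load keeps running in the background
    return False

# Only verdicts reached in the hot state are meaningful for this method.
if wait_for_hot_state():
    stable = not errors_detected()
```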
 