There is no "works best", because each card/GPU WILL perform differently depending on the quality of the die it got. One thing needs to be taken into account for this button to work at its best, though: as time passes, silicon needs extra voltage over the minimum required at manufacture so that it doesn't become unstable during the warranty period (i.e. "broken" to most people).
Up to this point, what AMD/Intel/nVidia/every semiconductor company has done is apply a voltage offset. They ask: what is required for the chip to stay stable while running at its peak temperature before throttling (usually 100°C)? Say 1.000V. So what does the voltage get set to? 1.100V. This ensures that at month six, when 1.007V is required, an RMA isn't happening.
Instead of doing this, there is no reason why voltage can't be made to increase over time based on hours powered on, temperature, and utilization. To keep things REALLY simple, they could even just go by time since manufacture and be really conservative with the ramp-up - there would still be a span of roughly two years over which to ramp from 1.000V to 1.100V - and TONNES of power would be saved worldwide, even from that.
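To illustrate, here is a minimal sketch of such an aging-aware ramp. Everything in it is a hypothetical assumption of mine (the linear ramp, the two-year window, the numbers) and not any vendor's actual firmware:

```python
# Hypothetical sketch: aging-aware voltage offset instead of a fixed guard band.
# All names and numbers are illustrative assumptions, not a real vendor API.

V_MIN_AT_MANUFACTURE = 1.000        # volts needed for stability on day one
V_GUARD_BAND_TODAY   = 1.100        # what a static offset sets from day one
RAMP_HOURS           = 2 * 365 * 24 # conservative two-year ramp window

def voltage_for_age(hours_powered_on: float) -> float:
    """Linearly ramp from the day-one requirement up to the full
    guard band over RAMP_HOURS, then hold there."""
    fraction = min(hours_powered_on / RAMP_HOURS, 1.0)
    return V_MIN_AT_MANUFACTURE + fraction * (V_GUARD_BAND_TODAY - V_MIN_AT_MANUFACTURE)

if __name__ == "__main__":
    for months in (0, 6, 12, 24, 36):
        hours = months * 30 * 24
        print(f"month {months:2d}: {voltage_for_age(hours):.3f} V")
```

Even this crude ramp already covers the 1.007V needed at month six (it's at ~1.025V by then), while spending far less excess voltage than a flat 1.100V for the first two years.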
Manufacturer(s) would get sued for not delivering on marketing promises, because clock speeds/performance would no longer be at the level claimed for all cards sold under the same name.
The offset story is only partially true: they simply use the highest stable frequencies tested at the highest voltage "rated" by TSMC/Samsung/etc. (because the guys in marketing like bigger numbers).
My solution:
Make cards with max boost limited to a lower frequency (1.5-2GHz) and a voltage of 0.8-0.85V, with additional V/F options available through an "OC" mode that can be enabled via a driver option/AIB program/3rd-party OC tool. Maybe display an appropriate warning about the serious implications (longevity/thermals/noise/long-term performance/etc.) BEFORE those higher-rated modes are used (or just put an "Uber" vBIOS switch on the PCB?).
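As a thought experiment, this is what that two-mode policy could look like. The mode names, clock/voltage numbers, and the warning flow here are my own assumptions, not any shipping driver's behaviour:

```python
# Hypothetical illustration of the proposed "efficient default + opt-in OC" policy.
# None of this reflects a real driver or vBIOS interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class VFMode:
    name: str
    max_boost_mhz: int
    max_voltage_v: float
    warn: bool  # show longevity/thermal/noise warning before enabling?

MODES = {
    "default": VFMode("default", max_boost_mhz=1800, max_voltage_v=0.85, warn=False),
    "oc":      VFMode("oc",      max_boost_mhz=2600, max_voltage_v=1.10, warn=True),
}

def select_mode(name: str, user_acknowledged: bool = False) -> VFMode:
    mode = MODES[name]
    if mode.warn and not user_acknowledged:
        raise RuntimeError(
            "OC mode has longevity/thermal/noise implications - "
            "acknowledge the warning before enabling it."
        )
    return mode

# The card ships locked to the efficient point; OC requires an explicit opt-in.
print(select_mode("default"))
print(select_mode("oc", user_acknowledged=True))
```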
BTW: ALL cards with NV Boost 3.0 (i.e. everything since Pascal) are capable of this, but NV simply likes to "push things to 11", because... reasons.
Proof?
ALL of the GPUs mentioned contain stable, manufacturer-tested lower frequency/voltage combinations that a particular card can use. They're in the V/F table, and you can "lock" the GPU to any of them via Afterburner (for example).
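The same kind of lock can also be scripted outside Afterburner. As a sketch, NVML (via the pynvml bindings) exposes the driver's supported clock list and a clock-lock call; the 300-1500MHz range below is an arbitrary example that depends on the card, and locking only pins clocks - the driver then picks the matching (lower) voltage point from its V/F table:

```python
# Sketch: query the driver's supported clock points and lock to a lower one
# using NVML via the pynvml bindings (pip install nvidia-ml-py).
# Requires admin/root and a Volta-or-newer GPU for the clock lock;
# the 300-1500 MHz range below is an arbitrary example.

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Each supported memory clock has its own list of valid graphics clocks -
# these are the manufacturer-tested combinations mentioned above.
for mem_clock in pynvml.nvmlDeviceGetSupportedMemoryClocks(handle):
    gfx_clocks = pynvml.nvmlDeviceGetSupportedGraphicsClocks(handle, mem_clock)
    print(f"mem {mem_clock} MHz -> {len(gfx_clocks)} graphics clock points")

# Pin the GPU to a conservative range; the driver applies the matching
# (lower) voltage from its V/F table automatically.
pynvml.nvmlDeviceSetGpuLockedClocks(handle, 300, 1500)

# ... run your workload ...

pynvml.nvmlDeviceResetGpuLockedClocks(handle)  # restore default boost behaviour
pynvml.nvmlShutdown()
```

The command-line equivalent is `nvidia-smi -lgc 300,1500` (and `nvidia-smi -rgc` to reset).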