
NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power

Joined
May 8, 2016
Messages
1,909 (0.61/day)
System Name BOX
Processor Core i7 6950X @ 4,26GHz (1,28V)
Motherboard X99 SOC Champion (BIOS F23c + bifurcation mod)
Cooling Thermalright Venomous-X + 2x Delta 38mm PWM (Push-Pull)
Memory Patriot Viper Steel 4000MHz CL16 4x8GB (@3240MHz CL12.12.12.24 CR2T @ 1,48V)
Video Card(s) Titan V (~1650MHz @ 0.77V, HBM2 1GHz, Forced P2 state [OFF])
Storage WD SN850X 2TB + Samsung EVO 2TB (SATA) + Seagate Exos X20 20TB (4Kn mode)
Display(s) LG 27GP950-B
Case Fractal Design Meshify 2 XL
Audio Device(s) Motu M4 (audio interface) + ATH-A900Z + Behringer C-1
Power Supply Seasonic X-760 (760W)
Mouse Logitech RX-250
Keyboard HP KB-9970
Software Windows 10 Pro x64
One thing needs to be taken into account to make this button work its best, though: as time passes, extra voltage over the minimum required at manufacture is needed, so that chips don't become unstable (i.e. "broken" to most people) during their warranty periods.

Up to this point, what AMD/Intel/NVIDIA/all semiconductor companies have done is use a voltage offset. They ask: what does the chip need for stability while running at its peak temperature before throttling (usually 100°C)? 1.000V. So what is its voltage set to? 1.100V. This ensures that at month 6, when 1.007V is required, an RMA isn't happening.

Instead of doing this, there is no reason why voltage can't be made to increase over time as a function of hours powered on, temperature, and utilization. To keep things REALLY simple, they could even just go by time since manufacture and be really conservative with the ramp-up - there would still be a span of about two years over which they could ramp from 1.000V to 1.100V - and TONNES of power would be saved worldwide, even from that.
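Rough numbers on that last claim, for what it's worth - a minimal sketch assuming the usual approximation that dynamic switching power scales with f·V² (the 1.000V/1.100V figures are just the ones from the example above):

```python
# Dynamic switching power scales roughly with f * V^2, so at the same clock,
# running at the 1.000 V a fresh chip actually needs instead of the shipped
# 1.100 V saves roughly:
v_needed = 1.000   # volts required for stability at day 0 (example above)
v_shipped = 1.100  # volts shipped with today's fixed safety offset
saving = 1 - (v_needed / v_shipped) ** 2
print(f"~{saving:.0%} less dynamic power at the same frequency")  # roughly 17%
```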
There is no "work best", because each card/GPU WILL have different performance based on the quality of the die it got.
Manufacturers would be sued for not delivering on marketing promises, because clock speeds/performance wouldn't be at the level claimed for all cards of the same name.

The offset part is only partially true: they simply use the highest stable frequencies tested at the highest voltage "rated" by TSMC/Samsung/etc. (because the guys in marketing like bigger numbers).

My solution:
Make cards with max boost limited to a lower frequency (1.5-2GHz) and a voltage of 0.8-0.85V, with additional V/F options available through an "OC" mode that can be enabled via driver_option/AIB_program/3rd-party_OCtool. Maybe display an appropriate warning about the massive implications (for longevity/thermal/noise/long-term performance/etc.) BEFORE those higher-rated modes are used (or just do an "Uber" vBIOS switch on the PCB?).

BTW: ALL NV Boost 3.0 cards (i.e. since Pascal) are capable of this, but NV simply likes to "push things to 11", because... reasons.
Proof?
ALL GPUs mentioned contain stable, manufacturer-tested lower frequency/voltage combinations that a particular card can use. They're in the V/F table, and you can "lock" the GPU to any of them via Afterburner (for example).
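Afterburner does this through its V/F curve editor. For anyone who prefers the command line, a rough analogue (not the same thing, just an illustration of pinning a card to a tamer operating point) is NVIDIA's locked-clock and power-limit controls in nvidia-smi; the clock range and wattage below are made-up examples and both commands need admin rights:

```python
import subprocess

# Sketch: pin the GPU to a conservative clock range and cap board power.
# Values are illustrative, not recommendations for any particular card.
subprocess.run(["nvidia-smi", "-lgc", "1500,1800"], check=True)  # lock GPU clocks (MHz)
subprocess.run(["nvidia-smi", "-pl", "220"], check=True)         # power limit (W)

# To undo the clock lock later:
# subprocess.run(["nvidia-smi", "-rgc"], check=True)
```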
 
Joined
Sep 13, 2021
Messages
86 (0.07/day)
Exactly. Unless there's some new earth-shattering technological breakthrough with regard to energy, high performance + cheap will never be possible. Otherwise, we would all be driving sports cars & using high-end computers that get 120+FPS on a 16K display at the cost of pennies per day for the energy consumed. That would be something like doing work on the scale of small nuclear fusion/fission in your own house or car without having to put the energy in. Solar is about as good as it gets, but the costs become astronomical when you consider the hardware & recurring costs. The laws of physics have us by the balls. :(
Psychology. People always want more. If energy were free, clean, and renewable, the freed-up resources would flow into more complex hardware, which would deplete other resources and have the same ecological impact. Diogenes is credited with saying: happy is he who has no needs. He probably lived around 400 BC, in a barrel in Athens, without clothes, living on the leftovers of others. He ridiculed the life of the Athenians, who wasted all their time satisfying needs, most of which seemed useless to him. The word cynic goes back to him.
Back to the topic:
As I mentioned, computing power per watt improves from generation to generation, by about 50-100%, while real performance improves 20-50%. Whoever wants to save money and energy buys a cheap entry-level CPU, limits boost, buys an entry-level GPU, limits boost/undervolts, and plays games that are 4-5 years old. Or buys a notebook with an RTX 3060/3070 Mobile and plugs in an HD or 4K monitor, done. Or plays on a console or smartphone. It's up to everyone, that's freedom. Taxing energy is easier and fairer. No need for more bans and more regulation.
 
Joined
Jan 21, 2021
Messages
67 (0.05/day)
There is no "work best"...


My solution:
Make cards with max boost limited to a lower frequency (1.5-2GHz) and a voltage of 0.8-0.85V, with additional V/F options available through an "OC" mode that can be enabled via driver_option/AIB_program/3rd-party_OCtool. Maybe display an appropriate warning about the massive implications (for longevity/thermal/noise/long-term performance/etc.) BEFORE those higher-rated modes are used (or just do an "Uber" vBIOS switch on the PCB?).

BTW: ALL NV Boost 3.0 cards (i.e. since Pascal) are capable of this, but NV simply likes to "push things to 11", because... reasons.
Proof?
ALL GPUs mentioned contain stable, manufacturer-tested lower frequency/voltage combinations that a particular card can use. They're in the V/F table, and you can "lock" the GPU to any of them via Afterburner (for example).

edit/aside: After a bit, it occurred to me that you might have thought I was suggesting more power. I'm not - you'll understand after you read my post below. Personally, a lot of the time I run my 3080 at 0.900V and 1800MHz with no boost. During the most demanding scenes I get crashes if the clock is set much past 1850MHz at this power-saving (but probably not completely optimal) 0.900V - a state made possible by Asus's GPU Tweak II and its "Always 3D clocks" button. If Gigabyte shipped my card as I run it currently, it might last a year or a bit more of heavy use, but then you'd be looking at instability and an RMA.

Since Intel's introduction of Turbo Boost, even chips of the same type have had different VIDs. I believe GPUs are the same; if not, they need to catch up (and so they will), since they draw so much more power than CPUs that it's a must. First, a bit of background for those who don't already know (look up "CPU VID" for more depth than what I describe next, which is just the minimum needed to understand the last part of this post): VIDs are voltages in a table that the CPU gives to the motherboard so it supplies the right voltage at every operating frequency the chip is designed to run - all base clock and turbo frequencies get the right voltages. Frequencies only reachable with overclocking are not covered.
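To make that concrete, here's a toy illustration of the idea (the table is invented for the example, not real VID data from any chip):

```python
# Toy VID table: operating frequency (MHz) -> voltage (V) the chip requests.
# All values are made up for illustration.
vid_table = {
    800: 0.70,    # lowest idle state
    3300: 1.05,   # base clock
    3700: 1.16,   # turbo
}

def voltage_for(freq_mhz: int) -> float:
    """Return the VID of the highest defined frequency not above freq_mhz."""
    eligible = [f for f in vid_table if f <= freq_mhz]
    return vid_table[max(eligible)] if eligible else min(vid_table.values())

print(voltage_for(3700))  # -> 1.16
```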

I'm talking about a small modification to this standard, nothing else - the mode of operation is already in place. All that needs to be done is to start operation (day 0) with voltages set just above the absolute minimum required for stability (which manufacturers already determine in order to set the VIDs the way they do currently - too high). From that day forward, on a monthly basis, using a formula with well known and defined variables, very small voltage increases would be applied to all the voltages in the table to ensure reliable operation is maintained over the product's service life. How much lower could voltages be set at the beginning? Well, like you said, all chips are different. An extreme example for you: my old 2500K had a VID of 1.416V at its 3700MHz turbo boost. With that voltage it's Prime95 small-FFT stable indefinitely at 4800MHz! With just 1.25V it's Prime stable at 4300MHz! I think it needs 1.10V for 3700MHz. That's 0.3V! I don't think most 2nd-gen Intel chips have a 0.3V discrepancy, but I bet it's 0.2V when new, and after a decade of continual heavy use at high temps, definitely still less than 0.1V. Conversely, my 9600K's VID of 1.16V is only 0.07V above what's needed. Maybe degradation is slower at the lower voltages of newer manufacturing processes - I don't know, and nobody but Intel does, because it's proprietary (and I hate that). But all is not lost! We know the margins are lower now and chips are lasting just as long, so it has to be something.
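A minimal sketch of what that monthly schedule could look like, assuming a simple linear ramp (the 24-month span and ~100mV of total headroom are the figures used earlier in this thread, not anything a vendor has published):

```python
# Sketch only: instead of shipping the full safety offset on day 0, add a small,
# scheduled bump to every VID each month. All numbers are illustrative.
DAY0_MARGIN_V = 0.010      # small cushion above the measured stability floor
TOTAL_HEADROOM_V = 0.100   # the offset vendors ship up front today
RAMP_MONTHS = 24           # months over which to spread that offset

def aged_vid(stable_floor_v: float, months_in_service: int) -> float:
    """Voltage to request a given number of months after manufacture."""
    ramp = min(months_in_service, RAMP_MONTHS) / RAMP_MONTHS
    return stable_floor_v + DAY0_MARGIN_V + (TOTAL_HEADROOM_V - DAY0_MARGIN_V) * ramp

for month in (0, 6, 12, 24):
    print(month, round(aged_vid(1.000, month), 3))  # ramps from 1.01 V to 1.10 V
```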

We don't know all of the variables, or the exact values of the variables we've inferred the existence of through observation, but we don't need to! All that's needed for success is for the method I described to get back to the people who do know them! It'll help them with temperatures, power density, and a bunch of other stuff I don't have to get into. So they should do it.

Environmentally, worldwide, a LOT - and I mean A LOT - of power would be saved.

How much is a lot?

Well, as I've posited, a lot of the information isn't easy to come by, but even in the near term it's a ridiculously high number: tens of millions of devices - laptops/desktops/servers - could be improved. Even if average power consumption were reduced by just 10% during the first 3 years of service, that's a LOT. Someone who came up with a similarly effective strategy to help the environment might be collecting prizes and throwing peace signs. Someone (I'm looking at you, random NVIDIA/Intel/AMD mid-level forum spies) should pass this up the chain. I'm not a climate activist or anything, but I don't like waste - especially waste that's completely unnecessary.
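Purely to put a number on "a LOT" - every figure below is an assumption chosen for illustration, not sourced data:

```python
# Back-of-envelope only; all inputs are assumptions.
devices = 50_000_000        # "tens of millions" of laptops/desktops/servers
avg_power_w = 60            # assumed average draw per device while in use
hours_per_day = 8           # assumed daily use
saving_fraction = 0.10      # the 10% reduction discussed above

kwh_per_year = devices * avg_power_w * hours_per_day * 365 * saving_fraction / 1000
print(f"~{kwh_per_year / 1e9:.1f} TWh saved per year under these assumptions")
```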
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
Just a low-level and/or mid-level power-savings BIOS switch (relative to reference design clocks) on all cards would do wonders to save a ton of additional power. It wouldn't really cost much more if they just scrapped one of the numerous display outputs in favor of the better environmental option. Most cards have 3-4 display outputs as it is, and with today's panels not many people run triple-display setups anymore, since 4K and ultrawide are more appealing to most - bezels kill immersion and are distracting compared to a seamless image.

Something like 25% to 50% power-savings options, with better efficiency tuning of voltages, since you can tighten voltage a bit when you drop frequency and aren't boosting as high. In fact, you can drop boost speeds by about 10% and bump base clock speeds up 10% and come out ahead on efficiency, because you can also reduce voltage further when you reduce boost speeds. That minimizes the performance deficit while still leveraging the higher efficiency to a good extent. You're basically shifting the two voltage goal posts in exchange for a performance/efficiency trade-off.
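A rough sanity check on that trade-off, using the usual P ∝ f·V² approximation for dynamic power (the clock and voltage numbers are assumed, just to show the shape of the math):

```python
# Rough dynamic-power comparison, P ~ f * V^2; all numbers are illustrative.
def rel_power(freq_mhz: float, volts: float) -> float:
    return freq_mhz * volts ** 2

stock = rel_power(2000, 1.05)   # assumed stock boost point
tuned = rel_power(1800, 0.90)   # ~10% lower boost at a lower voltage

print(f"~{1 - tuned / stock:.0%} less dynamic power for ~10% lower boost clock")
```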
 
Joined
Jan 27, 2015
Messages
454 (0.13/day)
System Name Marmo / Kanon
Processor Intel Core i7 9700K / AMD Ryzen 7 5800X
Motherboard Gigabyte Z390 Aorus Pro WiFi / X570S Aorus Pro AX
Cooling Noctua NH-U12S x 2
Memory Corsair Vengeance 32GB 2666-C16 / 32GB 3200-C16
Video Card(s) KFA2 RTX3070 Ti / Asus TUF RX 6800XT OC
Storage Samsung 970 EVO+ 1TB, 860 EVO 1TB / Samsung 970 Pro 1TB, 970 EVO+ 1TB
Display(s) Dell AW2521HFA / U2715H
Case Fractal Design Focus G / Pop Air RGB
Audio Device(s) Onboard / Creative SB ZxR
Power Supply SeaSonic Focus GX 650W / PX 750W
Mouse Logitech MX310 / G1
Keyboard Logitech G413 / G513
Software Win 11 Ent
A lot less power than was rumored a couple of months ago, but it's still a no-go for me. Anything more than 250W is simply absurd, especially with current electricity prices in the UK.
 
Joined
Jan 21, 2021
Messages
67 (0.05/day)
Hopefully it's true - a lot of pointless power consumption would be curtailed. 20% less power for 3% fewer frames sounds good to me! There still need to be options for people who want to overclock, though. It's annoying that my 3080, no matter what I do, won't draw more than ~370W - its average with FurMark is just over 360W.
It's not really a problem right now because I run 1850MHz at 0.900V, but in the future, when it's no longer a top-end card? I'm sure I'll want to overclock a bit more.

edit: I wasn't even thinking of the high electricity prices! That must be why they're doing it... Maybe they'll have a button to clock it like a mobile GPU too!
 
Joined
Apr 3, 2013
Messages
50 (0.01/day)
Is there any chance some models of the 4080 will have an ATX 2.0 power interface? I don't want to buy a new PSU.
 