
NVIDIA Rumored To Launch RTX 3090 SUPER in January 2022

As long as the prices remain outrageous, all GPU releases are just cool stories to me.
Hi,
Yeah, only cool if you have a Best Buy that sells them nearby too lol
Otherwise you'll be on a road-trip camp-out.
 
Hi,
Yeah, only cool if you have a Best Buy that sells them nearby too lol
Otherwise you'll be on a road-trip camp-out.
Or you can decide to be happy with what you have. There's lots of folks out there crying with 980s and 1070s whose only problem is not getting their two-yearly shiny new GPU fix that they actually don't need.
 
All high-end 3090 SKUs (Asus Strix, MSI Suprim X, Zotac Holo AMP, etc.) already use 420 W+ at stock :roll:
And here I am (as are many others) undervolting for massive efficiency gains. I've gotten to around 9% below stock performance for 30% less energy used, or roughly stock performance at about 20% less energy.
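For anyone wondering what that works out to in perf-per-watt terms, here's a quick back-of-the-envelope check using those rough percentages (treating the energy figure as a proxy for power draw; nothing measured beyond what's quoted above):

```python
# Rough perf-per-watt comparison using the approximate figures above
stock_perf, stock_power = 1.00, 1.00   # normalised stock baseline
uv_perf, uv_power = 0.91, 0.70         # ~9% less performance for ~30% less power

gain = (uv_perf / uv_power) / (stock_perf / stock_power) - 1
print(f"Undervolted perf-per-watt gain: {gain:.0%}")   # roughly +30%
```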
 
Jensen, you suck, you rat; you're fortunate we were born in the late 1990s.
 
And here I am (as are many others) undervolting for massive efficiency gains. I've gotten to around 9% below stock performance for 30% less energy used, or roughly stock performance at about 20% less energy.

And cap FPS too and watch the 3090 sip power like a mouse :D
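The reason a frame cap helps is simply that the GPU finishes each frame early and then idles for the rest of the frame budget. A toy loop to illustrate the idea (the render() stand-in and its timings are made up, not taken from any real game):

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS        # ~16.7 ms per frame

def render():
    """Hypothetical stand-in for the real GPU work on one frame."""
    time.sleep(0.006)                  # pretend a frame takes ~6 ms to draw

for _ in range(600):                   # run ~10 seconds' worth of frames
    start = time.perf_counter()
    render()
    # Sleep away the unused budget; the GPU would sit near idle in this gap,
    # which is where the power saving from an FPS cap comes from.
    spare = FRAME_BUDGET - (time.perf_counter() - start)
    if spare > 0:
        time.sleep(spare)
```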
 
And here I am (as are many others) undervolting for massive efficiency gains. I've gotten to around 9% below stock performance for 30% less energy used, or roughly stock performance at about 20% less energy.
Fun facts with my RTX 2070:
A power target of 71% (125 W TDP instead of 175 W) gives me around -7% performance compared to stock, which is barely noticeable even with an on-screen FPS counter.
A power target of 114% (200 W TDP instead of 175 W) gives me no observable or measurable performance gain, only more heat and noise.

This makes me question manufacturers' choice of TDP. Does our new PC gaming world really revolve around that extra 1% so much? :confused:
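For what it's worth, that kind of cap can also be applied outside Afterburner-style percentage sliders. A minimal sketch using nvidia-smi, which takes an absolute wattage rather than a percentage (needs admin/root, and only accepts limits within the range the card reports); the 175 W figure is just the reference TDP from the post above:

```python
import subprocess

BOARD_POWER_W = 175    # RTX 2070 reference TDP quoted above
TARGET_PCT = 71        # the 71% power target used above

limit_w = round(BOARD_POWER_W * TARGET_PCT / 100)   # ~124 W

# Show the card's current/min/max power limits, then apply the new cap.
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)
subprocess.run(["nvidia-smi", "-pl", str(limit_w)], check=True)
```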
 
This makes me question manufacturers' choice of TDP. Does our new PC gaming world really revolve around that extra 1% so much? :confused:
Yes :)
And much more so if the competitor has that 1%.
If you want a specific recent example, look at the 6900XT "XTXH" cards: roughly 30% more power (about 100 W) than the regular 6900XT for 5-7% more performance.
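Put in perf-per-watt terms, using those rough figures (nothing exact here, just the approximate numbers above):

```python
# Rough perf-per-watt hit for the "XTXH" example above (approximate figures)
base_perf, base_power = 1.00, 1.00     # normalised regular 6900 XT
xtxh_perf, xtxh_power = 1.06, 1.30     # ~+6% performance for ~+30% power

change = (xtxh_perf / xtxh_power) / (base_perf / base_power) - 1
print(f"perf-per-watt change: {change:+.0%}")                       # about -18%
print(f"extra watts per extra % of performance: {100 / 6:.0f} W")   # ~17 W
```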

Clocking and targeting GPUs beyond the sweet spot on the efficiency curve has been an AMD thing for a long while. With Turing, and especially with Ampere, Nvidia followed suit.
Lowering the power target, as in your example, shows that quite clearly.

Undervolting today is a much more complex question. Generally undervolting guides for something like Ampere simply set the maximum voltage and frequency and are done with that. Actual undervolting seems to be a smaller part of the process.
 
Why? At least with non-LHR you could make some money back; with LHR you still pay the same amount but for an inferior product.
Gamers will be gaming, not mining. At this point you will earn peanuts from GPU mining unless you have a lot of GPUs that run 24/7 using custom firmware anyway.

You might as well buy upcoming coins and hope they explode instead of wasting electricity. That's what I did. I didn't mine for one minute, yet I've earned a lot of money from Bitcoin, Ethereum and Dogecoin so far. However, it's a gamble, and there are many ways to make money.
 
Yes :)
And much more so if the competitor has that 1%.
If you want a specific recent example, look at the 6900XT "XTXH" cards: roughly 30% more power (about 100 W) than the regular 6900XT for 5-7% more performance.
That's what I call a waste. Something's seriously wrong with the gaming industry from this perspective.
 
This makes me question manufacturers' choice of TDP. Does our new PC gaming world really revolve around that extra 1% so much?
Yeah, it seems like a combination of things: going for those last few % to beat out a competitor, or to advertise larger gen-on-gen increases. Also, the frequency/voltage table needs to be suitable for 100% of cards that come off the production line, which pushes them to use an even higher voltage per frequency to guarantee stability.
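To picture why "suitable for 100% of cards" pushes the shipped voltage up, here's a toy illustration (every number in it is made up): the stock V/F table has to cover the worst sampled chip at each frequency plus a safety margin, so anything better than worst-case ships with undervolting headroom built in.

```python
# Toy worst-case-binning illustration; every voltage here is a made-up number.
# Millivolts each sampled chip needs to hold, say, 1900 MHz stable:
sampled_chips_mv = [820, 835, 850, 870, 905]

GUARDBAND_MV = 25    # extra margin for aging, temperature swings, weak PSUs...
shipped_mv = max(sampled_chips_mv) + GUARDBAND_MV   # 930 mV for every card

for needed_mv in sampled_chips_mv:
    print(f"chip needs {needed_mv} mV, ships at {shipped_mv} mV "
          f"-> {shipped_mv - needed_mv} mV of undervolt headroom")
```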

Overclocking used to be a lot of fun, getting 15, 20, 25%+ out of CPUs and GPUs with little effort. Nowadays the vendors push them pretty much to the limit of their capability, and you're fairly lucky to get 10-12% extra without extreme measures like LN2.

So I scratch my tweaking / overclocking / optimizing itch by undervolting, keeping the most performance I can (perhaps even a few % points more) while drastically reducing power draw and pumping up efficiency to fun heights.

I'd love to see a comparison of Pascal vs Turing vs Ampere, all stock vs all undervolted, to get a glimpse of stock efficiency vs UV efficiency gains gen-to-gen. It would be bloody hard to pull off though, silicon lottery and all.
 
Yeah, it seems like a combination of things: going for those last few % to beat out a competitor, or to advertise larger gen-on-gen increases. Also, the frequency/voltage table needs to be suitable for 100% of cards that come off the production line, which pushes them to use an even higher voltage per frequency to guarantee stability.
Or they could drop a few MHz from the boost table to be able to decrease the voltage significantly, resulting in much lower power consumption at the cost of a few frames per second. But maybe it's just me - my e-peen doesn't grow in correlation with the extra 1% performance squeezed out of my hardware.

Overclocking used to be a lot of fun, getting 15, 20, 25%+ out of CPUs and GPUs with little effort. Nowadays the vendors push them pretty much to the limit of their capability, and you're fairly lucky to get 10-12% extra without extreme measures like LN2.
This is the side of it that I'm actually happy about. Getting basically maximum performance right out of the box is brilliant. :) I love building PCs, but I also love it when they just work.

I'd love to see a comparison of Pascal vs Turing vs Ampere, all stock vs all undervolted, to get a glimpse of stock efficiency vs UV efficiency gains gen-to-gen. It would be bloody hard to pull off though, silicon lottery and all.
I don't know about UV, but at stock, there's very little change imo - the 1080, 2070 and 3060 all perform similarly (if you don't count RT and DLSS), consuming about the same amount of power.
 
Or they could drop a few MHz from the boost table to be able to decrease the voltage significantly, resulting in much lower power consumption at the cost of a few frames per second. But maybe it's just me - my e-peen doesn't grow in correlation with the extra 1% performance squeezed out of my hardware.

4K gaming just puts a higher load on the core, so the frequency/voltage ends up on the lower part of the perf/power curve.
[image: NVIDIA Max-Q perf/power efficiency curve]


For example, at 1440p with the stock power limit, my core clock is at 2145 MHz/1.0 V, but at 4K with the stock power limit the core ends up at 1920 MHz/850 mV. At this V/F point, performance still scales almost linearly with power, and it's a waste not to give the core some more power to work with.
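As a rough sanity check on how much "cheaper" that 4K operating point is per clock cycle, assuming the usual dynamic-power approximation P ≈ C·f·V² (which ignores leakage, VRAM and everything else on the board):

```python
# Rough dynamic-power ratio between the two V/F points above, assuming core
# power scales roughly with f * V^2 (ignores leakage, VRAM and the rest of the board)
f_1440p, v_1440p = 2145, 1.000    # MHz, V
f_4k, v_4k = 1920, 0.850

ratio = (f_4k / f_1440p) * (v_4k / v_1440p) ** 2
print(f"4K point: ~{ratio:.0%} of the 1440p point's dynamic core power")   # ~65%
```

Which lines up with the post: the heavier 4K load drags the core down to a point on the curve where extra watts still buy nearly proportional performance.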
 
4K gaming just puts a higher load on the core, so the frequency/voltage ends up on the lower part of the perf/power curve.

For example, at 1440p with the stock power limit, my core clock is at 2145 MHz/1.0 V, but at 4K with the stock power limit the core ends up at 1920 MHz/850 mV. At this V/F point, performance still scales almost linearly with power, and it's a waste not to give the core some more power to work with.
Interesting. I wonder why that is.
 
Interesting. I wonder why that is.
An educated guess would be that the balance of load across different units or subunits changes with resolution (among other things), and something that's used more heavily at 4K runs at a higher power cost. The exact culprit is probably heavily load-dependent.
 
Will I be able to pick up a 3080 for $650 like I did my 1080 Ti in 2018? It makes no difference what they release when there's nothing we can buy at reasonable prices.
 