
NVIDIA GeForce GTX 680 Kepler 2 GB

EVGA's are back up on Newegg for sale, get them while they're hot!

EVGA, ZOTAC, Asus, Gigabyte
HURRY!!

.............meanwhile, hd 7970 .........$550......:(
 
It's nice to see how it competes with the 7970, yet the AMD Radeon HD 7970 overclocked to 1.70 GHz core, 8.00 GHz memory still beats the 680

1.70 GHz core?
 
That is a pity; if you had read just a little bit further, you would have seen that AMD does the same thing, and has for at least a couple of generations.



Actually, yes they do; read the rest of this review, W1z talks about it. In fact their solution is entirely driver-based, using estimates to guess power consumption and throttle cards. They've been doing it since at least the HD 6000 series.

So nVidia uses actual hardware monitors to monitor power draw and throttles based on those readings, and AMD uses software to guess power draw based on current load on the card, clock speeds, voltages, etc. and then throttles based on those calculations. I wonder which has worse latency...

Actually, you've got it completely the wrong way around. AMD uses an on-chip power controller that simply drops power according to GPU load, and DOESN'T rely on software in any way, shape or form. Nvidia uses the drivers to throttle power to the card via on-board power limiters based on pre-set limits in the drivers. That makes a world of difference in how it works, and it's why AMD's approach works and Nvidia's doesn't.

NVIDIA is using three INA219 power monitoring chips on their board, which provide the driver with power consumption numbers for the 12V PCI-Express slot and the two 6-pin PCI-E power connectors. The driver then decides whether the board is running below or above the NVIDIA-configured power limit. Depending on the result of the measurement, the dynamic overclocking algorithm will pick a new clock target roughly every 100 milliseconds (three frames at 30 FPS). AMD's power limiting system works slightly differently. It does not measure any actual power consumption but relies on an estimate based on current GPU load (not just a percentage, but taking more things into account). This makes the system completely deterministic and independent of any component tolerances, and reduces board cost. AMD uses a hardware controller integrated in the GPU to update measurements and clocks many times per frame, in microsecond intervals, independent of any driver or software.
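To picture NVIDIA's side of that, here is a rough Python sketch of the measure-and-throttle loop just described. It is only an illustration, not NVIDIA's driver code: the boost-bin size, clock bounds, and simulated power readings are my own assumptions; the three monitored rails, the ~195 W class power limit, and the ~100 ms cadence come from the description above.

```python
import random
import time

# Illustrative values only: 195 W is the GTX 680's board power class,
# 1006 MHz the base clock, ~1110 MHz the peak boost seen in reviews.
POWER_LIMIT_W = 195.0
BASE_MHZ, MAX_MHZ = 1006, 1110
STEP_MHZ = 13          # assumed size of one boost bin
INTERVAL_S = 0.100     # ~100 ms, i.e. three frames at 30 FPS

def read_board_power_w():
    """Stand-in for summing the three INA219 monitors: the 12V
    PCI-Express slot rail plus the two 6-pin power connectors."""
    return random.uniform(150.0, 210.0)   # simulated draw in watts

clock_mhz = BASE_MHZ
for _ in range(20):    # ~2 seconds of simulated "driver" time
    draw = read_board_power_w()
    if draw < POWER_LIMIT_W and clock_mhz < MAX_MHZ:
        clock_mhz += STEP_MHZ    # headroom left: raise the clock target
    elif draw > POWER_LIMIT_W and clock_mhz > BASE_MHZ:
        clock_mhz -= STEP_MHZ    # over the limit: back off
    print(f"{draw:6.1f} W -> {clock_mhz} MHz")
    time.sleep(INTERVAL_S)
```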
 
EVGA, ZOTAC, Asus, Gigabyte
HURRY!!

.............meanwhile, hd 7970 .........$550......:(

lol EVGA, Gigabyte are now sold out again... aww
 
i think its a nice idea to include the 78xx series in the benchmarks.


anyway, nice review and thanks!!!
 
lol EVGA, Gigabyte are now sold out again... aww

I sat there for about 2 minutes, hit refresh, and they were already sold out. Geez, these people are serious about their cards, haha.

On a side note, I noticed EVGA is not offering a lifetime warranty on the 680s. What's up with that?
 
Good work W1zz...
While I've been trying to check other reviews (in between doing work), several things stand out:

- Nvidia now appears to have genuinely changed the game, as GPU Boost finds untapped potential without any reviews seeing glitchiness during actual play, although most tests aren't covering real gameplay. Adaptive V-Sync is a different thing from the boost, but it has hardly been shown how it impacts FPS.

- I thought the first slide calling this a "new class of enthusiast card" because the GTX 680 is 195 W with two 6-pins was misleading, when the 7970/580 require a 6-pin and an 8-pin and have a 250 W TDP. That's more of an issue for the GTX 680: Nvidia can't or doesn't offer the old stratospheric OCs, and with no way to turn off dynamic clocking you get an algorithm that will "_try_ to respect what you'd like it to do". Those cards aren't "bad" because they permit options that aren't there for the GTX 680. A 7970 could easily make do with two 6-pins and a much lower TDP had AMD restrained what enthusiasts could expect. Just look at the power consumption: if the CSN didn't kick in under maximum load, they wouldn't be all that different.

- I see why they priced it $50 less: it basically one-ups the 7970, though that alone isn't a great wow. I think at 2560x1600 the 7970 has more value, since in the games where it matters (low FPS) the 7970 holds more of an advantage; look at Metro, AvP, both Crysis titles, Shogun, or even Skyrim, while we still need to see how bigger 3-panel resolutions perform.

- I don't really see the efficiency win, considering it comes by way of the GPU Boost "Clock-Speed-Nanny" (CSN) and a smaller chip. I thought the whole CSN was also going to work more at the CUDA-core level; I haven't found any reviews outlining such details.

- Improper case airflow will curtail your performance; most folks won't even realize it.

- There's still a bunch of things to determine here, but for most general users this is nice because it promotes plug-and-play and works with many PSUs and cases… It really idiot-proofs this "new level of enthusiast" card.

- I don't know if we'll see any price war from this; the 7970 might see more rebates, but until TSMC gets things moving there's not enough volume for either side to go running scared.

- Now Nvidia needs to keep releasing cards like this down the price structure to really make an impact. Can they continue releasing lower-cost SKUs with clock-boost PCBs/technology at the real mainstream price points? The lower-cost chip plus clock boost is what got them to eke out a win over the 7970. If they can't, this will be ho-hum in 4 months.
 
Haven't bought green team in years. Considering it!
 
Nice card, but it has that 300€ card feeling all over it. The PCB looks quite empty, the VRM is cut down, 2x 6-pins...

Now, where is my GTX 690? Doesn't matter if it's 2x GK104, I'll take that too if it's priced reasonably.
 
Impressed with the 680. NVIDIA finally figured out how to make a power-efficient high-end video card, and at a reasonable price.

But I'm still not spending $500 on a video card. If I were buying right now I'd go for a 7870 priced closer to $300. But if I were in that situation, it might be best to wait and see what the GTX 670 looks like.
 
Actually, you've got it completely the wrong way around. AMD uses an on-chip power controller that simply drops power according to GPU load, and DOESN'T rely on software in any way, shape or form. Nvidia uses the drivers to throttle power to the card based on pre-set limits. That makes a world of difference in how it works and why one works and Nvidia's doesn't.

1.) So you are admitting that AMD does the same thing then?

2.) The chip on the GPU only measures those figures, the driver then uses them to estimate power consumption and drops the clocks.

W1zzard said:
NVIDIA's system measures actual card power draw via dedicated circuitry; AMD estimates it based on GPU load percentage counters that watch the important functional units in the ASIC. Without the requirement for dedicated circuitry, AMD's solution is more cost effective because it is a software solution. Software in this context means the driver and mostly the SMC microcontroller that has been present inside AMD's GPU silicon for several generations, doing clock, fan, and thermal management, even though AMD claims changes were needed in Cayman for this system.

Again, go read the more detailed explanation that W1z gives. The chip on the GPU is nothing more than a controller that was already present on AMD's GPUs to read the data; the driver is still what does the actual calculations for power consumption and clock/voltage adjustments.
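For contrast with the measured approach, here's a minimal Python sketch of the kind of deterministic, measurement-free estimate W1z describes for AMD: power is predicted from per-unit load counters with fixed weights rather than measured, so every card computes the same number regardless of component tolerances. The unit names and weights below are invented purely for illustration.

```python
# Invented weights (watts at 100% utilization), for illustration only;
# the real controller watches the ASIC's important functional units.
UNIT_WEIGHTS_W = {"shader_array": 120.0, "memory_ctrl": 45.0, "rops": 25.0}
IDLE_W = 30.0

def estimate_power_w(activity):
    """Predict board power from per-unit load counters (0.0..1.0).
    No measurement circuitry involved, so component tolerances don't
    matter and the result is fully deterministic per workload."""
    return IDLE_W + sum(w * activity[unit] for unit, w in UNIT_WEIGHTS_W.items())

# Example: heavy shader load, moderate memory traffic.
print(estimate_power_w({"shader_array": 0.9, "memory_ctrl": 0.6, "rops": 0.5}))
```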

But I'll stop arguing with you now, because I know you know it all, and AMD definitely "HAS NEVER used a driver to forcefully control power to the card", even though I just showed you they have/do.
 
Nice card but it has that 300€ card feeling all over it. PCB looks quite empty, vrm weakened, 2x 6pins...

Now, where is my GTX 690? Doesn't matter if its 2x GK104, I take that too if its priced reasonably.

What matters is that it successfully competes with the 7970, which is more expensive; that's what sets the price, not die size. If the 7970 had been cheaper, this would be too.
 
If this is the power of GK104, I cannot wait to see what GK110 can do.
 
Finally a good product from the green side! A nice way to force AMD to drop prices. However, I am not very impressed: the card is basically already being pushed to its limit while only marginally beating a stock 7970. Well, we all know how much a 7970 can OC.

Now, AMD, drop the damn price so I can get a 7970 cheaper. :)
 
What matters is that it successfully competes with the 7970, which is more expensive; that's what sets the price, not die size. If the 7970 had been cheaper, this would be too.

I agree, I just can't justify dropping $500 for either card right now. The GPU is perfect for a dual-GPU card, so it left me drooling over the idea of a 690 already. :D
 
Reading this review was a real treat, thank you Wizz :D. An awesome review which explained every detail in depth; can't ask for more.
 
great review wizz :respect:

WTS my left kidney! anybody? j/k

great job green team but i think ima keep my gfx till next series :D
 
Anyone else notice that the 3D stock clock is 1110.5 MHz?
(GPU-Z screenshot: Gainward GTX 680 at load)
 
As for those saying the 7970 overclocks way more, take a look at this from guru3d.com:
"on average our card was managing 1250 MHz without any kind of voltage tweaking perfectly fine. The 10K 3Dmark 11 score is certainly testimony of that."

1250 WITHOUT voltage tweaking.
 
As for those saying the 7970 overclocks way more, take a look at this from guru3d.com:
"on average our card was managing 1250 MHz without any kind of voltage tweaking perfectly fine. The 10K 3Dmark 11 score is certainly testimony of that."

1250 WITHOUT voltage tweaking.

Because it automatically does that for you...
 
Because it automatically does that for you...

I don't know if I got it wrong, but that's what I understood after reading this:

"Interesting to see is that the feature maintains itself while overclocking. We overclocked the card to roughly 1250 MHz and even then the Dynamic Clock Adjustment technology kicks in, but here's where it will often clock down a little bit. overall though it did not hinder the overclocking experience and on average our card was managing 1250 MHz without any kind of voltage tweaking perfectly fine. The 10K 3Dmark 11 score is certainly testimony of that."

So, if they voltage-tweak the card, I think 1350 MHz would be attainable.
 
1.70 GHz core?

Yeah, the GTX 680 did 1.8 GHz on LN2.

Clocks like that don't matter for about 90% of people, though. The most common max clocks on these cards seem to be 1200 MHz to 1300 MHz.

There are some variables you have to consider when comparing overclocked cards, mainly whether it's a cherry-picked card or not; most cards won't overclock the same.

Here's a screenshot EVGA posted on Facebook of the 680 doing 1400 MHz. Also, I think Nvidia could, if it's possible, include an option in the drivers to turn off the dynamic clocking. It's just a thought.

(EVGA screenshot: GTX 680 at 1400 MHz on air)
 
Dear W1zzard,

You are really doing great tests; the performance summary and overall work are awesome. My only concern is the 3DMark 11 tests: it would be much better to see the score in the default Performance preset, because these FPS numbers mean nothing to me. I can't compare them with my own and other systems.
 