Wednesday, March 7th 2012
![NVIDIA](https://tpucdn.com/images/news/nvidia-v1739475473466.png)
GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc.
Here are some key bits of information concerning the upcoming GeForce GTX 680, a performance single-GPU graphics card based on NVIDIA's 28 nm GK104 GPU. The information, at face value, is credible, because a large contingent of the media that covers the GPU industry is attending the Game Developers Conference, where it could interact with NVIDIA on the sidelines. The source, however, is citing people it spoke to at CeBIT.
First, and most interesting: with some models of the GeForce 600 series, NVIDIA will introduce a load-based clock speed-boost feature (think: Intel Turbo Boost), which steps up the clock speeds of the graphics card when it is subjected to heavy loads. If there's a particularly demanding 3D scene for the GPU to render, it overclocks itself and sees the scene through. This ensures higher minimum and average frame-rates.
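To make the idea concrete, here's a minimal sketch of what such load-based clock stepping could look like, in Python for brevity. The thresholds and clock values are invented for illustration; NVIDIA hasn't disclosed how the feature actually works:

```python
# Illustrative only: a Turbo Boost-style clock step table keyed on GPU load.
# Every number below is invented for the example; NVIDIA's real steps are unknown.
CLOCK_STEPS_MHZ = [
    (0.50, 700),   # below 50% load: low-power 3D clock
    (0.85, 1006),  # normal 3D load: base clock
    (1.01, 1058),  # heavy load: boosted clock
]

def select_clock(gpu_load: float) -> int:
    """Map measured GPU utilization (0.0 to 1.0) to a target core clock."""
    for threshold, clock_mhz in CLOCK_STEPS_MHZ:
        if gpu_load < threshold:
            return clock_mhz
    return CLOCK_STEPS_MHZ[-1][1]

print(select_clock(0.30))  # 700  MHz - light load
print(select_clock(0.70))  # 1006 MHz - typical 3D load
print(select_clock(0.97))  # 1058 MHz - boosts to see the scene through
```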
Second, you probably already know this, but the GK104 does indeed feature 1,536 CUDA cores, which lend it strong number-crunching muscle that helps with shading, post-processing, and GPGPU.
Third, the many-fold increase in CUDA cores doesn't necessarily amount to a linear increase in performance compared to the previous generation. The GeForce GTX 680 is about 10% faster than the Radeon HD 7970 in Battlefield 3. In 3DMark 11, however, the GTX 680 is slower than the HD 7970.
Fourth, the NVIDIA GeForce GTX 680 will indeed launch this month. It won't exactly be a paper-launch: small quantities will be available for purchase, though only through select AIC partners. Quantities will pick up in the following months.
Fifth, there's talk of GK107, a mid-range GPU based on the Kepler architecture, being launched in April.
Next up, NVIDIA is preparing a dual-GPU graphics card based on the GK104. It is slated for May, with NVIDIA using the GPU Technology Conference (GTC) as its launch-pad.
Lastly, GK110, the crown-jewel of the Kepler GPU family, will feature as many as 2,304 CUDA cores. There's absolutely no word on its whereabouts. The fact that NVIDIA is working on a dual-GK104 graphics card indicates that we won't see this chip very soon.
Source:
Heise.de
105 Comments on GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc.
Maybe I'm paranoid.... @.@
But here's me thinking… what happened, or is happening, with GK110? Why so late? If GK104 came out this great, why not follow up with a GK110 "death blow" at any price? Or is it not working out right? How can a bigger die not be working; can't they correct it? ...
Are they giving AMD time to engineer and release a re-spin? Something doesn't make sense here; I mean, is it that revolutionary in size, performance, efficiency, and price… and they just aren't compelled to stand the market on its ear?
1. The larger GPU is obviously going to need a wider memory bus. Nvidia is lagging in memory controller implementation at the present time - hardly surprising, since the GDDR5 controller was basically pioneered by AMD. Witness the relatively slow memory clocks of Fermi.
A 384-bit (or wider) bus is likely a necessity for workstation, and particularly HPC, use; whatever else GK110 is, it will primarily earn back its ROI in the pro market.
2. Likewise, cache.
3. Double-precision optimization?
4. Maybe the sheer size of the die is problematic for yield, heat dissipation, etc. Not an unknown factor with large GPUs in general and Nvidia's large monolithic dies in particular.
Something is telling me they're adding voltage to get the clocks up to compete.
"Speed Boost"? Come on!! They already have 3 clock profiles now. Why add some other kind of voltage control, unless you're worried the damn thing is going to overheat in 3D situations? I can just hear the fan going up and down, up and down :rolleyes:
I hope I'm wrong but ...... we shall see :shadedshu
Hopefully GK104 clocks well, because if it is only a relatively small percentage ahead of a stock 7970, then surely 7970s with high clocks (1.1 GHz+) would come close or, in theory, even beat it.
Whatever happens, it looks like things could get interesting, but in a kind of unexpected way.
As far as the name goes: obviously, after seeing all the dual mid-range GPU cards, Nvidia chose to make the 680 just 660 SLI on a chip, but the yields failed them, so now the 660 is the 680 and GK100 will be the 780 for when AMD brings out the 89xx cards :p
BTW:
HD 2900 series ....May 2007
HD 3870 series.....Nov 2007
So, not exactly unheard of, even if you use the "same year" terminology rather than a calendar year. If we're talking the same architecture, you might want to check on the GF100/GF104 launch timetable. Something to be said for building a brand. Maybe if ATi/AMD had shown more than a passing interest in dev support (GiTG) and pro graphics, we wouldn't be looking at this situation.
Still, no pleasing some people....as your avatar proclaims.
Also, I love all the "But, but AMD does it too" crap. Some of it isn't even remotely the same, yet people use it as an excuse for what NVIDIA is doing. Guess what? This thread is about NVIDIA, not AMD.
There I bit. Ya happy? Do you really wanna troll me?
The 3 GB AMD card is priced on par with (or cheaper than) the 3 GB GTX 580 versions. Likewise, the 6970 was priced reasonably high at launch (although the premium to move to the 580 was not proportional to its superiority). The 7970 has to be priced higher than the previous best-performing single-GPU card - that is just reality.
As for the 680, if it has a lower production cost (than the 580 had), then it is not unreasonable to assume it will sell at a competitive price. Many reports mention it is an efficient chip, unlike Fermi. If that is the case, it does not need an exorbitant price tag. NV marketing knows how to sell (for better or worse, ethically). It is not unreasonable to suggest they release a superior card and use AMD's high pricing to make consumers do a double-take at AMD's prices. A "hey, look at those AMD douchebags ripping you off" scenario.
As for people harping on about how AMD will just release higher-clocked cards to 'hump' the 680, that's an invalid point. IF GK104 is efficient and conservatively clocked, then it may also be an overclocking dream - we don't know yet. My 580 can run at 950 MHz (a 23% overclock). A 7970 at stock is 925 MHz, and a lot of reviewers topped out at 1125 (TPU's review hit 1075). That's a 21% overclock. Okay, so my 580 is a Lightning, but the point is the same: overclocking can be done on both sides.
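For what it's worth, those percentages check out, assuming the 23% figure is measured against the GTX 580's 772 MHz reference clock rather than the Lightning's factory overclock:

```python
# Sanity check of the overclock figures quoted above. The GTX 580
# reference clock of 772 MHz is assumed; 925 MHz for the HD 7970
# comes from the post itself.
def overclock_percent(stock_mhz: float, oc_mhz: float) -> float:
    """Return the overclock as a percentage above stock."""
    return (oc_mhz / stock_mhz - 1.0) * 100.0

print(f"GTX 580: 772 -> 950 MHz  = +{overclock_percent(772, 950):.1f}%")   # +23.1%
print(f"HD 7970: 925 -> 1125 MHz = +{overclock_percent(925, 1125):.1f}%")  # +21.6%
```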
The 680 will also be the contemporary top-tier NV card. It doesn't matter if it is not the uber-performing card of myth. It is NV's top card, and possibly the world's top-performing single-GPU card. If all the reasonable rumours are true, GK110 (112, whatever), the daddy Kepler card, IS the be-all and end-all, and NV is in no rush with it. They've seen Tahiti and thought, "oh, is that it?", and focused on the GK104 launch because they know they can beat it. It's a strong possibility that whatever AMD comes up with, Big GK will win. Reasoning?
GCN is AMD's new design. They'll evolve their compute design for better or worse to compete with GK. NV have CUDA well under control. They can shrink it onto the current fab process and make it a monster.
I really think this round of gfx cards is just little 'offerings'. AMD is saying, "oh, looky at our new compute stuff" and NV is saying, "oh, looky at our new efficient card". I think Q4 2012 will be when the real shit hits the fan and both camps make the tweaks and redesigns that establish their proper power play.
Oh, Charlie at S/A says TSMC has halted ALL 28nm processes for now due to an issue.
semiaccurate.com/2012/03/07/tsmc-suddenly-halts-28nm-production/
Anyway, all of this is just logical personal opinion. I'm just as eager as all to see the real benchmarks from reviews.
I would expect it to act kind of like AMD's PowerTune, but in reverse.
Unless it dynamically overclocks the bottlenecking parts of the GPU, I don't see how it could benefit. I mean, it is clear that it will save power by doing this, but power saving always = more latency and reduced performance. Maybe it detects a safe overclock and applies it during games? The only other option is that the card boosts to a clock that isn't stable long-term... but is stable for short bursts.
The cards already do power saving when not under load, but this detects extremely heavy load and cranks up the speeds to overcome it. For example:
Imagine you are playing an FPS and someone throws a grenade and there is an explosion. This is an instance of high load, where a normal card would experience a framerate drop (or lag spike). But the GK104 detects this high load and momentarily boosts the clock speed to help mitigate the lag.
Using your example, it would be a 750HP engine that has to use a 650HP engine's cooling system due to space constraints, but you can push a button and for a few seconds get 750HP. They already have the "Render 3 Frames in Advance" option, so....
But I think it could be a matter of it only taking a frame or two to boost the speed.
Frame 1: This frame is really hard to render.
Frame 2: Speed boost kicks in.
We know the cards are already measuring load, so it probably isn't hard to detect hard-to-render frames and give a momentary speed boost.
There is probably a time limit too... what if you're playing a game that gives the card an all-round general hard time...
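Putting the speculation from the last few posts into rough Python, purely as an illustration: the clocks, the load threshold, and the time limit below are all made-up numbers, not NVIDIA's actual algorithm.

```python
# Hypothetical sketch of a per-frame boost heuristic with a time limit.
# All constants are invented; this only illustrates the idea discussed above.
BASE_CLOCK_MHZ = 1006     # assumed base clock
BOOST_CLOCK_MHZ = 1058    # assumed short-burst boost clock
LOAD_THRESHOLD = 0.95     # utilization that counts as a "hard to render" frame
MAX_BOOST_SECONDS = 5.0   # guessed cap, so sustained load can't overheat the card

def next_clock(gpu_load, boost_started, now):
    """Pick the clock for the next frame from the load measured this frame.

    Returns (clock_mhz, boost_started); boost_started is the time the
    current boost window opened, or None when running at base clock.
    """
    if gpu_load >= LOAD_THRESHOLD:
        if boost_started is None:
            boost_started = now                    # frame 1: hard frame detected
        if now - boost_started <= MAX_BOOST_SECONDS:
            return BOOST_CLOCK_MHZ, boost_started  # frame 2: boost kicks in
        return BASE_CLOCK_MHZ, boost_started       # all-round hard time: back off
    return BASE_CLOCK_MHZ, None                    # light load: reset to base

# A grenade explosion spikes load for a moment:
clock, started = next_clock(0.99, None, now=10.0)     # -> 1058 MHz, boost opens
clock, started = next_clock(0.99, started, now=10.5)  # -> still 1058 MHz
clock, started = next_clock(0.40, started, now=11.0)  # -> back to 1006 MHz
```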