Thursday, March 8th 2012
GK104 Dynamic Clock Adjustment Detailed
With its GeForce Kepler family, at least the higher-end parts, NVIDIA will introduce what it calls Dynamic Clock Adjustment, which adjusts the GPU's clock speeds below and above the baseline depending on load. The approach is similar to what CPU vendors do (Intel Turbo Boost and AMD Turbo Core). Turning down clock speeds under low load is not new to discrete GPUs; dynamically going above the baseline, however, is.
There is quite some confusion regarding whether NVIDIA will continue to use "hot clocks" with GK104; theories for and against the notion have been reinforced by conflicting reports, but we now know that observers on both sides were looking at it from a binary viewpoint. The new Dynamic Clock Adjustment is similar and complementary to "hot clocks", but differs in that Kepler GPUs come with a large number of power plans (dozens) and operate taking load, temperature, and power consumption into account. The baseline core clock of GK104's implementation will be similar to that of the GeForce GTX 480: 705 MHz, which clocks down to 300 MHz at the lowest load, while the geometric domain (the de facto "core") clocks up to 950 MHz under high load. The CUDA core clock domain (the de facto "CUDA cores") will not stay in sync with the "core"; it will independently clock itself all the way up to 1411 MHz when the load is at 100%.
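To make the behavior described above concrete, here is a minimal sketch of how a multi-domain clock policy like this could work. The thresholds, TDP figure, power-plan branches, and function names are illustrative assumptions, not NVIDIA's actual firmware logic; only the clock figures (300/705/950 MHz for the core domain, 1411 MHz for the CUDA domain) come from the report.

```python
# Hypothetical sketch of dynamic clock selection; not NVIDIA's real algorithm.

CORE_IDLE_MHZ = 300      # lowest-load clock
CORE_BASE_MHZ = 705      # baseline ("GTX 480-like") clock
CORE_BOOST_MHZ = 950     # geometric-domain ceiling under high load
CUDA_MAX_MHZ = 1411      # CUDA-core domain ceiling at 100% load

def select_clocks(load, temp_c, power_w, tdp_w=195):
    """Pick (core_mhz, cuda_mhz) from load [0..1], temperature, and power."""
    # Fall back to the baseline if thermal or power headroom is exhausted.
    if temp_c > 95 or power_w > tdp_w:
        return CORE_BASE_MHZ, 2 * CORE_BASE_MHZ
    if load < 0.1:                      # near idle: clock down
        return CORE_IDLE_MHZ, 2 * CORE_IDLE_MHZ
    if load < 0.8:                      # moderate load: hold the baseline
        return CORE_BASE_MHZ, 2 * CORE_BASE_MHZ
    # Heavy load: scale the core toward 950 MHz; the CUDA domain clocks
    # independently, reaching 1411 MHz at 100% load.
    frac = (load - 0.8) / 0.2
    core = CORE_BASE_MHZ + (CORE_BOOST_MHZ - CORE_BASE_MHZ) * frac
    cuda = 2 * CORE_BASE_MHZ + (CUDA_MAX_MHZ - 2 * CORE_BASE_MHZ) * frac
    return round(core), round(cuda)
```

Note how the CUDA domain tops out at 1411 MHz rather than exactly 2 × 950 MHz, reflecting the report's point that the two domains do not stay in sync at high load.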
Source:
VR-Zone
56 Comments on GK104 Dynamic Clock Adjustment Detailed
So what, 3-4 different SKUs with varying levels of dynamic adjustment? How does that affect OC'ing... Sounds like they took OC'ing away, or you get one or the other. This is just down-clocking to save power that provides a boost when needed, and it saves face, because maybe the GK104 isn't as good as they/we were told.
Man, I hope they got the bugs out and the response is right. This is a big "what if"... if it doesn't work flawlessly, do you want to be the first to drop $450-550 on their software smoke and mirrors?
Hmm... I think the smart ones will stick with (or jump back to) the conventional, established way of doing it, rather than be their guinea pig.
Bend over for their version of a pipe dream! :twitch:
Could you not merge this with the other thread about the same topic? Would be nice to have all the info in the same place :o
It's like if they let the card run high FPS, it can pull too much current? I mean, there's no point in running 300 FPS in Unreal or Quake 4, and in these apps, a slower GPU would still give reasonable framerates when downclocked. So they are saving power by limiting FPS?
I HAZ CONFUUZ!!!
You're focusing for that head shot... (dynamically dumps clocks)... boom, you're dead. :eek:
Here's my speculation:
If they increase clocks too far, it eats too much power and/or is unstable, but its performance is already good at lower power, so to give a turbo *oomph* and increase overall performance, they use this feature in some kind of burst mode.
There's been talk of doing this kind of thing with mobile processors in order to momentarily increase performance when needed.
There's nothing bad about the idea behind it.
As to whether it allows you to overclock, there's no information to go on to make any kind of comment about overclockability. So why trash the thread with "OMG! it's horrible! the worst feature ever! Fail1!!" comments?
I didn't trash the thread; there wasn't one... I never said "it's horrible! the worst feature ever!" But are you subconsciously thinking that?
But maybe I should have clarified and extended the comment to include all (three?) threads discussing the new NV GPU. Too much juvenile idiocy gets posted; that was my exasperation with it all.
Like the information that it's 10% faster than an HD 7970 in BF3: that tells us something, because we know how fast the HD 7970 is in BF3. Again, these cores mean nothing at this point.
This card needs to run the gauntlet of tests asap.
With its yield issues on the GeForce Kepler family, at least the higher-end parts (there's one part, let's be real here; the lower SKUs are just more useless than the higher SKUs), NVIDIA will introduce what it calls Dynamic GPU-Stability-Saving Adjustment, which adjusts the clock speeds of the GPU below and above the baseline, depending on load and temperature. The approach is similar to how CPU vendors do it, i.e. useless in most scenarios (and turned off by those in the know). Turning down clock speeds under low loads is not new to discrete GPUs; however, going above the baseline dynamically...
Look, if I make a 1.1 GHz GPU tomorrow or tonight and then set up its firmware to normally run at 0.9 GHz but boost to 1.1 ("11") during heavy load spells, until it heats up to a certain point and then drops back down to 0.9, what use is that? I've an amp that goes to 10; I don't want or need one that's got 11 on the effin' knob. I just want more noise, and scribing 11 on the dial doesn't make it louder. I'm seeing this as yet another underhanded way of selling off less-than-ideal silicon. Simples.
I suppose NVIDIA isn't going to be releasing full bore (could be just one top SKU) on opening day, so if there are issues they can minimize them and run damage control. I do hope they have upped their CS, or the AIBs have got up to speed on the particulars of this new Dynamic Clock Adjustment operation.
"We have clocked it a bit lower as standard in 3D, but don't worry: on occasion, when we decide, we will use a profile that allows a slight OC until heat or power overcomes its stability." But then that would be the truth, and not very good PR :rolleyes:
wizz's review may prove them right, and/or it may still OC nicely, but I doubt it; they clearly need to get rid of some iffy stock in my eyes. :wtf:
I'll STILL be getting one either way, as hybrid PhysX is worth the effort to me, but marketing BS and fanboyistic backing grates on my nerves. Carry on; my piece has been said, I'll rant no more.
The gpu can dynamically alter clocks to meet a steady fps solution, allowing reduced power usage in real terms.
It is not the same as an idle-state GPU. When 3D rendering initiates on a standard GPU, it kicks in at the steady clock rate (i.e. 772 MHz for the GTX 580 or 925 MHz for the 7970) and doesn't budge from that during the 3D session.
Or using media on the PC, the GF110 clocks at about 405MHz.
This to me sounds like a good thing. A variable clocks domain that allows steady fps (perhaps software/hardware detects optimum rates and adjusts clocks to meet).
I would happily buy a card that dumps a steady 60 fps on my screen (or 120 for 3D purposes). I don't need my GTX 580 at 832 MHz giving me 300 fps; I'd rather have a steady 60 fps and core clocks of 405 MHz (or whatever), reducing power usage.
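The steady-fps idea in the comment above can be sketched as a simple feedback governor: lower or raise the clock until the measured framerate just meets the target, instead of rendering hundreds of surplus frames. The function name, step size, and tolerance are assumptions for illustration, not any actual driver mechanism; the 405 and 950 MHz bounds echo the clock figures mentioned in the thread.

```python
# Illustrative sketch of a steady-fps clock governor; not a real driver feature.

def steady_fps_clock(current_mhz, measured_fps, target_fps=60,
                     min_mhz=405, max_mhz=950, step_mhz=13):
    """Nudge the clock toward the lowest speed that still meets target_fps."""
    if measured_fps < target_fps:          # falling short: clock up
        return min(current_mhz + step_mhz, max_mhz)
    if measured_fps > target_fps * 1.1:    # ample headroom: clock down, save power
        return max(current_mhz - step_mhz, min_mhz)
    return current_mhz                     # within tolerance: hold
```

Called once per sampling interval, this converges on a clock where the card delivers roughly the target framerate, which is the power-saving behavior the commenter is hoping for.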
Let's wait for Monday 12th to see (rumoured NDA lift).
In theory it's been done, but I can't think of a GPU example where such an implementation has "up-clocked" so quickly or dramatically; the implications when you're in the split-second heat of battle could be grueling.
Will NVIDIA ship such profiles as firmware (hard-wired) or as more of a driver/software program? We wait for release day.
Now my only concern with this feature is what kind of micro-stutter will occur during SLI? I mean, how fast is this to respond when you have two or three cards in SLI? Anyway, I am sure it's a crazy-fast card and most of our concerns will be washed away once the NDA lifts.
This turbo thing could be ok for guys who run stock GPU speeds and are too scared to overclock but most enthusiasts would prefer the GPU at max clocks when they're gaming.
I guess we'll see how it turns out....
Rumored 10%