Thursday, May 17th 2012
NVIDIA Readies GK104-based GeForce GTX 680M for Computex
NVIDIA is readying a high-performance mobile GPU for a Computex 2012 unveiling. Called the GeForce GTX 680M, the chip is based on its trusty 28 nm GK104 silicon, but with about half of its streaming multiprocessors disabled, resulting in a CUDA core count of around 768. Reference MXM boards of the chip could ship with memory options as high as 4 GB across a 256-bit wide memory interface. With the right craftsmanship on NVIDIA's part, the GTX 680M could end up with a power draw of around 100 W. A Chinese source had the opportunity to photograph the reference board qualification sample and put it through 3DMark 11, in which it was found to be roughly 37% faster than the GF114-based GeForce GTX 670M, scoring 4905 points in the Performance preset. The test bed was driven by an Intel Core i7-3720QM quad-core mobile processor.
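As a quick sanity check on the leaked numbers (a back-of-the-envelope sketch; the P4905 score and the 37% delta are taken straight from the report and are not independently verified), the implied GTX 670M baseline on the same test bed works out to roughly 3,580 points:

```python
# Figures from the leak, treated as assumptions rather than confirmed specs.
gtx680m_score = 4905    # reported 3DMark 11 Performance GPU score
claimed_speedup = 1.37  # reported ~37% lead over the GTX 670M

# Implied GTX 670M score on the same Core i7-3720QM test bed.
implied_gtx670m = gtx680m_score / claimed_speedup
print(round(implied_gtx670m))  # ~3580
```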
Source:
VideoCardz
54 Comments on NVIDIA Readies GK104-based GeForce GTX 680M for Computex
7970M gonna eat this ALIVE !!!
Disappointing if accurate.
LOL at 4GB for a crap notebook card. About as useful as 16GB or 32GB in a notebook.
Just a waste of the buyer's money to get a 'checkbox' on the box. Marketing BS. Put more money into a better screen you twits! Why the F are we all still waiting for IPS displays on our PC notebooks? Outrageous!
The 7970M is based on Pitcairn, with 1280 SPs clocked at 850 MHz. The desktop HD 7870 scores a little over 6500 points at its 1 GHz default clock. Normalizing that to 850 MHz gives 5525, and it would probably be even less when paired with a mobile CPU, which is weaker than a desktop setup. So 10% faster, yes. Eating it alive, nope.
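For anyone who wants to reproduce that estimate, here is a minimal sketch of the linear clock-scaling used above (an assumption: it treats the 3DMark 11 GPU score as scaling linearly with core clock and ignores memory bandwidth and CPU limits):

```python
def scale_score(score, from_clock_mhz, to_clock_mhz):
    """Estimate a 3DMark 11 GPU score at a different core clock,
    assuming the score scales roughly linearly with clock speed."""
    return score * to_clock_mhz / from_clock_mhz

# Desktop HD 7870: ~6500 points at its 1000 MHz default clock,
# normalized down to the 7970M's 850 MHz core clock.
print(scale_score(6500, 1000, 850))  # 5525.0
```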
BUT in general, yeah, it does not look like the best thing they could make, and disabling half the chip does not sound like a reasonable thing to do.
We have to take this with a grain of salt really, since the source mentions 768 SPs, but their first choice is 744 SPs (they say literally: "GeForce GTX 680M has only 744 CUDA cores (but some listings suggest it has 768 cores – this still needs to be verified)"), which is the card that has supposedly been tested, and 744 shaders is completely impossible with this chip: GK104 groups its CUDA cores into SMX units of 192, so only multiples of 192 are possible (quick check after this post). So prepare that truckload of salt. I agree.
PS: Many recent events are seriously making me doubt the mere existence of GK106, TBH. GK104 was certainly supposed to be the mid-range chip. Considering that GK100/110 is a 7-billion-transistor monstrosity and basically twice as big, GK104 was not even a performance part like GF114, which was 3/4 of a GF110; it was a natural mid-range (half of GK110) part in NVIDIA's original lineup. And now all of this has me thinking that maybe there was no such thing as GK106, considering this and the fact that the GTX 660 will also be based on GK104 and the GTX 650 is apparently based on GK107.
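The 744-core figure can be ruled out with a trivial check (a sketch assuming only Kepler's publicly documented SMX granularity of 192 CUDA cores per unit):

```python
# GK104 has 8 SMX units of 192 CUDA cores each (1536 total), so any
# cut-down GK104 part must expose a multiple of 192 cores.
possible_counts = [192 * n for n in range(1, 9)]
print(possible_counts)         # [192, 384, 576, 768, 960, 1152, 1344, 1536]
print(744 in possible_counts)  # False -> 744 is impossible on GK104
print(768 in possible_counts)  # True  -> 768 (4 SMX) is plausible
```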
OMG, 100 watts, I don't believe it... :banghead: :respect: :roll:
:laugh: :laugh: :laugh:
PS: don't forget 3DMark 11 runs at low res. I prefer Kepler over AMD stuff. ;)
Same goes for the 680M results. Kepler was designed around GPU Boost; it was designed for it, it depends on it. Without proper driver support for a part that does not yet officially exist, performance will be a lot lower. Plus the 744 SP thing. Salt, please.
but... where did you get that 5500 score from ??? an official review ??? HA-HA-HA :laugh:
and yeah, WE WILL SEE ;)
3dmark.com/3dm11/3443482
You are making claims based on a source that says the card has 744 SPs. How dumb can that be (to say it has 744 SPs)? And how can you expect any semblance of fact based on that?
The internet is full of fakes, so one post by a random poster is not something I will base anything on. I don't know about 3DMark 11 and other recent benchmarks anymore, because honestly I don't care about them, but you could easily make your results look much better by using the performance profile instead of the quality one that comes by default, or by enabling every single optimization available in the control panel, when reviews are almost invariably done with default (quality) settings. And that's just one of dozens of examples of how results can vary and how to trick/fake results. Personally, I always immediately enable the highest possible settings, and as such I usually get 10-20% lower fps than reviews.
So without really knowing anything, and with so many unknown variables, a claim such as yours is dumb, really. And you insist on defending that claim, which you cannot prove, and you will continue doing it in a reply, I'm sure. And that's even dumber, but that's the internet, I guess. :laugh: And what about that? That I find it very, very unlikely and possibly fake, considering that not even the desktop HD 7870 is that fast.
yeah that's possibly fake :laugh: :respect:
:wtf:
I don't want to go too far off-topic in this thread, since this thread is about the GTX 680M.
I hope the GTX 680M benchmark and power draw figures already shown in this thread are fake, but if they turn out to be real, then the 7970M will win for sure :)
I didn't say a word about those damn SPs.
All I care about is the 3DMark score, and that looks 'real' to me.
Who cares about SPs, you dumbASS :laugh:
PS: and you normalized an underclocked 7870 down to 7850 level (with such a precise score of 5525 :wtf: )
Oh yeah, that's good for you :laugh:
3dmark.com/3dm11/3335189
P5746 - Oh yeah I was so far off with my normalization...
Maybe they had a card but didn't know what spec it was, who knows???
But that screenshot looks very real to me, more real than the BS specs.
And your way of normalizing is to multiply the 7870's 6500 points by 85 percent??!!??
Oh boy, can't stop laughing :roll: :roll: :roll:
forum.notebookreview.com/alienware-m17x/561350-official-alienware-m17x-benchmark-thread-part-4-a-296.html
(it's overvolted too).
Anyway, regarding this GTX 680M leak: these screenshots and info were actually uncovered before the 7970M specs were leaked and before engineering/qualification samples of it became available. Knowing NVIDIA's history with these things, it's likely that NVIDIA actually reworked the GTX 680M from that point so that the revamped version can at least match the 7970M performance-wise.
The HD 7850 does around 5500 too, and it's clocked close to that, at 860 MHz.
So laugh, laugh all you want, you only look like an idiot.
PS: People who have known me for a long time know my track record of nailing the performance of upcoming cards based on specs, and I don't use methods much more complicated than that. The only difference is that I adjust based on where I think or know there will be a bottleneck, and a few other things. And if you think it's not as simple as that, maybe you should think about taking some computer science classes.
EDIT: And/or read below. Yeah, I didn't notice that when I replied; after that I moved on to other things. 1035 MHz... hmm.
P5746 @ 850 MHz (stock)
P6951 @ 1035 MHz
Let's do a stupid thing, let's normalize the 6951 result to stock clocks. You know, that soooo stupid and laughable thing...
6951 * 850/1035 = 5708. What? Wait... but no, because that would be so stupid. To think that...
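Running the same linear clock-scaling sketch from earlier on these two results (same assumption: the score scales roughly linearly with core clock):

```python
def scale_score(score, from_clock_mhz, to_clock_mhz):
    # Same linear clock-scaling assumption as in the earlier sketch.
    return score * to_clock_mhz / from_clock_mhz

# Overclocked run at 1035 MHz, normalized back to the 850 MHz stock clock.
print(int(scale_score(6951, 1035, 850)))  # 5708, in line with the stock P5746 run
```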
With Fermi, they got a fully enabled GF114 down to this power envelope, and GF114 and GK104 have fairly similar power usage in their desktop versions.
I was expecting a 1536-core part at 600 MHz.