Thursday, May 17th 2012

NVIDIA Readies GK104-based GeForce GTX 680M for Computex

NVIDIA is readying a high-performance mobile GPU for a Computex 2012 unveiling. Called the GeForce GTX 680M, the chip is based on its trusty 28 nm GK104 silicon, but with about half its streaming multiprocessors disabled, resulting in a CUDA core count of around 768. Reference MXM boards of the chip could ship with memory options as high as 4 GB, across a 256-bit wide memory interface. With the right craftsmanship on NVIDIA's part, the GTX 680M could end up with a power draw of around 100 W. A Chinese source had the opportunity to photograph the reference board qualification sample and put it through 3DMark 11, where it was found to be roughly 37% faster than the GF114-based GeForce GTX 670M, scoring 4,905 points in the Performance preset. The test bed was driven by an Intel Core i7-3720QM quad-core mobile processor.
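The core-count figure is easy to sanity-check against GK104's known layout. A minimal sketch, assuming the full desktop configuration of 8 SMX units with 192 CUDA cores each (general GK104 facts, not figures from the leak itself):

# Rough check of the rumored GTX 680M core count (Python).
# Assumes desktop GK104's full configuration: 8 SMX x 192 CUDA cores.
FULL_SMX = 8
CORES_PER_SMX = 192

full_cores = FULL_SMX * CORES_PER_SMX            # 1536 (desktop GTX 680)
mobile_cores = (FULL_SMX // 2) * CORES_PER_SMX   # 768 with half the SMX disabled
print(full_cores, mobile_cores)                  # 1536 768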
Source: VideoCardz

54 Comments on NVIDIA Readies GK104-based GeForce GTX 680M for Computex

#1
HalfAHertz
Omg they butchered it! :o Why not just make a new sku instead?
Posted on Reply
#2
sc
Up to 4 GB for the mobile version, and they gave us the 690 with 2 GB per core?
Posted on Reply
#3
SnapS4
The 3DMark 11 score is low; wait for more tests.
Posted on Reply
#4
KooKKiK
poor 3DMark score


7970M gonna eat this ALIVE !!!
Posted on Reply
#6
Over_Lord
News Editor
HD7970M indeed will eat this up for breakfast.
Posted on Reply
#7
Jojo Kracko
vs 670M?!? They couldn't even pit it against the 675M (rebranded 580M)? Is it even as fast as the old 580M?

Disappointing if accurate.


LOL at 4GB for a crap notebook card. About as useful as 16GB or 32GB in a notebook.

Just a waste of the buyer's money to get a 'checkbox' on the box. Marketing BS. Put more money into a better screen you twits! Why the F are we all still waiting for IPS displays on our PC notebooks? Outrageous!
Posted on Reply
#8
THE_EGG
Jojo Kracko: vs 670M?!? They couldn't even pit it against the 675M (rebranded 580M)? Is it even as fast as the old 580M?

Disappointing if accurate.


LOL at 4GB for a crap notebook card. About as useful as 16GB or 32GB in a notebook.

Just a waste of the buyer's money to get a 'checkbox' on the box. Marketing BS. Put more money into a better screen you twits! Why the F are we all still waiting for IPS displays on our PC notebooks? Outrageous!
:confused: I have an IPS display on my Lenovo X201T, and it isn't even a gaming laptop lol.
Posted on Reply
#9
Benetanegia
KooKKiK: poor 3DMark score


7970M gonna eat this ALIVE !!!
thunderising: HD7970M indeed will eat this up for breakfast.
Let me seriously doubt it. Will it end up faster? Possibly, but not that much faster.

The 7970M is based on Pitcairn, with 1280 SPs clocked at 850 MHz. The desktop HD7870 scores a little over 6500 points at its 1 GHz default clock. Normalizing to 850 MHz, that's about 5525, and it would probably be even less when paired with a mobile CPU, which is weaker than a desktop setup. So 10% faster, yes. Eating it alive, nope.

BUT in general, yeah, it does not look like the best thing they could make, and disabling half the chip does not sound like a reasonable thing to do.

We have to take this with a grain of salt really, since the source mentions 768 SPs, but their first choice is 744 SPs (they say literally "GeForce GTX 680M has only 744 CUDA cores (but some listings suggest it has 768 cores – this still needs to be verified)"), which is the card that has supposedly been tested, and 744 shaders is completely impossible with this chip. So prepare that truckload of salt.
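For anyone who wants to check the math, here is the estimate sketched out roughly in Python. It assumes the 3DMark score scales linearly with GPU clock, which is a simplification, and uses only the figures quoted above:

# Clock normalization of the desktop HD 7870 score, assuming linear scaling.
hd7870_score = 6500            # desktop HD 7870 at its 1 GHz default clock
mobile_clock_mhz = 850         # HD 7970M's quoted core clock
estimate = hd7870_score * mobile_clock_mhz / 1000
print(round(estimate))         # 5525

# Kepler sanity check: GK104 cores come in whole SMX blocks of 192.
print(744 % 192 == 0)          # False -> a 744-core GK104 part is impossible
print(768 % 192 == 0)          # True  -> 768 cores (4 SMX) is at least plausible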
HalfAHertz: Omg they butchered it! :o Why not just make a new sku instead?
I agree.

PS: Many recent events are seriously making me doubt the mere existence of GK106, TBH. GK104 was certainly supposed to be the mid-range chip: considering GK100/110 is a 7-billion-transistor monstrosity and basically twice as big, GK104 was not even a performance part like GF114 (which was 3/4 of a GF110); it was a natural mid-range part (half of GK110) in Nvidia's original lineup. And now all of this has me thinking that maybe there was no such thing as GK106, considering this and the fact that the GTX 660 will also be based on GK104 and the GTX 650 is apparently based on GK107.
Posted on Reply
#10
SIGSEGV
Completely Bonkers: 100W. Not in my laptop, thank you!
Agreed :laugh:

OMG, 100 watts, I don't believe it.. :banghead: :respect: :roll:
Posted on Reply
#11
KooKKiK
Benetanegia: Let me seriously doubt it. Will it end up faster? Possibly, but not that much faster.

The 7970M is based on Pitcairn, with 1280 SPs clocked at 850 MHz. The desktop HD7870 scores a little over 6500 points at its 1 GHz default clock. Normalizing to 850 MHz, that's about 5525, and it would probably be even less when paired with a mobile CPU, which is weaker than a desktop setup. So 10% faster, yes. Eating it alive, nope.

BUT in general, yeah, it does not look like the best thing they could make, and disabling half the chip does not sound like a reasonable thing to do.

We have to take this with a grain of salt really, since the source mentions 768 SPs, but their first choice is 744 SPs (they say literally "GeForce GTX 680M has only 744 CUDA cores (but some listings suggest it has 768 cores – this still needs to be verified)"), which is the card that has supposedly been tested, and 744 shaders is completely impossible with this chip. So prepare that truckload of salt.
forum.notebookreview.com/gaming-software-graphics-cards/658185-amd-7970m-vs-gtx-680m-2.html#post8460234

:laugh: :laugh: :laugh:

PS: don't forget that 3DMark 11 at low res favors Kepler over AMD stuff. ;)
Posted on Reply
#12
Benetanegia
KooKKiK: forum.notebookreview.com/gaming-software-graphics-cards/658185-amd-7970m-vs-gtx-680m-2.html#post8460234

i410.photobucket.com/albums/pp186/powwow71/Capture.jpg

:laugh: :laugh: :laugh:
Yeah, let's keep the laughs for when we see official reviews, shall we? The HD7850 does 5500 points according to official reviews. If you really want to make me believe the HD7970M with slightly lower clocks does 600+ more points, and only 400 less than the desktop HD7870, you'll need to show me much more than a user-posted result.

Same goes for the 680M results. Kepler was designed around GPU Boost; it depends on it. Without proper driver support for a part that does not yet officially exist, performance will be a lot lower. Plus the 744 SP thing. Salt, please.
Posted on Reply
#13
KooKKiK
Not believing a user review, that's fine with me.

but...
Benetanegia: Let me seriously doubt it. Will it end up faster? Possibly, but not that much faster.

The 7970M is based on Pitcairn, with 1280 SPs clocked at 850 MHz. The desktop HD7870 scores a little over 6500 points at its 1 GHz default clock. Normalizing to 850 MHz, that's about 5525, and it would probably be even less when paired with a mobile CPU, which is weaker than a desktop setup. So 10% faster, yes. Eating it alive, nope.
Where did you get that 5500 score from??? An official review??? HA-HA-HA :laugh:


and yeah, WE WILL SEE ;)
Posted on Reply
#14
bencrutz
Benetanegia: Yeah, let's keep the laughs for when we see official reviews, shall we? The HD7850 does 5500 points according to official reviews. If you really want to make me believe the HD7970M with slightly lower clocks does 600+ more points, and only 400 less than the desktop HD7870, you'll need to show me much more than a user-posted result.

Same goes for the 680M results. Kepler was designed around GPU Boost; it depends on it. Without proper driver support for a part that does not yet officially exist, performance will be a lot lower. Plus the 744 SP thing. Salt, please.
how about P6951?

3dmark.com/3dm11/3443482
Posted on Reply
#15
Benetanegia
KooKKiK: Not believing a user review, that's fine with me.

but...

Where did you get that 5500 score from??? An official review??? HA-HA-HA :laugh:

and yeah, WE WILL SEE ;)
I don't know what you do not understand about normalizing the results... and I never made any claim, you did. Ask yourself who's making the statement about what's going to happen and who's asking for caution and proof. ;)

You are making claims based on a source that says the card has 744 SPs. How dumb can that be (to say it has 744 SPs)? And how can you expect any semblance of fact based on that?

The internet is full of fakes, so one post by a random poster is not something I will base anything on. I don't know about 3DMark 11 and other recent benchmarks anymore, because honestly I don't care about them, but you could easily make your results much better by using the performance profile instead of the quality one that comes by default, or by enabling every single optimization available in the control panel, when reviews are almost invariably done with default (quality) settings. And that's just one of the dozens of examples of how results can vary and how to trick/fake results. Personally, I always immediately enable the highest possible settings, and as such I usually get 10-20% lower fps than reviews.

So without really knowing anything, and with so many unknown variables, a claim such as yours is dumb. And you insist on defending that claim, which you cannot prove, and you will continue doing it in a reply, I'm sure. That's even dumber, but that's the internet, I guess. :laugh:
bencrutz: how about P6951?

3dmark.com/3dm11/3443482
And what about that? I find it very, very unlikely and possibly fake, considering that not even the desktop HD7870 is that fast.
Posted on Reply
#16
SIGSEGV
Benetanegia: And what about that? I find it very, very unlikely and possibly fake, considering that not even the desktop HD7870 is that fast.
LMAO
yeah that's possibly fake :laugh: :respect:

:wtf:
Posted on Reply
#17
Benetanegia
SIGSEGV: LMAO
yeah that's possibly fake :laugh: :respect:

:wtf:
Aaah now I understand. You are a troll and I'm the target. Cool. How flattering. ;)
Posted on Reply
#18
SIGSEGV
Benetanegia: Aaah now I understand. You are a troll and I'm the target. Cool. How flattering. ;)
meh.. :laugh:
I don't want to go too far off-topic in this thread, since this thread is about the GTX 680M.

I hope the GTX 680M benchmarks and power draw already shown in this thread are fake, but if they turn out to be real then the 7970M will win for sure :)
Posted on Reply
#19
KooKKiK
Benetanegia: I don't know what you do not understand about normalizing the results... and I never made any claim, you did. Ask yourself who's making the statement about what's going to happen and who's asking for caution and proof. ;)

You are making claims based on a source that says the card has 744 SPs. How dumb can that be (to say it has 744 SPs)? And how can you expect any semblance of fact based on that?
WTF!!! :wtf:

I didn't say a word about those damn SPs.

All I care about is the 3DMark score, and that looks 'real' to me.


Who cares about SPs, you dumbASS :laugh:



PS: and you normalized an underclocked 7870 to 7850 level (with such a precise score of 5525 :wtf:)

oh yeah, that's good for you :laugh:
Posted on Reply
#20
DarkOCean
Cutting the SP count in half seems a little drastic; where's GK106?
Posted on Reply
#21
Benetanegia
KooKKiK: Who cares about SPs, you dumbASS :laugh:
Excuse me? If they can't get the number of SPs right, how can you take anything else seriously? lol, ignorance is bliss? How does it look from there? Kepler is made of SMXs, which have 192 shaders each. You try dividing 744 by 192. :shadedshu
PS: and you normalized an underclocked 7870 to 7850 level (with such a precise score of 5525 :wtf:)
Wtf, underclock? The HD7970M specs say it's a Pitcairn clocked at 850 MHz. So I normalized the 1000 MHz scores to 850 MHz and 5525 is the result. Of course, if you are unable to do the most simple math...

3dmark.com/3dm11/3335189

P5746 - Oh yeah I was so far off with my normalization...
Posted on Reply
#22
KooKKiK
Benetanegia: Excuse me? If they can't get the number of SPs right, how can you take anything else seriously? lol, ignorance is bliss? How does it look from there? Kepler is made of SMXs, which have 192 shaders each. You try dividing 744 by 192. :shadedshu

Wtf, underclock? The HD7970M specs say it's a Pitcairn clocked at 850 MHz. So I normalized the 1000 MHz scores to 850 MHz and 5525 is the result. Of course, if you are unable to do the most simple math...

3dmark.com/3dm11/3335189

P5746 - Oh yeah I was so far off with my normalization...
They had a photograph of the 3DMark screen.

Maybe they had a card but didn't know what spec it is, who knows???

But that screen looks very real to me, more real than a BS spec.



And your way to normalize is to multiply the 7870's 6500 points by 85 percent??!!??

oh boy, can't stop laughing :roll: :roll: :roll:
Posted on Reply
#23
ichime
Benetanegia: And what about that? I find it very, very unlikely and possibly fake, considering that not even the desktop HD7870 is that fast.
Wrong. It's an overclocked 7970M owned by this guy:

forum.notebookreview.com/alienware-m17x/561350-official-alienware-m17x-benchmark-thread-part-4-a-296.html

(it's overvolted too).

Anyway, regarding this GTX 680M leak: these screenshots and info were actually uncovered before the 7970M specs were leaked and engineering/qualification samples of it became available. Knowing nVidia's history with these things, it's likely that nVidia actually redid the GTX 680M from that point so that this revamped version can at least match the 7970M performance-wise.
Posted on Reply
#24
Benetanegia
KooKKiK: And your way to normalize is to multiply the 7870's 6500 points by 85 percent??!!??

oh boy, can't stop laughing :roll: :roll: :roll:
You can laugh all you want. A stock HD7970M does ~5700, as shown by the link I provided. So I was not far off.

The HD7850 does around 5500 too, and it's clocked close, at 860 MHz.

So laugh, laugh all you want, you only look like an idiot.

PS: People who have known me for a long time know my track record of nailing the performance of upcoming cards based on specs, and I don't use methods much more complicated than that. The only difference is I adjust based on where I think or know there will be a bottleneck, and some other things. And if you think it's not as simple as that, maybe you should think about taking some computer science classes.

EDIT: And/or read below.
ichime: Wrong. It's an overclocked 7970M owned by this guy:
Yeah, I didn't notice that when I replied, and after that I moved on to other things. 1035 MHz... hmm

P5746 @ 850 MHz (stock)
P6951 @ 1035 MHz

Let's do a stupid thing, let's normalize the 6951 result to stock clocks. You know, that soooo stupid and laughable thing...

6951 * 850/1035 = 5708. What? Wait... but no, because that would be so stupid. To think that...
Posted on Reply
#25
blibba
I find it unlikely that they'd use half a GK104 clocked so as to use 100W. That's no more efficient than the desktop part.

With Fermi, they got fully enabled GF114 down to this power envelope, and GF114 and GK104 have fairly similar power usage in their desktop versions.

I was expecting a 1536-core part at 600 MHz.
Posted on Reply