# NVIDIA GeForce Kepler Packs Radically Different Number Crunching Machinery



## btarunr (Feb 10, 2012)

NVIDIA is set to kick-start its competitive answer to AMD's Southern Islands Radeon HD 7000 series with GeForce Kepler 104 (GK104). We are learning through reliable sources that NVIDIA will implement a radically different design (by NVIDIA's standards, anyway) for its CUDA core machinery, while retaining a basic component hierarchy similar to Fermi's. The new design should ensure greater parallelism. The latest version of GK104's specifications looks like this: 

*SIMD Hierarchy*
4 Graphics Processing Clusters (GPC)
4 Streaming Multiprocessors (SM) per GPC = 16 SM
96 Stream Processors (SP) per SM = 1536 CUDA cores


*TMU / Geometry Domain* 
8 Texture Units (TMU) per SM = 128 TMUs
32 Raster Operation Units (ROPs)
*Memory* 
256-bit wide GDDR5 memory interface
2048 MB (2 GB) memory amount standard
*Clocks/Other* 
950 MHz core/CUDA core (no hot-clocks)
1250 MHz actual (5.00 GHz effective) memory, 160 GB/s memory bandwidth
2.9 TFLOP/s single-precision floating-point compute power
486 GFLOP/s double-precision floating-point compute power
Estimated die area: 340 mm²
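A quick way to see that the listed numbers are internally consistent is a back-of-the-envelope check. This is a sketch, not an official breakdown: the 2-FLOPs-per-core FMA convention and the implied 1/6 DP rate are inferences from the numbers, not stated in the leak.

```python
# Sanity-checking the rumored GK104 numbers (my arithmetic, not an
# official breakdown): an FMA counts as 2 FLOPs per CUDA core per
# clock, and effective GDDR5 rate is 4x the actual memory clock.
cuda_cores = 4 * 4 * 96            # 4 GPCs x 4 SMs x 96 SPs = 1536
core_clock_ghz = 0.95              # no hot-clocks: shaders at core clock

sp_gflops = cuda_cores * 2 * core_clock_ghz   # ~2918 -> the "2.9 TFLOP/s"
dp_gflops = sp_gflops / 6                     # ~486 -> implies a 1/6 DP rate

bus_bytes = 256 // 8                          # 256-bit bus in bytes
bandwidth_gbs = bus_bytes * 5.0               # 5.00 GHz effective -> 160 GB/s

print(f"{sp_gflops:.0f} GFLOPS SP, {dp_gflops:.0f} GFLOPS DP, {bandwidth_gbs:.0f} GB/s")
```

Every headline figure in the table falls out of the unit counts and clocks, which at least means the rumor is self-consistent.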

*View at TechPowerUp Main Site*


----------



## phanbuey (Feb 10, 2012)

wow... that is definitely different...


----------



## LiveOrDie (Feb 10, 2012)

I bet your mommy always told you to eat your greens


----------



## ViperXTR (Feb 10, 2012)

its looking like an AMD specification now hehe (wait 32 ROPs? D: )


----------



## puma99dk| (Feb 10, 2012)

I just hope they're serious about that 2048 MB of memory; if not, it will be a shame.


----------



## EpicShweetness (Feb 10, 2012)

These specs are definitely strange for an NVIDIA chip. 1536 CUDA cores is triple that of the GTX 580, yet the process shrink is only about 30%, and GK104 is smaller than GF110. This indicates one of a few things: a "nerf" of the CUDA core itself, or an architecture that is much more "cluster-based". Very interesting; I'll be following this closely.


----------



## LAN_deRf_HA (Feb 10, 2012)

It's a lot more shaders but they're running much slower too. Seems it'd even out on the heat front.


----------



## ViperXTR (Feb 10, 2012)

Just like what the HD 2000 and the present 7000 cards are doing: more shaders but lower clocks (or rather, shader clocks tied to the TMU/ROP clocks).


----------



## radrok (Feb 10, 2012)

My massive loop is waiting for the heat


----------



## hardcore_gamer (Feb 10, 2012)

Die size is very close to that of the 7970 (365 mm²). Interesting.


----------



## radarblade (Feb 10, 2012)

Seems like NVIDIA's pretty prepped to wipe AMD off the slate! But what would be the TDP on these things? Hopefully lower than the earlier 480 and 580 heaters.


----------



## TheoneandonlyMrK (Feb 10, 2012)

Interested in how this is going to be 50% faster than a 7970; they seem similar in shader layout.


----------



## NC37 (Feb 10, 2012)

The end of NV's monolithic GPU era is at hand... was about to say: 'bout freaking time! ATI was slower at first when they switched, but I knew eventually NV would have to change too.

Very interested to see how well NV does at ATI's own game.


----------



## gaximodo (Feb 10, 2012)

this isn't supposed to be NV's flagship anywayz.


----------



## Xaser04 (Feb 10, 2012)

gaximodo said:


> this isn't supposed to be NV's flagship anywayz.



GK104 so GTX560Ti replacement (ish). 

Considering this is 1536 shaders it would be logical to assume that the full fat model would have 2048 shaders, after all the GTX560TI was - in simplistic terms - roughly 75% of a GTX580. 

The shader count itself is very interesting. 

The increase in shaders (384-1536 if we assume a GTX560TI replacement) would suggest that each Kepler shader is less complex than its Fermi contemporary. 

If we also assume similar performance to the HD 7950 (doesn't seem too unrealistic), then clock for clock GCN and Kepler could be quite evenly matched (the HD 7950 has more shaders but a lower core clock).

Should be very interesting.


----------



## Crap Daddy (Feb 10, 2012)

theoneandonlymrk said:


> Interested in how this is going to be 50% faster than a 7970; they seem similar in shader layout.



This is not going to be 50% faster than the 7970. Judging by the specs it should fall between the 7950 and 7970, at a rumored $300.
GK110 will probably be the Tahiti killer. At a price...


----------



## Red_Machine (Feb 10, 2012)

At this rate, I will feel compelled to replace my 580.  GK110 will likely be 70-80% faster...


----------



## pantherx12 (Feb 10, 2012)

Red_Machine said:


> At this rate, I will feel compelled to replace my 580.  GK110 will likely be 70-80% faster...



I reckon it will be half that, at best.


----------



## Benetanegia (Feb 10, 2012)

I assume these specs have been judged legit, since Btarunr did post them, unlike most others.

Ah crap, they are too different; impossible to guesstimate the performance based on them (I don't know how other people are so sure). I'll try to make my analysis anyway.

At a first glance it looks like they doubled GF104's shader domain (128 TMU, 4 GPCs, etc.) and then doubled the shader amount per SM because abandoning hot clocks allows for that. Performance wise the end result should be similar.

Based on die size this chip must contain twice the number of transistors of GF104, while retaining the 256-bit bus, so there's no compelling reason to assume the shaders are any less capable than they were in Fermi. They could have just as easily gone with 768 SPs and hot-clocks within the same die size.

And finally, efficiency. That's the key to knowing the performance. We don't know how well they will be able to use all those SPs. I'd assume they are using 6x16-SP-wide superscalar shader multiprocessors, but with how many schedulers? GF104 had 2. So now they have 4? Or since shaders run at half the speed, are the schedulers just issuing the same number of ops per cycle? (in reality, cycles per op)

So many questions, but I had fun. Based on raw specs this chip has the potential to demolish any other card on the market, think 2x GTX 560 Ti, at least at 1080/1200p. But efficiency/scaling is the key factor, and that's completely unknown to us.

EDIT: As you can see, I changed my mind completely as I was writing this post. I first thought they were very different and came to realize that they are pretty much the same. If you think about Fermi-based GF104/114 as a 768 SP chip with no hot-clocks, they just doubled the number of GPCs.


----------



## Filiprino (Feb 10, 2012)

It seems NVIDIA has come up with something very similar to AMD's GCN. But after all, it's NVIDIA and the successor to Fermi, so we'll have to wait and see performance numbers.


----------



## General Lee (Feb 10, 2012)

I wouldn't take them without a big grain of salt, but it's always fun to do some what-iffing.

The specs look similar to what AMD has now, so given the estimated die size and unit counts, I'd say it would reach 580/7950-level performance. I doubt they'll price it at $300 if the 7950 is at $470. More likely it's at best $50 cheaper; that's enough to get the ball rolling. It's not really difficult to undercut the 7900 series in price, so regardless of performance it shouldn't be hard for NVIDIA to claim a perf/$ crown, simply because the 7900 is sold at a premium currently. Of course AMD should respond to that, and I think this is the scenario we all hope for.


----------



## xenocide (Feb 10, 2012)

General Lee said:


> I wouldn't take them without a big grain of salt, but it's always fun to do some what-iffing.
> 
> The specs look similar to what AMD has now, so given the estimated die size and unit counts, I'd say it would reach 580/7950-level performance. I doubt they'll price it at $300 if the 7950 is at $470. More likely it's at best $50 cheaper; that's enough to get the ball rolling. It's not really difficult to undercut the 7900 series in price, so regardless of performance it shouldn't be hard for NVIDIA to claim a perf/$ crown, simply because the 7900 is sold at a premium currently. Of course AMD should respond to that, and I think this is the scenario we all hope for.



A lot of people are holding out for NVIDIA just to see prices level out. If they sell a card on par with the 7950 for $100 cheaper, they'll make up the difference in volume. I guarantee they would sell twice as many cards as if they priced it around $450.


----------



## jamsbong (Feb 10, 2012)

Confirmed Nvidia is doing an ATI!
The specs look so identical that I could rename them as, say....

HD7870:
256bit GDDR5 2GB memory
1536 SPs, 128 TMUs, 32 ROPs, small 340 mm² die size, no hot clocks.

It looks totally believable! Has NVIDIA been hiring lots of ATI engineers? Or did they reverse-engineer ATI's Cayman?

Jokes aside, some rational observations:
The specs themselves look like a mid-high-end card; it will be very price-competitive as it uses 256-bit memory and a small die. I won't be surprised if it is only 10-20% faster than Cayman. It will be on par with the GTX 580 at best.
I believe NVIDIA is working on a high-end card which has yet to show itself.


----------



## Crap Daddy (Feb 10, 2012)

Charlie seems to be very into Kepler these days. He says the ball is rolling :

"Reports coming in from the far east say that those high up in the priority list started getting Kepler cards in various guises early this week, possibly late last. The number of sightings from sources that SemiAccurate trusts has been going up almost exponentially over the past few days, and will probably keep doing so for a bit."

He concludes:

"If things go as normal, it takes 4-6 weeks from AIB sampling to cards on the shelves. This would mean late March or early April, just like we have been saying for weeks."


----------



## arnoo1 (Feb 10, 2012)

Seriously, 1536 shaders? That's 3 times more than Fermi.


----------



## 1c3d0g (Feb 10, 2012)

I have a _feeling_ that NVIDIA will kill the competition this time around...Kepler sounds like a new Voodoo2, if y'all still remember that...


----------



## TheMailMan78 (Feb 10, 2012)

This is odd. I go Nvidia and Nvidia starts to look like AMD. lol I can't win.


Listen, if NVIDIA fails with the 700 series I take full responsibility. It's my fault for going green.


----------



## Benetanegia (Feb 10, 2012)

arnoo1 said:


> Seriously, 1536 shaders? That's 3 times more than Fermi.



Not really, because they dropped hot-clocks. From the perspective of how many ops/cycle the chip can do, GF100/110 could be seen as a 1024 SP part, and GF104/114 as a 768 SP part. So it's a 50% improvement over GF100 and 100% over GF104. 
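The hot-clock normalization above can be spelled out as a toy model (my own sketch, not NVIDIA's terminology): a hot-clocked Fermi SP runs at 2x the core clock, so it does the work of two core-clocked Kepler-style SPs.

```python
# Sketch of "effective SP count" at core clock: a hot-clocked shader
# (2x core clock) counts double compared to a core-clocked one.
def effective_sps(physical_sps, hot_clocked):
    """Shader work per core clock, in core-clocked-SP equivalents."""
    return physical_sps * (2 if hot_clocked else 1)

gf100_110 = effective_sps(512, hot_clocked=True)    # 1024, as the post says
gf104_114 = effective_sps(384, hot_clocked=True)    # 768
gk104     = effective_sps(1536, hot_clocked=False)  # 1536, no hot-clocks

print(gk104 / gf100_110)  # 1.5 -> +50% over GF100/110
print(gk104 / gf104_114)  # 2.0 -> +100% over GF104/114
```

Under that convention the rumored GK104 is exactly a doubled GF104, which is the point the post makes.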

@ thread

What I mention to arnoo1 is normal and has happened in pretty much every generation. The "failure" to release fully enabled chips in the GTX 400 line made it look as if it didn't happen, at least performance-wise, since Fermi CUDA cores are not as fast or efficient (clock for clock) as those on previous NVIDIA cards. But in the end, if you look at the GTX 580, it's pretty damn close to being 100% faster than the GTX 280/285. And the GTX 560 Ti is close to 50% faster. This is what NVIDIA tried with GF100 and GF104, but only ultimately achieved with GF110 and GF114.

Look here (I couldn't find a direct comparison since W1zz stopped benching DX10 cards):

[performance charts] On the left, the GTX 460 is similar to the GTX 285. On the right, the GTX 580 is almost twice as fast as the GTX 460.

I don't know why people (all over the internet) are so reluctant to believe a similar thing could happen this time around. Only this time they won't have to disable parts in the first place. It's not a crazy thought at all. At least IMO.


----------



## m1dg3t (Feb 10, 2012)

Info is starting to get better; still waiting. Price wars should be as fun as waiting for release. Hopefully something fits my budget so I can upgrade.

Has Nvidia done a "If we can't beat 'em, join 'em" thing?


----------



## Benetanegia (Feb 10, 2012)

jamsbong said:


> Confirmed Nvidia is doing an ATI!





TheMailMan78 said:


> This is odd. I go Nvidia and Nvidia starts to look like AMD. lol I can't win.





m1dg3t said:


> Has Nvidia done a "If we can't beat 'em, join 'em" thing?



It's kind of an irrelevant point to discuss, but why do so many people say something like this? I just can't make any sense of it.

AMD

- Gone with scalar shaders (which Nvidia has been doing for 6+ years)
- Gone modular with CU (which Nvidia has been doing since Fermi, 2 years now)
- GPGPU friendly architecture and caches (Fermi)

Nvidia

- Dropped hot-clocks

And NVIDIA is doing what, AMD? Come on, they dropped hot-clocks, that's it, arguably because slower cores (yet smaller and in 2x the number) are more area/wattage efficient at 28 nm, which did not necessarily apply to 40 nm, 65 nm, 55 nm...

The only interesting thing is that both GPU vendors have converged on a very similar architecture now that both pursue the same goals and are constrained by the same physical limits.

EDIT: ^^ And that's why I love tech BTW, and especially GPUs. It's pure engineering: solving a specific "problem" (rendering) in the best way they can. Watching 2 different vendors solve it so differently, but with such similar results, has been very fun; maybe in the coming years it will not be as fun as they converge more and more. Kind of like how CPUs are mostly equal and there's a lot less to discuss (Bulldozer was a fresh attempt, though, yet it failed). I love tech anyway.


----------



## xenocide (Feb 10, 2012)

Benetanegia said:


> It's kind of an irrelevant point to discuss, but why do so many people say something like this? I just can't make any sense of it.
> 
> AMD
> 
> ...



People just look at the number of shaders and go "zomg copying AMD!!?#"


----------



## Benetanegia (Feb 10, 2012)

xenocide said:


> People just look at the number of shaders and go "zomg copying AMD!!?#"



Yes, I guess, but it's so obvious that NVIDIA would eventually go to 4-digit numbers, if not this gen (i.e. a 1024 SP "Fermi"), then in the next one at least. Anyway, if AMD had kept using VLIW we'd probably be talking about 3000 SPs, so again they wouldn't "look the same". So I stand by my point. If anything, AMD did an "Nvidia". Yet it's not true, as I said.


----------



## TheMailMan78 (Feb 10, 2012)

For some reason I just think NVIDIA is gonna bring a bag of fail next round, for no other reason than that I bought one.


----------



## m1dg3t (Feb 10, 2012)

I'm curious why they went with a 256-bit bus as opposed to 384? I thought with large amounts of GDDR5 you want max "throughput"? Maybe because it's the "budget" board? Anyways, I'm still waiting


----------



## MxPhenom 216 (Feb 10, 2012)

TheMailMan78 said:


> For some reason I just think NVIDIA is gonna bring a bag of fail next round, for no other reason than that I bought one.



I just hope their geometry and tessellation performance is still very good, like it was with Fermi.


----------



## MxPhenom 216 (Feb 10, 2012)

m1dg3t said:


> I'm curious why they went with a 256-bit bus as opposed to 384? I thought with large amounts of GDDR5 you want max "throughput"? Maybe because it's the "budget" board? Anyways, I'm still waiting



Well, it's because of the memory size: 2 GB. If it were 3 GB then 384-bit would work.


----------



## m1dg3t (Feb 10, 2012)

nvidiaintelftw said:


> Well, it's because of the memory size: 2 GB. If it were 3 GB then 384-bit would work.



Wasn't NVIDIA first to use 384-bit back in the day? Then it was with only 1 GB IIRC


----------



## MxPhenom 216 (Feb 10, 2012)

m1dg3t said:


> Wasn't NVIDIA first to use 384-bit back in the day? Then it was with only 1 GB IIRC



No? Well, the 400 series had like 1280 MB of RAM, and 1536 MB (I think) on the 580, and it's 384-bit.


----------



## creepingdeath (Feb 10, 2012)

radarblade said:


> Seems like Nvidia's pretty prepped up to wipe AMD off the slate! But what would be the TDP on these things? Preferably lesser than the earlier 480 and 580 heaters.



Uh, with these specifications it definitely will NOT beat Tahiti. There's always price though, right? Hot-clocking is gone, hence the shader units are substantially weaker than those found in Fermi.

GK110 is the one we want, and since it has just taped out, it will not be released until Q3. Sorry, green fans


----------



## creepingdeath (Feb 10, 2012)

1c3d0g said:


> I have a _feeling_ that NVIDIA will kill the competition this time around...Kepler sounds like a new Voodoo2, if y'all still remember that...



I have a feeling that the specs are black and white.   GK110 will be the one to wait for and its not coming till Q3.


----------



## Benetanegia (Feb 10, 2012)

creepingdeath said:


> Uh, with these specifications it definitely will NOT beat Tahiti.   There's always price though right?    Hotclocking is gone, hence the shader units are substantially weaker than those found in Fermi.



Yes, shaders are going to be exactly half as powerful as in Fermi. Hence this chip still has 50% more effective shader power than the GTX 580. Still not enough info to say one way or another.

But since we are making absurd claims with no possible way to back them up: this chip WILL beat Tahiti, and by a good margin too.


----------



## creepingdeath (Feb 10, 2012)

Benetanegia said:


> Yes shaders are going to be exactly half as powerful as in Fermi. Hence this chip still has 50% more than the GTX580. Still not enough info to say one way or another.
> 
> But since we are at making absurd claims with no posible way to back up: this chip WILL beat Tahiti, and by a good margin too.



I'll call you and raise you by stating, "this will be NVIDIA's 5000 FX all over again" 

Just kidding with that. In all seriousness, the specs are not impressive. It may come close to the 580 at a lower cost and better efficiency, but based on specs it is not a Tahiti killer. Gotta wait for GK110, which just taped out. That's the one I'll wait for; I'll be doing another round of upgrades around the September timeframe anyway.


----------



## m1dg3t (Feb 10, 2012)

creepingdeath said:


> I'll call you and raise you by stating, "this will be nvidia's 5000 FX all over again "



I hope not! Those were shitty times.


----------



## Prima.Vera (Feb 10, 2012)

That's nice, but how about something similar to AMD's *MLAA* straight from the driver??? I play a lot of older games that don't support AA, and with MLAA it's a delight.


----------



## m1dg3t (Feb 10, 2012)

Prima.Vera said:


> That's nice, but how about something similar to AMD's *MLAA* straight from the driver??? I play a lot of older games that don't support AA, and with MLAA it's a delight.



What, what is MLAA? It's useless, ATi has no innovative features like that!


----------



## Steevo (Feb 10, 2012)

7970

3.79 TFLOPS Single Precision compute power
947 GFLOPS Double Precision compute power  


Twice as much math processing power with only a 33% increase in "core" count and 25 MHz less core speed?

If these are official numbers from the green camp I feel sorry for their PR department making efficiency statements.


----------



## Crap Daddy (Feb 10, 2012)

Steevo said:


> 7970
> 
> 3.79 TFLOPS Single Precision compute power
> 947 GFLOPS Double Precision compute power
> ...



Let's remember the 6970 has 2.7 TFLOPS while the GTX 580 has something like 1.5, so if we are talking about gaming benchmarks I don't think that's a factor.


----------



## blibba (Feb 10, 2012)

m1dg3t said:


> Wasn't NVIDIA first to use 384-bit back in the day? Then it was with only 1 GB IIRC



The first GPU to use a 384-bit bus was the G80, as used in the 8800 GTX and 8800 Ultra. It didn't have 1 GB of memory though, because of that 384-bit bus...

The GTX 550 Ti was the first (and so far only) card to break the evenly-filled memory rule: it has 1 GB through a 192-bit bus.
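The "evenly filled" rule blibba describes can be sketched in a few lines (chip widths and densities here are my illustration of the rule, not from the post): GDDR5 chips are 32 bits wide, so with a single chip density the total memory is fixed at (bus width / 32) x density.

```python
# "Evenly filled" memory rule: one chip density per 32-bit slice of
# the bus fixes the total. Non-step totals need mixed densities.
def even_fill_mb(bus_bits, chip_density_mb):
    chips = bus_bits // 32          # one 32-bit GDDR5 chip per slice
    return chips * chip_density_mb

print(even_fill_mb(384, 64))    # 768  -> e.g. the 8800 GTX
print(even_fill_mb(384, 128))   # 1536 -> e.g. the GTX 580
print(even_fill_mb(192, 128))   # 768  -> 1 GB is unreachable evenly...
print(4 * 128 + 2 * 256)        # 1024 -> ...so mix densities, as on the 1 GB/192-bit card
```

That mixed-density trick is exactly the rule-break mentioned above: some chips carry twice the density of the others, so parts of the bus run out of memory to address before the rest.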


----------



## Benetanegia (Feb 10, 2012)

creepingdeath said:


> It may come close to the 580 at a lower cost and better efficiency, but based on specs it is not a tahiti killer.   Gotta wait for GK110 which just taped out.   Thats the one i'll wait for, i'll be doing another round of upgrades around the September timeframe anyway.



Funny, because based on specs this is not only a Tahiti killer, but a Tahiti killer, raper and shitting-on-its-tomb kind of killer, if that makes any sense. Of course that's only based on the specs, so it may not materialize as such.

Be honest and say that because it is 256-bit, YOU think it's not going to be faster than the GTX 580 or something. Because based on the specs, all of them, the card has 2x the crunching power of the GTX 580 (2.9 vs 1.5 TFLOPS), twice as much texture power (128 vs 64 TMUs) and 33% more memory, just to name a few.

I wouldn't even pay too much attention to the claim that GK110 just taped out, BTW. "They" say that GK100 was canned, but there's absolutely no proof of that. "They" never knew when GK104 taped out either. Plus, in 2010 by this time of the year there was also a chip called GF110 in the works, and based on when it was released (October 2010), its tape-out had to happen around Feb/March too. It's possible that GK100 still exists and will be released soon after GK104, which is what many rumors say. Rumors from sources that turned out to be correct about GK104's specs several months ago, if we are to believe these specs.



Steevo said:


> 7970
> 
> 3.79 TFLOPS Single Precision compute power
> 947 GFLOPS Double Precision compute power
> ...



That's double precision* only, and a huge improvement over GF104; both are capped because they are the mainstream parts. GF104 was capped at 1/12 the SP rate. GK104 is 1/6, which is a nice improvement for the performance part (for example, in previous generations AMD didn't even support DP on anything but high-end). The high-end chip will feature a 1/2 ratio, and if Tahiti's number is really true (I thought Tahiti could do 1/2 DP), it will most definitely decimate it at DP performance.

*On SP, 2.9 is definitely not half of 3.79, and like Crap Daddy said, the GTX 580 had around 1.5 TFLOPS. Claimed theoretical GFLOPS means very little, except for comparing two chips using identical architectures. Obviously GK104 is not going to be 2x as fast as the GTX 580 as the GFLOPS numbers suggest, or the TMUs, but it will most definitely beat it by a good amount. How much? Look to previous gens and compare the GTX 560 Ti to the GTX 285. There's your most probable answer.
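The DP caps being argued about can be laid out as a table (a hedged sketch: the GK104 1/6 ratio is the rumor above; the GTX 580's 1/8 GeForce cap and Tahiti's 1/4 rate are my understanding, not from the article):

```python
# DP throughput as a capped fraction of SP throughput, using the
# ratios discussed in the thread (GK104's 1/6 is a rumor).
cards = {
    "GTX 580":       (1581, 1 / 8),   # (SP GFLOPS, DP:SP ratio)
    "GK104 (rumor)": (2918, 1 / 6),
    "HD 7970":       (3789, 1 / 4),
}
dp = {name: sp * ratio for name, (sp, ratio) in cards.items()}
for name, gflops in dp.items():
    print(f"{name}: {gflops:.0f} GFLOPS DP")
# HD 7970 -> ~947, matching Steevo's number; GK104 -> ~486, matching the spec.
```

So the "twice the DP" gap between the 7970 and GK104 is a product of the cap ratios as much as of raw shader power.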


----------



## creepingdeath (Feb 10, 2012)

Benetanegia said:


> Funny, because based on specs this is not only a Tahiti killer, but a Tahiti killer, raper and shitting-on-its-tomb kind of killer, if that makes any sense. Of course that's only based on the specs, so it may not materialize as such.
> 
> Be honest and say that because it is 256-bit, YOU think it's not going to be faster than the GTX 580 or something. Because based on the specs, all of them, the card has 2x the crunching power of the GTX 580 (2.9 vs 1.5 TFLOPS), twice as much texture power (128 vs 64 TMUs) and 33% more memory, just to name a few.
> 
> ...



LOL

The fanboy is strong with this one.  If GK104 cures cancer and is 20x faster than 7970 great! I'll buy one. 

Unfortunately, the reality is that the shader architecture of GK104 is vastly different from that of Fermi: it takes 3 times the number of Kepler shader units to equal a Fermi shader unit, because shader clocks will be equal to raster clocks on Kepler. Hot-clocking is gone; that is the fallacy of your argument that you stupidly don't realize because you can't see past your fanboy eyeglasses. Also, just so you know, TFLOPS is not a meaningful measure of performance.

But hey whatever helps you sleep at night!!   Hopefully, Jen-Hsun Huang will give you a hug before you go to bed at night.


----------



## Benetanegia (Feb 10, 2012)

creepingdeath said:


> *it takes 3 times* the number of Kepler shader units to equal a Fermi Shader unit. because shader clocks will be equal to raster clocks on the Kepler.



Whaaaaaaat?? Hot clocks are 2x the core clock, not 3x, so I can't even start thinking why you'd think you need 3 times as many shaders. I don't even know where you are pulling that claim from, but it doesn't smell good.

You can call me a fanboy because I'm stating the facts (as if I cared), but at least *make up* an argument that doesn't sound so stupid. At least I didn't make an account just to crap on a forum with my only 4 posts.



> GK110 is the one we want and since it has just taped out, it will not be released until Q3.  Sorry green fans



Pff, I don't know why I even cared to respond to you. I guess I didn't pay attention the first time. ^^ Freudian slip, huh? 

Hey, you got me for 3 posts. Is that considered a success in Trolland? Congrats anyway.


----------



## General Lee (Feb 10, 2012)

Indeed, it's pointless to look at FLOPS if we don't know the efficiency of the architecture. Radeons had far higher theoretical numbers, but the efficiency was far lower than Fermi's.

Personally, the 256-bit memory bus is enough for me to say this won't beat Tahiti; at best it will equal it, but given NVIDIA's usually slower memory clocks I find it unlikely. A GTX 580 replacement is most likely IMO, and it could be really good at that. NVIDIA's best cards have usually been the high midrange, like the 8800 GT or GTX 460.


----------



## Benetanegia (Feb 10, 2012)

General Lee said:


> Indeed, it's pointless to look at FLOPS if we don't know the efficiency of the architecture. Radeons had far higher theoretical numbers, but the efficiency was far lower than Fermi's.



FLOPS are not linearly and directly related to performance, but they are not meaningless either. Like I said in a previous post, NVIDIA abandoned hot-clocks and put in 2x as many SPs as in Fermi (GF104). They could have released a GK104 that consisted of 768 SPs while still using hot-clocks, but they did what they did instead, obviously because it's *better*, or they wouldn't have changed it in the first place. It's safe to assume similar efficiency, since the schedulers can still issue ops in the exact same way as in Fermi; but instead of issuing twice per clock (because shaders run at twice the clock), they will issue to 2 different SIMDs, because there are twice as many SIMDs. That is, instead of S1-S2-S3-S1-S2-S3 they will do S1-S2-...-S6.
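The scheduling point above can be shown with a toy model (an illustration of the argument, not NVIDIA's real scheduler): per core clock, a hot-clocked SM issues to each of 3 SIMDs twice (the shaders tick at 2x), while a Kepler-style SM issues to each of 6 SIMDs once. The work per core clock is identical, and both match the rumored 96 SPs per SM.

```python
# Toy model: issue slots per core clock for the two SM layouts.
SIMD_WIDTH = 16  # both vendors build SMs from 16-wide SIMD arrays

def ops_per_core_clock(simds, issues_per_simd):
    return simds * issues_per_simd * SIMD_WIDTH

fermi_style = ops_per_core_clock(simds=3, issues_per_simd=2)   # S1-S2-S3-S1-S2-S3
kepler_style = ops_per_core_clock(simds=6, issues_per_simd=1)  # S1-S2-...-S6
print(fermi_style, kepler_style)  # 96 96
```

Same throughput per core clock either way, which is why the post argues efficiency should carry over from Fermi.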



> Personally, 256-bit memory bus is enough to say this won't beat Tahiti, at best it will equal it, but given Nvidia's usually slower memory clocks I find it unlikely. A GTX 580 replacement is most likely IMO, and it could be really good at that. Nvidia's best cards have usually been the high midrange like 8800gt or GTX 460.



NVIDIA also did much better with lower BW*. Memory bandwidth is never a problem; really, how many times do we need to hear the same thing? The HD 5770 comes to mind. Really: AMD, NVIDIA, no one will ever release a card that is severely held back by memory bandwidth. I can tell you something: they would never put in so many SPs and 128 TMUs only to find them severely held back by the bus.

The GTX 460 was a cut-down version BTW, and it was cut down on purpose so that it didn't come close to the GTX 470, completely nullifying it. The full chip only came with the GTX 560 Ti, and this one is a good >30% faster than the previous-generation (real gen) GTX 285. Based on the specs, the G*104 codename and the market segment, it's absolutely clear that GK104 will handily beat the GTX 580 (just like GF104 >>>>> GT200), at least up until 1920x1200, and will most probably beat Tahiti too.

* The GTX 560 Ti has 128 GB/s and the HD 6950 160 GB/s; that's a 25% difference and the same performance.


----------



## General Lee (Feb 11, 2012)

I don't care to argue about something that's pure conjecture at this point, but if you're really expecting GK104 to have 50% more shader performance than the 580, you're in for a disappointment.

There are always people who expect 2x performance increases when a new gen arrives, and they eventually get disappointed when the real product comes along. If GK104 is smaller than Tahiti in die size, I find it very unlikely it'll manage to beat it in performance. IF Kepler really has a new arch it might skew things, but in past gens NVIDIA has had far bigger dies fighting AMD's smaller chips. I doubt it'll change much now. That's all I'll comment on this, since we don't even know if the news piece has a word of truth in it.


----------



## Benetanegia (Feb 11, 2012)

General Lee said:


> If GK104 is smaller than Tahiti in die size, I find it very unlikely it'll manage to beat it in performance.



Tahiti has 4.3 billion transistors. GF104 had 1.9 billion. They could have mirrored/doubled up GF104. The GTX 560 Ti in SLI, even with (oftentimes bad) SLI scaling, handily beats both the GTX 580 and HD 7970. I think it's even faster than the GTX 590/HD 6990.

It's very, very clear from the specs that this is 2x GF104, except for the memory bus: twice as many GPCs, twice as many TMUs, twice as many SPs if you think of GF104 as a 768 SP part running at core clock... You may think it unlikely to beat it; I think it's a given. It's not about hopes and disappointment: if anyone really believes that NVIDIA will release a chip with 100% more transistors than GF104 and 50% more transistors than GF110 without easily beating it... they are fucking crazy, man. That'd mean 50% of the transistors going down the drain, or 100% more transistors failing to improve performance by more than a mere 30%. That is not gonna happen, I tell you.



> IF Kepler really has a new arch it might skew things, but in the past gens Nvidia has had far bigger dies fighting AMD's smaller chips. I doubt it'll change much now.



In the past NVIDIA was using a lot of space for compute*. AMD didn't. Now AMD does with GCN, and AMD has a far bigger die, as in twice as big, with Tahiti being only 30% faster than its predecessor. AMD's gaming efficiency went down dramatically, and that's a fact anyone can see. IF NVIDIA's efficiency went up even a little bit, that's all they need for an easy win.

*Yet, based on number of transistors and performance Cayman and GF110 were actually very close in efficiency.


----------



## jamsbong (Feb 11, 2012)

First of all, the name GK104, the 256-bit memory and the small 340 mm² die size all indicate that it will be a mid-high-end card. NVIDIA should have something better; when, I don't know, but I'm sure it won't be far away.
A realistic expectation is that it will be faster than Cayman and possibly on par with the GTX 580.

The CUDA cores from the 8800 through Fermi were designed to crunch numbers efficiently, but at the cost of large die area and power consumption, which is why NVIDIA's chips are always beastly large and consume a lot of power. I suspect the switch to an ATI-like CU is motivated by cost reduction (a smaller die) and a better TFLOP/watt rating.


----------



## Benetanegia (Feb 11, 2012)

jamsbong said:


> First of all, the name GK104, the 256-bit memory and the small 340 mm² die size all indicate that it will be a mid-high-end card. NVIDIA should have something better; when, I don't know, but I'm sure it won't be far away.
> A realistic expectation is that it will be faster than Cayman and possibly on par with the GTX 580.



Remember this is not mid-range as in pre-Fermi times, when they had 3 chips: high end, mid-range (1/2 the high end) and low end (1/4). With Fermi they introduced the performance part, which is 3/4 of the high end (AMD did the same with Barts and now again with Pitcairn). GK104 is such a part. In Fermi that part was GF104, and it is around 40% faster than the GTX 285, while GF110 is 80% faster than the GTX 285.

NVIDIA has always aimed at 2x the speed of the previous gen, which is indicated by the doubling up of SPs, TMUs, etc. Depending on how successful that doubling attempt was, it has yielded a 60-80% increase in performance gen to gen. It's really safe to assume a similar increase this time around. So let's say the high-end Kepler is *only* 50% faster (the low end of the spectrum): that means that if the GTX 580 produces 100 fps, GK100/110 (whatever) would produce 150, and GK104, being 3/4 of the high-end chip, would produce 112 fps. 12% over the GTX 580, pretty damn close to Tahiti.

This is for the low end of the spectrum. Do the math for the case where, just like GTX285 -> GTX580, Nvidia manages an 80% increase again.
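That back-of-the-envelope estimate can be written out as a tiny Python sketch (the 50% gain and the 3/4 ratio are assumptions from this post, not measured numbers):

```python
# Sketch of the scaling argument above; all figures are assumptions
# from the post, not benchmarks.
gtx580_fps = 100.0      # baseline: GTX 580 normalized to 100 fps
highend_gain = 1.50     # assume high-end Kepler is "only" 50% faster
gk104_fraction = 0.75   # GK104 assumed to deliver 3/4 of the high-end chip

gk100_fps = gtx580_fps * highend_gain   # hypothetical GK100/110
gk104_fps = gk100_fps * gk104_fraction  # hypothetical GK104

print(f"GK100: {gk100_fps:.1f} fps, GK104: {gk104_fps:.1f} fps, "
      f"+{(gk104_fps / gtx580_fps - 1) * 100:.1f}% over GTX 580")
# GK100: 150.0 fps, GK104: 112.5 fps, +12.5% over GTX 580
```

With an 80% high-end gain instead, the same arithmetic puts the hypothetical GK104 at 135% of the GTX580.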


----------



## LAN_deRf_HA (Feb 11, 2012)

How big of a difference does this power-of-two stuff really make? Like, if the 7970 had a 512-bit bus and much slower RAM to match the same bandwidth it has now, would it actually perform better?


----------



## Benetanegia (Feb 11, 2012)

LAN_deRf_HA said:


> How big of a difference does this power-of-two stuff really make? Like, if the 7970 had a 512-bit bus and much slower RAM to match the same bandwidth it has now, would it actually perform better?



It makes no real difference, and cards are still made of lots of small power-of-two chunks. The 384-bit memory controller is really 6 x 64-bit memory controllers, each controlling one memory module, so there's your power of two. Shaders in both AMD and Nvidia architectures are composed of 16-shader-wide arrays (SIMDs), which is what really does the hard, fundamental work, so power of two again. TMUs and ROPs are typically clustered in groups of 4 or 8... but really, it makes no difference. It's like that for convenience, until I hear otherwise. Rendering typically works on quads of pixels, 2x2 or 4x4, so that's why they tend to build GPUs that way. Other than that, there's no reason that I know of.
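To make the bookkeeping concrete, here is the rumored GK104 memory setup from the article worked through in Python (the 4 x 64-bit split is inferred from the 256-bit figure; Tahiti's 384-bit bus is the same idea with 6 controllers):

```python
# Bandwidth bookkeeping for the rumored GK104 memory interface
# (256-bit bus, 1250 MHz actual / 5.0 GHz effective GDDR5).
controllers = 4                    # inferred: 4 x 64-bit memory controllers
bus_width_bits = controllers * 64  # = 256-bit interface
effective_clock_hz = 5.0e9         # GDDR5 transfers 4x per actual clock

bytes_per_second = bus_width_bits // 8 * effective_clock_hz
print(bus_width_bits, bytes_per_second / 1e9)  # 256 160.0
```

The 160 GB/s result matches the bandwidth figure in the article's spec list.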


----------



## jamsbong (Feb 11, 2012)

Benetanegia said:


> GF104 and it is around 40% faster than GTX285, while GF110 is 80% faster than GTX285.



I think real-world tests only show that GF104 = GTX285 and that the GTX580 is 52% faster than the GTX285.
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html

Based on REAL WORLD values, I expect the new Kepler GK104 to be, on average, on par with the GTX580, which is what I said in my previous two posts. So, slower than Tahiti.


----------



## creepingdeath (Feb 11, 2012)

Benetanegia, does Jen-Hsun Huang give you handjobs for every post you make? The fanboydom has crossed the ridiculous threshold. Just understand that the performance you claim isn't possible with the architecture and specs given, especially with hot clocks gone. GK110 also isn't released until Q3; hopefully you won't lose too much sleep over that. So GK104 will be sold as a high-end part, because I don't see nvidia releasing a mid-range GK104 card and not having a corresponding high-end card (GK110) until Q3. GK104 may come close to beating Tahiti, but it's definitely not a Tahiti killer. Charlie from SA commented that it is so far 10-20% slower than Tahiti in non-PhysX titles. And before you whine about Charlie: all of his leaks so far have been accurate, ALL of his Fermi leaks were accurate. Remember, GK110 just taped out, so it is entering the ES and validation phase, which always takes 6-8 months.

Now I expect you'll go on about how GK110 is being released next week (even though it just taped out and hasn't entered the validation phase yet). Like I said, don't lose sleep, don't get too hurt over this.

Now, hopefully the GK104 comes close to the GTX 580 at a much lower price point; I could use a replacement for my old 570.


----------



## Crap Daddy (Feb 11, 2012)

Here's another source confirming these specs:

http://www.brightsideofnews.com/new...k1042c-geforce-gtx-670680-specs-leak-out.aspx

The interesting part, and I've seen this speculated in different forums, is that GK104 will be the GTX680. If this is confirmed, and based on the specs, which seem to be right, I am pretty sure this card will be better than the 7970. I can't see NV releasing a GTX *80 that is slower than AMD's flagship.

I remember a post in another forum from a guy who was invited to a CUDA event at the end of January; he reported back that he saw a CUDA demo running on an unspecified 28nm part which was, on average, 28% faster than a GTX580. Based on these specs, it is entirely possible that this is close to the performance of GK104.


----------



## Benetanegia (Feb 11, 2012)

jamsbong said:


> I think real-world tests only show that GF104 = GTX285 and that the GTX580 is 52% faster than the GTX285.
> http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html
> 
> Based on REAL WORLD values, I expect the new Kepler GK104 to be, on average, on par with the GTX580, which is what I said in my previous two posts. So, slower than Tahiti.



There's no full GF104 there, only the severely capped (in both shaders and clocks) GTX460. The full GF104 would be the GTX560 Ti, like I said. Also, even there the GTX580 is far more than 50% faster: at 1920x1200 the GTX285 scores 61% while the GTX580 scores 100%, so 100/61 = 64% faster.

But the above is with release drivers. If you look at a more modern review, you will see that the GTX580 is around 80% faster, and the GTX560 Ti (full GF104/GF114) is around 40% faster.
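Since TPU's summary charts normalize to the fastest card, the conversion from chart percentages to "X% faster" is worth spelling out; a minimal sketch:

```python
# TPU performance-summary charts normalize to the fastest card, so
# "GTX285 at 61%, GTX580 at 100%" means the 580 is 100/61 - 1 ≈ 64%
# faster, not 100 - 61 = 39%.
gtx285_score = 61.0
gtx580_score = 100.0

speedup = gtx580_score / gtx285_score - 1
print(f"GTX580 is {speedup * 100:.0f}% faster than GTX285")
```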

GK104 will handily beat the GTX580, just like GF104 handily beats the GTX285, based on REAL WORLD values and specs.

Nvidia (or anybody else) wouldn't put in 100% more SPs, 100% more TMUs, 100% more geometry, 100% more tessellators and ultimately a 100% bigger die just to end up only 30% faster than the predecessor (GTX560 Ti). It's not going to happen, no matter how many times you repeat it to yourself. The mid-range part used to be just as fast as its predecessor when mid-range meant 1/2 of high-end; now that the upper mid-range or performance segment means 3/4 of high-end, the performance chip will always be faster.

You say we don't know the efficiency of the shaders, but right next to that you claim (indirectly) that Kepler's efficiency, not only in shaders but also in TMUs, geometry and literally everything, is 50% of what it is in GF104. It's absurd. We don't know the efficiency, right, so for all we know the efficiency might be better too; we could just as easily say it will be 3x as fast by assuming 50% better efficiency, and that would NOT be more outrageous than your claim that it MUST be only as fast as the GTX580 while having 2x the specs (hence 50% efficiency).

I'm not claiming anything out of this world. GK104 has almost twice the Gflops and more than twice the texel fillrate of GF110, and its geometry and pretty much everything else is 25% faster than GF110, because the clocks are 25% higher and it has the same number of GPCs and SMs (tessellators). And with such a massive difference in specs, a difference that suggests anything from 50% to 150% faster, I'm just saying that it will be 25% faster. It's not an outrageous claim; it's a very, very conservative guesstimate, and hopes/fanboyism have nothing to do with it (this is for the troll). Neither does what happened in the past; it is just spec comparison. And then the evidence of the past just corroborates the plausibility of my claim. Stay tuned, because my crystal ball says I'm being very conservative, but 25% over the GTX580 is what I'll claim for the time being.


----------



## jamsbong (Feb 11, 2012)

@Benetanegia OK, it is very obvious that you're a supa-dupa Nvidia fanboy. That is fine... how else could Nvidia stay afloat without support from the likes of yourself?

Without getting into a fight over speculative unknown future performance of Kepler, lets get some facts straight:
GF104 = GTX460
GF114 = GTX560 TI

I suggest you do some PROPER homework before spilling out lots of nonsense.


----------



## Benetanegia (Feb 11, 2012)

jamsbong said:


> @Benetanegia OK, It is very obvious that you're a supa-dupa Nvidia fanboy. That is fine... How else Nvidia can stay afloat without support such as one like yourself.
> 
> Without getting into a fight over speculative unknown future performance of Kepler, lets get some facts straight:
> GF104 = GTX460
> ...



My friend, do your own homework. GF114 and GF104 are both the exact same chip; GF104 had disabled parts just like GF100 had disabled parts. Maybe you would look more intelligent and help your cause if you spent more time checking your facts and less time calling people a fanboy.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_560_Ti/



> Getting into the fine print of NVIDIA's offer, the GeForce GTX 560 Ti is based on NVIDIA's new GF114 chip. As far as its specifications and transistor-count go, the GeForce GTX 560 Ti is identical to the GF104 on which GTX 460 was based, except that it has all 384 of the CUDA cores physically present enabled



http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_460_1_GB/



> NVIDIA's GeForce Fermi (GF) 104 GPU comes with 384 shaders (CUDA cores) in the silicon but NVIDIA has disabled 48 of them to reach their intended performance targets and to improve GPU harvesting.









I'm EXTREMELY curious as to how you are going to (try to) spin this in your favor.


----------



## Steevo (Feb 11, 2012)

Crap Daddy said:


> Let's remember the 6970 has 2.7 TFlops while GTX580 has something like 1.5 so if we are talking about gaming benchmarks I don't think that's a factor.



That would apply if we were comparing VLIW to CUDA; however, we are comparing two close-to-identical architectures.


----------



## Prima.Vera (Feb 11, 2012)

m1dg3t said:


> What, what is MLAA? It's useless, ATi has no innovative feature's like that!



I like nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's MLAA is. And there are so many games out there that don't have any AA support.

Is it that difficult, nvidia, to implement FXAA in the drivers as well? :shadedshu: :shadedshu:


----------



## Benetanegia (Feb 11, 2012)

Prima.Vera said:


> I like nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's MLAA is. And there are so many games out there that don't have any AA support.
> 
> Is it that difficult, nvidia, to implement FXAA in the drivers as well? :shadedshu: :shadedshu:



Nvidia "liberated" FXAA for anyone to use it, so it's open and afaik there's many FXAA injectors out there.

I don't know if they work with all games, because I wouldn't use it and don't really care about them. Personally, I find that both FXAA and MLAA degrade visual quality rather than enhance it (textures mainly, but also shaders).


----------



## jamsbong (Feb 12, 2012)

@Benetanegia OK, NV fanboy. GF114 is an update of GF104. Like you've described, GF104 has some bits fused off, whereas GF114 is the full-blown chip.

So when you said "GF104 handily beats GTX285", what you really mean is that GF114 beats the GTX285. I am using a GF114 in my computer, and it is a good card. However, I'll never call my card a GF104.

Since you can't even get the numbers right (a simple difference between 1 and 0), what makes your Kepler vs. Tahiti speculation believable/convincing?


----------



## crazyeyesreaper (Feb 12, 2012)

Nvidia's high end will be around 45-55% faster than the GTX 580.

Nvidia will be faster, but it's going to be the exact same situation we have seen time and again:


HD 6970 at launch was around $370

GTX 580 at launch was around $500

$130 price difference

6970 to 7970 is around a 40% performance difference
GTX 580 to 680 is expected to be 45-55%

this means essentially the same difference we saw between the 6970 and GTX 580, aka 15%,

is about the same difference we will see between a GTX 680 and an HD 7970.

Nvidia will be faster by around 15% and charge a $100 premium for the performance difference.


----------



## Benetanegia (Feb 12, 2012)

jamsbong said:


> @Benetanegia OK NV fanboy.  GF114 is an update on the GF104. Like you've described, the GF104 is has some bits fused whereas GF114 is the full blown chip.
> 
> So when you said "GF104 handily beats GTX285", what you really means is GF114 beats GTX285. I am using GF114 in my computer and is a good card. However, I'll never call my card a GF104.
> 
> Since you can't even get a simple task of getting the numbers (a simple difference between 1 and 0) correctly, what makes your fantasy of Kepler vs Tahiti speculation believable/convicing?



What you don't get, insignificant offending boy, is that what matters is not what Nvidia actually released back then, but what they wanted to release, what *they aimed for*. They released the GTX480, but they wanted, designed and engineered for the GTX580. They failed, and we all know that story. Will they fail now? NO. Not according to the info everywhere (even Demerjian's). So the fact is that in the past Nvidia always aimed at an 80% performance increase, and in the last generation, on the second try, they nailed it. This time they aimed for the same (it's obvious from the specs) and they got it right the first time, plain and simple.

The specs are out, and the number of SPs is not 480 (comparatively) like it was with the GTX480, and the clocks are not 600-700 MHz. They didn't fail to meet their goals. The specs are 1536 SPs / 950 MHz, not 1440 SPs / 800 MHz or something like that. They got what they wanted: they aimed for a 100% improvement, minus x% for inefficiencies.

Your point has been wrong all along. The fact is they doubled the number of SPs per SM, from 48 up to 96. If the resulting 2.9 Tflops chip were going to be just as fast as the 1.5 Tflops chip, they would have designed it for 1.45 Tflops in the "old fashion"; I mean, they wouldn't have doubled the SP count and die size like that. They would have put in 768 "Fermi-like" SPs and been done with it.

Keep calling me a fanboy, please, one more time at least. I enjoy it, because you are so wrong, and you so desperately (and wrongly) think it makes your point any more valid.


----------



## xenocide (Feb 12, 2012)

crazyeyesreaper said:


> Nvidia's high end will be around 45-55% faster then the GTX 580
> 
> Nvidia will be faster but its going to be the exact same situation we have seen time and again
> 
> ...



You're also assuming Nvidia will go above AMD's pricing scheme. I think Nvidia is going to go under it. Let's face it: the saving grace for AMD cards is their price, but with the 7xxx series even that isn't amazing so far. Nvidia could easily drop the prices on their current offerings and match AMD while still turning a substantial profit. If Nvidia marketed a card with performance equivalent to the HD7950 for, say, $300, they would crush AMD in the first few weeks of sales. If they kept their flagship card around $500-600, with a lower model around $450, they would be in a position to just devour AMD's sales, or force AMD to restructure their entire pricing scheme, which would still take time and result in lost sales.

AMD has probably already lost out on the fact that their cards have been--and were more so at launch--in low supply. Nvidia has had several extra weeks, going on months, to stock up, so they will be able to launch a whole line of cards, in high supply, that could potentially offer better, or at least comparable, performance. This is all speculation, but from my perspective Nvidia seems to be in a very good spot.


----------



## crazyeyesreaper (Feb 12, 2012)

A GTX 680 priced around $600-670? They have the performance crown; it's not unheard of.

The US $ is worth less than it used to be, and high-end Nvidia cards have cost this much before (8800 GTX).

GTX 280 launched at $650

GTX 480 launched at $500


The 7970 is fast; the 680 will be faster; the 670 will be priced the same as the 7970 and offer the same performance.


there are no sales to lose or gain; this is the exact same situation as the

GTX 400 series vs. HD 5000 and GTX 500 vs. HD 6000, in terms of performance and price differences, but that's all I'm really at liberty to say.


just look back at previous launches; it's always the same. These last few years AMD launches first and Nvidia follows; Nvidia retakes the single-GPU crown but also costs more. That's just the way it goes.


look at the GTX 570 vs. HD 6970: the 6970 cost a tiny bit more in the beginning but also won the majority of benchmarks; in the end they were equal, prices averaged out.

GTX 670 vs. 7970 will be the same situation as 570 vs. 6970; Nvidia's 680 will take the performance crown.

Hell, a GTX 480 is only around 50% faster than a GTX 280 on average in most games,

and from the info I have, that appears to be the same difference between a GTX 580 and a 680: around a 50% average delta. Some games get as high as 80%, but the average is 45-55% in general performance.

The difference you see below between a 280 and a 580 is what we will see between a 580 and a 680


----------



## xenocide (Feb 12, 2012)

The price on their high-end cards is trending down, and has been since the 8800. Let's look at the launch prices of Nvidia's highest-performing single-GPU cards:

8800GTX - $650
9800GTX - $350
GTX280 - $650
GTX480 - $500
GTX580 - $500

Compared to AMD/ATi's launch prices:

HD3870 - $240
HD4870 - $300
HD5870 - $400
HD6970 - $370
HD7970 - $550

It seems like both companies are just working toward the $500-550 flagship price point. Aside from the 9800GTX, which dipped--because it was basically an 8800GTX on a lower stepping--they have continued a trend of dropping the price of their highest-performing single card (not counting post-lineup releases like the 285 and the 8800 Ultra, or dual-GPU cards). AMD/ATi, on the other hand, have steadily increased the price.

I'm thinking Nvidia will launch a GTX680 around $550, a GTX670 around $450, and a GTX660Ti around $350. The 670 will probably handily beat the HD7970, with the 660Ti coming close to it. Obviously this is just speculation, but I'm not just throwing numbers out; it would be in line with most of the rumors and the pricing structure Nvidia currently uses.


----------



## crazyeyesreaper (Feb 12, 2012)

Wrong, you're forgetting the 580 3GB, which is in fact Nvidia's highest-end single GPU. Try again.

The GTX 580 3GB was $600+ at launch.

You're also forgetting the 8800 GTX Ultra, which was $700 at launch.


You can discount them if you like, compared to the mainstream top single GPU, but in terms of SINGLE-GPU SKUs, Nvidia hasn't been dropping prices; what they have done, however, is offer better value at the typical top end:

8800 Ultra - $800+
9800GTX - $350 
GTX280 - $650
GTX480 - $500
GTX580 3GB - $600+

Nvidia's pricing is more consistent; AMD's prices have gone up because they can now compete with Nvidia on even footing most of the time.

Compared to AMD/ATi's launch prices:
HD2900 - $400 - could not compete with nvidia
HD3870 - $240 - far cheaper than the 2900 series that came before; performance was the same, still didn't compete well
HD4870 - $300 - more competitive, good price point, started gaining market share; still behind on performance, but good value
HD5870 - $400 - strategy change: launched first with DX11, no competition in the market, took a chunk of market share
HD6970 - $370 - fouled-up release date; Nvidia countered before AMD could release, so it came out on par with the GTX 480, but Nvidia retook the crown with the 580 1.5GB and 3GB models
HD7970 - $550 - again AMD releases first and offers more performance; Nvidia will counter with a faster chip that costs more. Common sense from the data presented over time makes this the logical outcome.

Nvidia will launch a GTX 680 that, like the 580 vs. the 6970 and the GTX 480 vs. the 5870 before it, costs more but is also faster. That's about what it comes down to. And you can say what you like about AMD's prices, but if they're so damn bad, why are most of the e-tailers people like to deal with sold out and scrambling to get more stock? What's more, AMD is getting more fab time than Nvidia currently.


----------



## jamsbong (Feb 12, 2012)

@Benetanegia I could continue this pointless argument with an NV fanboy, such as by pointing out all the mistakes you made in your last post alone, but it is time to move on.

If NV has created something fantastic (i.e. a card 50% faster than the GTX580) and it is stable enough to work in non-TWIMTBP titles, I won't mind buying one for myself. If not, then Tahiti. A simple wait-and-see situation. Cheers.


----------



## xenocide (Feb 12, 2012)

I actually explicitly said I wasn't counting cards like the GTX285 and the 8800 Ultra, because they technically came out after the initial lineup launched. They were usually just super-high-end offerings made to address performance deficits, or because they could. In the case of the GTX580 3GB, it was because super-high-end users needed more VRAM; this really only affected people using 3-display setups, so it was an incredibly niche product.

If we wanted to go crazy, there are all sorts of products released that are technically better; the HD5970 is to this day ridiculously powerful, and surprisingly cost-efficient. I also omitted the HD4890, because it launched months after the rest of the 4xxx series.

My listings are still accurate.  There are outliers, but for the most part all of those cards were the original high-end GPU of their corresponding series.


----------



## crazyeyesreaper (Feb 12, 2012)

doesn't change the fact that the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700,

but you're free to believe what you wish,


----------



## Crap Daddy (Feb 12, 2012)

crazyeyesreaper said:


> doesn't change the fact that the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700,
> 
> but you're free to believe what you wish,



What you call the "680" at $600+ will probably get another name. All we see now is GK104, which will probably be faster than the 7970 by a hair (but enough to claim it's the fastest card), with some disadvantages (lower memory bandwidth, and probably already clocked very high at stock to meet the target of beating the 7970), and some say this will be the 680. Now, this card will cost neither $600 nor $300 as was reported, so I would expect somewhere between $450-500. As reported, the same chip with some stuff disabled, and probably clocked lower, will make the 670 part: performance between the 580/7950 and the 7970, for $350-400. The big boy will be out later, and there we can expect $600 plus.


----------



## radrok (Feb 12, 2012)

crazyeyesreaper said:


> after market cooled cards hitting $700,



The EVGA Hydro Copper comes to mind.


----------



## m1dg3t (Feb 12, 2012)

crazyeyesreaper said:


> doesn't change the fact that the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700,
> 
> but you're free to believe what you wish,



Hell, the 580 is still more expensive than the 7970! At most places near me, anyway. Can't wait to see the new pricing.


----------



## Benetanegia (Feb 12, 2012)

jamsbong said:


> @Benetanegia I could continue this pointless argument with an NV fanboy, such as by pointing out all the mistakes you made in your last post alone, but it is time to move on.
> 
> If NV has created something fantastic (i.e. a card 50% faster than the GTX580) and it is stable enough to work in non-TWIMTBP titles, I won't mind buying one for myself. If not, then Tahiti. A simple wait-and-see situation. Cheers.



Giving up in time is good practice when you are so wrong, so well played. lol


----------



## user21 (Feb 12, 2012)

Time to kick back


----------



## ViperXTR (Feb 13, 2012)

> I like nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's MLAA is. And there are so many games out there that don't have any AA support.
> 
> Is it that difficult, nvidia, to implement FXAA in the drivers as well?



FXAA is already included in recent nvidia drivers:

1. download nvidia inspector
2. open the advanced driver settings
3. look at the advanced configs (scroll down)
4. set FXAA to 1 (default 0/off)

there are also some other hidden settings there, like a frame cap/framerate limit, SLI and/or AA flags, etc.

also, some moar rumour tablez






http://forum.beyond3d.com/showthread.php?p=1619912


----------



## xenocide (Feb 13, 2012)

Interesting chart.  I wonder why the AA never gets put above 4x...


----------



## crazyeyesreaper (Feb 13, 2012)

so, according to that chart... 3DMark 11 shows a 7% difference,


and the total average is a 12% difference across all those tests


----------



## CrAsHnBuRnXp (Feb 13, 2012)

I just want benchmarks already so i know what to buy.


----------



## Recus (Feb 13, 2012)

Borderlands 2 or new Brothers in Arms running on Kepler? : D


----------



## Crap Daddy (Feb 13, 2012)

Aliens: Colonial Marines? PhysX? GTX680?

As for that suspicious table: based on the specs, which I think we can agree are more or less accurate, this table was done by somebody who has done his homework. 30-plus percent on average above the GTX580, which brings us to that 10% over the 7970. If you look carefully you'll see the clocks - 1050 and 1425 - very high for a stock card, and above the reported 950 MHz for the GPU. It is also done at 1080p, where the memory bandwidth disadvantage is less pronounced.

So what I'm saying is that if this is close to real, then NV will launch GK104 under the name GTX680: a card slightly faster than the 7970, with certain weak points due to the fact that the chip was initially designed for the performance segment, but which after AMD's launch can fulfill other expectations. Price? Neither $300 nor $550.


----------



## sergionography (Feb 15, 2012)

I doubt these rumors are true. I heard about nvidia dropping their hot clocks, but changing the structure of the GPU this much? I don't think that's possible in such a short amount of time; as far as I knew, Kepler was a Fermi die shrink with some tweaks.
And another note: this article claims GK104 is a 340mm2 die, which is nvidia's mid-range, while the HD 7970 has a die size of 375mm2. So much for the "we expected more from amd" talk.
Not to mention nvidia's high end is said to have a 550mm2 die size; well, amd could easily build a GPU that big and pack in more transistors, but that is usually a very bad business choice, and nvidia suffers from it almost every time.


----------



## Benetanegia (Feb 15, 2012)

sergionography said:


> *I don't think it's possible in such a short amount of time*; as far as I knew, Kepler was a Fermi die shrink with some tweaks.



AMD and Nvidia do not start working on their chips only after releasing the previous one. They work for years on every chip, sometimes as long as five years, depending on how different it is. Nvidia is already working on Maxwell and whatever comes next. AMD is already working on their next two architectures too, Sea Islands and Canary Islands. Work on Kepler started many years ago, maybe even before the GTX200 was released, or shortly after.

As far as Kepler goes, yes, it's a tweaked Fermi in 99% of respects; you can see it in the specs and schematics. The main difference is that they dropped the hot clocks, which makes the SPs substantially smaller, and doubled the number of them per SM to compensate.

No one knows exactly how much smaller the SPs are, but just as an example of how much clocks can affect the size of some units: AMD Barts' memory controller is half as big as Cypress's/Cayman's, because it's designed to work at ~1000 MHz instead of >1200 MHz. Those extra 200 MHz make the memory controller in Cypress/Cayman twice as big. So in the case of Kepler, looking at the specs and the 340 mm2, we can assume that non-hot-clocked SPs are around half the size.


----------



## sergionography (Feb 16, 2012)

Benetanegia said:


> AMD/Nvidia do not start working on their chips only after releasing the previous one. They work for years on every chip. Sometimes as much as 5 years depending on how different it is. Nvidia is already working on Maxwell and whatever comes next. AMD is already working on their next 2 architectures too, Sea Islands and Canary Islands. The work on Kepler started many years ago, maybe even before GTX200 was released or shortly after.
> 
> As far as Kepler goes, yes it's a tweaked Fermi in 99% of cases, you can see it in the specs and schematics. The only difference is that they dropped the hot-clocks, which makes SPs substantially smaller and doubled the amount of them per SM to compensate.
> 
> No one knows exactly how much smaller SPs are, but just as an example of how much clocks can affect the size of some units, AMD Bart's memory controler is half as big as Cypress/Cayman because it's designed to work at ~1000 Mhz instead of >1200 Mhz. Those extra 200 Mhz make the memory controler in Cypress/Cayman twice as big. So in case of Kepler and looking at specs and 340 mm2, we can assume that non hot-clocked SPs are around half the size.



yes, but Fermi was supposed to be nvidia's architecture for the years to come; Kepler is a descendant, kind of like Piledriver will be for Bulldozer.
But I guess that makes sense in order to scale at high clocks, kind of like CPUs having longer pipelines to scale at high frequency. Still, there is no way it makes that much difference (especially since the whole point of architectures that aim for high frequency is to make smaller chips with less hardware and lower IPC but more throughput; but that's CPUs, I'm not sure about GPUs). Maybe the 1536 refers to the bigger GTX680/780, which would have a 550mm2 die size (I read that in previous leaks/rumors),
because even considering the die size, which is much smaller than the 580's, it triples the core count,
and even with 28nm that's only 40% smaller, and it's near impossible to get perfect scaling.


----------



## Benetanegia (Feb 16, 2012)

sergionography said:


> yes but fermi was supposed 2 be nvidias architecture for the years to come, kepler is a descendant kinda like piledriver will be for bulldozer.
> but well i guess that makes sense doing so in order to scale at high clocks kinda like cpus having longer pipelines to scale at high frequency but there is no way it will make that much difference(especially since the whole point of architecture that aim for high frequency is to make smaller chips with less hardware and lower ipc but with more throughput, but thats in cpus im not sure about gpus), mayb the 1536 is refering to the bigger gtx680/780 which would have a 550mm2 die size(read that in previous leaks/rumors)
> because even considering the die size which is much smaller than the 580 yet it triples the core count
> even with 28nm thats only 40% smaller and its near impossible to get perfect scaling



Don't let the number of SPs blind you; they didn't really triple the number of cores. Like I said, dropping the hot clocks probably allows them to fit 2x as many SPs as Fermi cores in the same space, *but they are only half as fast*. They are trading 2x shader clock for 2x the number of SPs.

Based on die area, GK104 has to have around 3.6-4.0 billion transistors; that's twice as many as GF104/114, the chip it's based on. Would you have doubted it so much if Nvidia had made a 768 SP Fermi(ish) part with a 256-bit memory interface? Twice the SPs at twice the number of transistors, while keeping the 256-bit MC: it's 100% expected, don't you think? So now take this hypothetical 768 SP "GF124", and it's here that they drop the hot clocks, making the SPs much smaller and allowing them to fit 2x as many of them: GK104 is born.

Also remember that doubling the SPs per SM is a lot more area/transistor efficient than doubling the number of SMs.

And to finish, never compare by die size; compare by transistor count. Scaling varies a lot from one node to another, and transistor density can change a lot as a node matures; i.e. look at Cypress vs. Cayman. GK104 has twice as many transistors as GF104, and that's all you should look at. It's pointless to even compare it to GF100/110, because GF100 is a compute-oriented chip with far more GPGPU features than GF104/114 and GK104. GF104 is only 60% as big as GF100, but it has 75% of its gaming performance.
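The hot-clock trade-off described in this post can be sanity-checked against the rumored numbers; in this sketch the 1900 MHz hot clock and the 768 SP "GF124" are hypotheticals (2x the rumored 950 MHz base), not leaked figures:

```python
# Single-precision throughput is SPs x shader clock x 2 (one fused
# multiply-add per SP per cycle), so 2x the SPs at half the shader
# clock is a wash in raw Gflops.
def sp_gflops(sp_count: int, shader_clock_mhz: float) -> float:
    """Peak single-precision Gflops for a given SP count and shader clock."""
    return sp_count * shader_clock_mhz * 2 / 1000.0

hot_clocked = sp_gflops(768, 1900.0)  # hypothetical Fermi-style "GF124"
gk104 = sp_gflops(1536, 950.0)        # rumored GK104, no hot clocks

print(hot_clocked, gk104)  # 2918.4 2918.4
```

Both come out at ~2.9 TFLOP/s, which matches the single-precision figure in the article's spec list.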


----------



## sergionography (Feb 16, 2012)

Benetanegia said:


> Don't let the number of SPs blind you, they didn't really tripple the number of cores. Like I said dropping the hot-clocks probably allows them to put 2x as many as if they were Fermi cores in the same space, *but they are only half as fast*. They are trading 2x shader clock for 2x the number of SP.
> 
> Based on die area GK104 has to have around 3.6-4.0 billion transistors, that's twice as much as GF104/114, the chip it's based on. Would you have doubted so much if Nvidia had made a 768 SP Fermi(ish) part with 256 bit memory interface? Twice the SPs at twice the number of transistors, while keeping 256 bit MC. It's 100% expected don't you think? And now they have this 768 SP "GF124" and it's here where they drop hot-clocks, thus making the SP much smaller, and allowing them to put 2x as many of them: GK104 is born.
> 
> ...



Yes, I believe you, man, it was just pretty shocking, that's all. Now we might be able to compare AMD vs Nvidia a bit more closely based on shader count.

As for Cypress and Cayman, it seems like that happened from the other extreme, didn't it? As far as I remember, it was pretty much a matter of getting rid of the SPs that weren't being utilized and changing VLIW5 to VLIW4, ending up with smaller SMs that performed the same as their predecessors at a smaller size. That allowed them to fit more SMs into the 6970, so even though the shader count was lower, it performed something like 20% better.

Though I still think there is more behind this. Having hot-clocks has its benefits, but it has its limitations too; like I heard they don't scale well as frequency increases, while AMD could raise clocks with performance increasing at a constant rate (I could be wrong, though, I don't know much about the nitty-gritty details of GPUs).


----------



## TheoneandonlyMrK (Feb 16, 2012)

Crap Daddy said:


> This is not going to be 50% faster than 7970. Judging by the specs it should fall between 7950 and 7970 at a rumored 300$.
> GK110 will probably be the Tahiti killer. At a price...



Yeah, that was sarcasm from me, so I agree with you, dude.

But in all honesty, I'm betting these will arrive cheap and land below a 7950 in performance.


----------



## Benetanegia (Feb 16, 2012)

sergionography said:


> though i still think there is still more behind this, having hot clocks has its benefits, but has its limitations too, *like i heard they dont scale well when frequency increases*, while amd would clock while increasing performance at a constant rate(i could be wrong tho idk much about the bitty details in gpu)



Yes, that's correct, and it's the reason Nvidia stopped using hot-clocks with Kepler.

The reason they used hot-clocks before was apparently to get lower latencies and better single-threaded/lightly-threaded performance, so that compute apps would benefit. Remember that the first chips with hot-clocked shaders ran at 600 MHz core clocks and below, so the shaders ran at under 1200 MHz. Now, even without hot-clocks, they will be running at around 1000 MHz, so that's probably enough. Latencies are further reduced by a shorter pipeline (possible thanks to the lower clocks) and other means that are required for GPGPU anyway.

Fermi shaders running at 2000 MHz would have been overkill for what's really needed and would consume more than two 1000 MHz shaders. A compute GPU needs multi-threaded performance first and foremost; single-threaded performance only has to be good enough that minor serial tasks don't become a bottleneck.
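The hot-clock trade-off described above shows up directly in the peak-FLOPS arithmetic: peak single precision is 2 ops (one fused multiply-add) per core per cycle times the shader clock, so doubling the cores while halving the shader clock is a wash. A quick sketch using the rumored GK104 numbers from the article and the GTX 580's published ones (the 1544 MHz hot-clock figure is supplied here for illustration):

```python
def peak_sp_gflops(cores: int, shader_clock_mhz: float) -> float:
    """Peak single-precision GFLOP/s: 2 ops per FMA, per core, per cycle."""
    return 2 * cores * shader_clock_mhz / 1000.0

# GTX 580 (Fermi): 512 cores hot-clocked at 1544 MHz.
gtx580 = peak_sp_gflops(512, 1544)   # ~1581 GFLOP/s, the "1.5 TFLOPS card"

# Rumored GK104 (Kepler): 1536 cores at a 950 MHz core clock, no hot-clock.
gk104 = peak_sp_gflops(1536, 950)    # ~2918 GFLOP/s, the ~2.9 TFLOP/s above
```

Note that 768 Fermi-style cores at a 1900 MHz hot-clock would give exactly the same peak as 1536 cores at 950 MHz; the win is in area and power, not in theoretical FLOPS.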


----------



## jamsbong (Feb 17, 2012)

Benetanegia said:


> Giving up on time is good practice when you are so wrong, so well played. lol



I'm not aware that I'm in any way wrong, nor am I giving up on anything. All I did was be rational and put things on hold. You're mistaken again.
I guess it is always going to be difficult for me to have a logical debate with someone who is not being logical.


----------



## Benetanegia (Feb 17, 2012)

jamsbong said:


> I'm not aware that I'm in anyway wrong nor am I giving up on anything. All I did was to be rational and put things on hold. You're mistaken again.......
> I guess it is always going to be difficult for me to have a logical debate with someone who is not.



Maybe you should start by explaining why, if it's only going to be almost as fast as a GTX 580, they put 96 SPs per SM (double) instead of, say, 64 SPs, or more importantly why they doubled the number of TMUs, when 64 TMUs were perfectly fine for the GTX 580 and GK104 will have 25% higher clocks (and thus 25% higher texture fillrate even if it had kept 64 TMUs instead of 128). I'm sorry, but you just don't increase die size like that unless it comes with a substantial (read: justified) performance increase.

You have produced ZERO proof (I didn't expect any, since nothing is fact yet), but you have also explained nothing (which I do expect) about why such a massive increase in computational power, one that didn't come for free and supposed a 100% increase in transistor count, is not going to produce any performance gain.

You have not explained why a 2.9 TFLOPS card will not be able to beat the 1.5 TFLOPS card, and if that were the case, why they didn't just create a 1.5 TFLOPS (768 SP) card in the first place. That would have been easy: same architecture, half the SPs, 48 per SM. If going with 96 SPs is going to make the block half as efficient as Fermi with 48 SPs, you just don't make it 96 SPs!

So start by explaining something, anything, and stop calling me a fanboy as if that were any kind of argument in your favor, because it is not; it only makes you look like a 12-year-old kid and an idiot. "It's going to be so because (you think) it's going to be so, and if you think differently you are a fanboy" is not an argument.



> Logic, the study of the principles and criteria of valid inference and demonstration.



More logic:

GK104 is 340 mm², so close to 4 billion transistors, twice as many as GF104 and 33% more than GF110. Logic dictates that Nvidia did not suddenly create an architecture that is *at least 33% less efficient* than Fermi (*70% compared to GF104*), 25% higher clocks notwithstanding, especially when they have been claiming better efficiency for almost 2 years now.
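The TMU half of the argument above works the same way as the FLOPS half: peak texture fillrate is roughly one texel per TMU per core clock, so doubling the TMUs at a higher clock more than doubles fillrate. A sketch with the rumored GK104 clock and the GTX 580's 772 MHz core clock (the 772 MHz figure is supplied here for illustration, not taken from this thread):

```python
def texel_fillrate_gtexel_s(tmus: int, core_clock_mhz: float) -> float:
    """Peak texture fillrate in GTexel/s: one texel per TMU per core cycle."""
    return tmus * core_clock_mhz / 1000.0

gtx580_fill = texel_fillrate_gtexel_s(64, 772)   # ~49.4 GTexel/s
gk104_64 = texel_fillrate_gtexel_s(64, 950)      # ~60.8 — the higher-clock-only case
gk104_128 = texel_fillrate_gtexel_s(128, 950)    # ~121.6 — what 128 TMUs would imply
```

The clock bump alone (950 vs 772 MHz) is worth about 23%, roughly the "25%" cited in the post; the doubled TMU count on top of it is the part that only makes sense if a matching performance increase is expected.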


----------



## Xaser04 (Feb 17, 2012)

jamsbong said:


> I'm not aware that I'm in anyway wrong nor am I giving up on anything. All I did was to be rational and put things on hold. You're mistaken again.......
> I guess it is always going to be difficult for me to have a logical debate with someone who is not.



I am struggling to see the point of your argument here. You keep stating that Benetanegia is a fanboy and "wrong" all the time, yet so far I have seen nothing but rational, well-thought-out posts from him. I may not agree with everything in his posts (actually, I do agree with most of it), but I am struggling to see the "fanboy" stance you keep going on about.

No doubt I will get called an Nvidia fanboy now, despite running an HD 7970 and Eyefinity....

One thing that does interest me about Kepler being a die-shrunk and "tweaked" Fermi is how much performance increase we can expect from future driver improvements. Driver improvements are a given with GCN, as the architecture is relatively immature, but what about Kepler? Could we end up with a case where Kepler comes out of the gate faster than Tahiti but ends up slower in the long run due to a lack of driver improvements?

Obviously this is still conjecture, but it is an interesting avenue to investigate, as I have seen some pretty big boosts in BF3 (at 3560x1920) with the latest HD 79xx RC driver (25/01/2012).


----------



## jamsbong (Feb 17, 2012)

@Benetanegia "but also explained nothing (which I do expect)". I've discussed this with you before, since there is no facts whatever you built on is full on nothing. No point getting into explanation mode on speculative information.

"GK104 is 340 mm2, so close to 4 billion transistor" I am not aware of this information, where did you get 4 billion transistor? Did you estimate it off the 340mm2? in other words, building a case off speculative information?

@Xaser04 no need to struggle. Just read what I've posted thoroughly and comprehend it before venting off more steam.


----------



## crazyeyesreaper (Feb 17, 2012)

At this point, who gives a flying fuck? I couldn't care less if the Nvidia Kepler GPU is Oscar the Grouch doing calculations on a TI-82. Kepler is coming, but it's not here yet, so for now it doesn't matter what its transistor count is, what its shader design is, etc., because looking at specs doesn't give us actual performance numbers in terms of what it's capable of.

Nothing matters until we see reviews. I don't care what Kepler has in the wings; it's still smoke and mirrors, and even then it's hogwash if we go on specs: on theoretical maximum calculations, AMD has won every time in terms of theoretical output, yet it doesn't actually win. So let's save the arguments for when we see real performance numbers; then we can bitch, moan, and complain about who's the greatest EVAR and who's a loser.


----------



## Xaser04 (Feb 17, 2012)

jamsbong said:


> @Xaser04 no need to struggle. Just read what I've posted thoroughly and comprehend it before venting off more steam.



How do I vent off more steam when I haven't vented any in the first place?

What is there to comprehend? Benetanegia replies to your posts with well-thought-out replies, and starting with post #64 you do basically nothing more than call him a fanboy.


----------



## sergionography (Feb 18, 2012)

Benetanegia said:


> It makes no difference really; cards are still made of lots of small power-of-two chunks. The 384-bit memory controller is really 6 x 64-bit memory controllers, each controlling one memory module, so there's your power of 2. Shaders in both AMD and Nvidia architectures are composed of 16-shader-wide arrays, SIMDs, which is what really does the hard and fundamental work, so power of 2 again. TMUs and ROPs are typically clustered in groups of 4 or 8... but really it makes no real difference. It's like that for convenience, until I hear otherwise. Rendering typically works on quads of pixels, 2x2 or 4x4, so that's why they tend to make it that way for GPUs. Other than that, there's no reason that I know of.



OK, I just came across these older posts while going through the forum, and there are a few things to note or take into consideration regarding the 256-bit memory controller:

  1. GK104 was meant to be the mid/high-range card and not the high end.

  2. Cards like GK104 usually end up in the mobile segment too, and therefore must be built with both worlds in mind (though I'm not sure whether Nvidia uses the second- or third-fastest desktop chip for mobile, so correct me if I'm wrong).

  3. Considering it was designed to be mid/high, there are two reasons why Nvidia would choose 256-bit. The first is to purposely limit performance, either to leave room for faster, more expensive cards later or to place the card exactly where they want it in the market in terms of performance and price (similar to what they did with the GTX 460 768 MB and GTX 460 1 GB). The second is that the card may simply not benefit from more bandwidth, so a wider bus would only make it more expensive for minor gains.

  4. We don't know whether Nvidia will call GK104 the GTX 660 or the GTX 680, and if they do call it GTX 680, is it because they failed to release GK110 due to yield issues, or because GK104 is sufficient to compete?

If it does end up as the GTX 680, it would probably be the first time Nvidia has had a smaller die than AMD, though after all the issues they had with Fermi and manufacturing, such a change in methodology is not all that shocking.

Overall, Nvidia seems to have learned a lot from AMD's strengths (I wish AMD would do the same from Nvidia), as AMD (the graphics division, strictly) had more experience with facing manufacturing difficulties, knowing what to expect, and working to due dates. I read an article by one of the chief engineers at ATI explaining how things work there (I will try to find it and post it; it was about the RV670, how AMD jumped on the 55 nm wagon first, and how the process of releasing products works).
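For the bandwidth side of the 256-bit question, peak memory bandwidth is just the bus width in bytes times the effective transfer rate. A minimal sketch; the HD 7970 figures (384-bit, 5.5 Gbps effective) are supplied here for comparison and are not from this thread:

```python
def bandwidth_gb_s(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x effective data rate."""
    return bus_width_bits / 8 * effective_rate_gtps

gk104 = bandwidth_gb_s(256, 5.0)    # 160.0 GB/s, matching the rumored spec
tahiti = bandwidth_gb_s(384, 5.5)   # 264.0 GB/s on the HD 7970, for comparison
```

By this measure GK104's rumored 160 GB/s sits well below Tahiti's 264 GB/s, which is why the question of whether the chip actually needs more bandwidth (point 3 above) is the interesting one.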


----------



## RigRebel (Mar 18, 2012)

NC37 said:


> The end of NV's monolithic GPU era is at hand...was about to say...Bout freaken time! ATI was slower at first when they switched but I knew eventually NV would have to change too.
> 
> Very interested to see how well NV does at ATI's own game.



You have it twisted... Nvidia came out with the Fermi architecture and CUDA cores (which drastically changed DX11 gaming and tessellation) while AMD was still playing in yesteryear with the older VLIW4 architecture... It was AMD that copied Nvidia's ground-breaking Fermi architecture and called it GCN... Nvidia came out with the multi-streaming, multi-core Fermi design first, way before AMD's GCN > http://www.nvidia.com/object/fermi_architecture.html?ClickID=azzwwsat59szk05t0aa0sll0zsrttknlzsks and over a year later > http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review Notice this review states AMD was using the older VLIW4 architecture prior to the 7970, while Nvidia was using the multi-core, multi-streaming Fermi architecture. So you have it backwards... Nvidia revolutionized the GPU with Fermi, then AMD copied it to catch up, CHANGED THEIR chip, and called it GCN.

Don't get it twisted... It's Nvidia's design that AMD copied, tweaked, called GCN, and then said "look, we win" lol... Or you could say Nvidia came out with the Big Mac and over a year later AMD came out with Le' Big Mic! lol Now suddenly all the AMD/ATI noobs who don't know the history or any better say "look, a new sandwich, we win!"... lol It's Nvidia's game... always has been and always will be. AMD is just playing at it. And to prove it, the 700 series is gonna come out and steal AMD's thunder (again) till they catch up ANOTHER year or two later. :bounce:


----------



## RigRebel (Mar 18, 2012)

crazyeyesreaper said:


> at this point who gives a flying fuck? i could care less if the Nvidia Kepler GPU is Oscar the Grouch doing calculations a on a Ti-82. Kepler is coming but its not here yet, so in retrospect it dosent matter,  what its transistor count is, what its shader design is etc, because looking at specs dosent give us actual performance numbers in terms of what its capable of,
> 
> Nothing matters till we see reviews, i dont care what kepler has in the wings its still smoke and mirrors, even then its hog wash if we go on specs and theoretical maximum calculations AMD has won every time in terms of theoretical output, yet it dosent actually win, so lets just save the arguments for when we see real performance numbers, then we can bitch moan and complain about whos the greatest EVAR! and whos a loser.



LOL... AMD copied Fermi and called it GCN... read my previous post and get a clue. Nvidia created the multi-core, multi-streaming Fermi architecture WAY before AMD copied it, called it GCN, and said "we win" lol. Better get your facts straight. If AMD is "currently" winning on computation, it's because they ditched the loser single-stream VLIW architecture of the 6000 series and tweaked Nvidia's 500-series Fermi design into GCN... You want benches? Look at the 500 series, which tore AMD a new one on DX11 and tessellation so badly that AMD had to change to a similar architecture that "magically, lo and behold" incorporates a design for better tessellation and DX11 FPS. LOL, DUH, they had to catch up! Too bad they still couldn't keep DX9 at a decent level like the 500 series did. Why couldn't they? Because they didn't invent the technology, they just stole it, and they don't know wtf they are doing! lol And all these noobs (acting like the 700 series is just a myth trying to keep up with the AMD 7000 series) have it so twisted it's not even funny. AMD came out with the 7000 series just to catch up to Nvidia's Fermi. The soon-to-be-released Nvidia 700 series is a redesign of the existing 500-series Fermi architecture and has probably been in design since way before GCN, since the 600 series in mobile is already out. You AMD noobs should get your GPU history right before you post...

The ONLY time ATI/AMD had any real self-made ingenuity advantage over NVIDIA was way back when Doom 3 came out (2004, over 8 years ago), because ATI had onboard decoding and encoding and Nvidia's driver-based coding had a serious problem with Doom 3 at launch. That small 6-month-to-a-year hiccup was AMD's ONLY shining moment over Nvidia, and Nvidia has been knocking ATI/AMD in the dirt ever since. AMD is still trying to catch up, and you're just catching one hop of the leapfrog that happens to be AMD's turn... But again, the only jump forward AMD is making is that they got smart, ditched the single-stream VLIW 6000-series architecture, and copied a similar multi-stream Fermi-style architecture. It's just clever marketing that they describe their newest chip design in reviews like it's revolutionary... well, it WAS revolutionary... over a year ago, when Nvidia CREATED IT and called it FERMI!

http://www.nvidia.com/object/fermi_architecture.html?ClickID=azzwwsat59szk05t0aa0sll0zsrttknlzsks (that page is actually over 1-2 years old despite the 2012 copyright, because it cites the 512 CUDA cores that have existed since the GTX 580)
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review (and there is AMD's copycat GCN version)
Don't the GCN core pictures (page 3) for the Radeon 7970 look familiar?? lol They should; similar architecture diagrams were in the previous link's Fermi architecture press release over a year ago! lol The GCN architecture = a.k.a. Le' Big Mic lol, over a year later lol

... so stick those theoretical copycat numbers up your GPU. lol I doubt you'd believe anything but the crud AMD Radeon is selling you (because they're second best) anyway. People are so ignorantly sold on the "drama" that Nvidia is the big selfish giant and AMD the independent hero that they don't actually research or know the facts... If it weren't for the Fermi architecture, with better DX11 FPS, better tessellation, and multi-stream reading, we wouldn't have the FPS or the DX11 games that we do. Hardware always spurs on new games and software that take advantage of the hardware technology, and Nvidia's Fermi architecture spurred DX11 and tessellation GREATLY in games like Skyrim, BF3, Batman AC, Witcher 2, and more... you should be thanking them, honestly. I know AMD secretly is, because without copying Fermi, AMD would be AM-DONE. lol rolfloflmaoftwpwn!


----------



## sergionography (Mar 18, 2012)

RigRebel said:


> LOL ... AMD copied the Fermi and called it GCN... read my previous post and get a clue. Nvidia created the multicore multistreaming Fermi architecture WAYYYY before AMD copied it and called it GCN and said "we win" lol. Better get your fact straights. If AMD is "currently" wining on computations it's cause they ditched their looser single stream VLIW4 6000 series architecture and tweeked on Nvidia's 500 series Fermi design and called it GCN... You want benches?... Look at the 500 series that tore AMD a new one on Dx11 and tessallations so bad that AMD had to change/copy to a similar architecture that "magically, low and behold" incorporates a design for better tessallations and Dx11 FPS. LOL DUH they had to to catch up! Too bad they still couldn't keep Dx9 at a decent lvl like the 500series did. Why couldn't they ? Because they didn't invent the technology they just stole it and they don't know wtf they are doing! lol  And all these noobs (that are acting like the 700 series is just a myth trying to keep up with AMD 7000 series) have it so twisted it's not even funny.  AMD came out with the 7000 series just to catch up to Nvidia's Fermi. The soon to be released Nvidia 700 series is a redesign of the currently exsisting 500 series Fermi architecture and has probably been in design way before the GCN since the 600 series in Mobile is already out. You AMD noobs should get your GPU history right before you post...
> 
> The ONLY time ATI/AMD had any real self-ingenutiy advantage over NVIDIA was way back when Doom3 (@2004 over 8 years ago) came out because AMD had onboard decoding and encoding and Nvidia's driver based codeing had a serious problem whith DOOM3 when it first came out. That small 6 month to a year hic-up was AMD's ONLY shinning moment over Nvidia and Nvidia has been knocking ATI/AMD in the dirt ever since. AMD is still trying to catch up and you're just catching one hop of the leap frog that happens to be AMD's turn... But again, the only jump forward AMD is making is that they got smart, ditched the single stream VLIW4 6000 series architecture  and copied to a similar multistream/reading Fermi style architecture. It's just clever marketing that they are describing their newest chip design in reviews like it's revolutionary... well it WAS revolutionary... over a year ago when Nvidia CREATED IT and called it FERMI!
> 
> ...



Umm, you need to chill lol. We know very well how the GTX 480 and GTX 470 went lol, and even the 500 series, while fast, was never that efficient. Yes, the GTX 580 outperformed a 6970 by about 20%, but it had a die size of 550 mm².
AMD could easily have built a die that big, fit more transistors into it, and still matched the GTX 580 at the same power consumption or even lower; if you had truly done your research, you would know that. But I agree Fermi did introduce revolutionary tech in the same way Bulldozer did: tech that will only mature in time but wasn't all that refined at release.


----------



## RigRebel (Mar 18, 2012)

sergionography said:


> umm you need to chill lol, we know very well how gtx 480 and gtx 470 went lol, even the 500 series while fast they were never the efficient, yes gtx 580 outperformed a 6970 by about 20% but it had a die size of 550mm2
> AMD could e easily built a die that big nbfit more transisters and would still match gtx580 and at the same power consumption or even lower, if you truly did ur research you would know that, but I agree Fermi  did introduce revolutionary tech in the same way bulldozer did, tech that will only mature in time but wasn't all refine at release



I know full well about the GTX 480 delays, bugs, and revisions, and about both the GTX 480 and GTX 470 barely performing on par with ATI... but nice of you to presume I don't just because you do lol :shadedshu .. If I were you, I'd calm down and read everything carefully before commenting lol... Did I not state history and info from way back in 2004? Wouldn't that precede the GTX 470? lol. I didn't comment on that series because it wasn't the point... Just because I didn't comment on it does nothing to suggest I don't know about it, especially when it wasn't really pertinent to the subject, because rest assured Nvidia is aware of its past mistakes and they will probably have no bearing on the 700 series IMO... But thanks for bringing it up

I'm also very sure that, since Nvidia pioneered the Fermi architecture and it's still fairly new and fresh, it has a long way to go and more possibilities to hit, because what's the sense of creating a whole new architecture if it doesn't have headroom to accommodate the next several years? And is that not an architecture that Nvidia created and AMD is merely copy-catting one step at a time? I'm sure Nvidia has years of plans for Fermi and limitless possibilities; THEY CREATED IT. I'm sure they know full well all its avenues and applications, and that it's leaps and bounds beyond AMD in a similar architecture. The 700 series will in fact crush the current wannabe attempt from AMD, because currently (in the $200-$300 market, where the real market war is won and lost) AMD's barely released 7850 had reference-clock benches BARELY beating Nvidia's 560 Ti (which has been a phenomenal price-point card!) in DX11 and failed miserably in DX9, which Nvidia still does well. Not to mention that the 7850 has 1 GiB more memory than the 560 Ti, and the reference board only performed marginally better than the reference 560 Ti. Now, what's the big deal with DX9? Not much really, except that the most demanding game for video rendering and drawing right now is actually SC II in 4v4 mode, which is DX9, and the 560 Ti still killed the 7850 by 20 FPS. So AMD is releasing a barely-better version that does worse on the old stuff? Noooicccce. Again, no fear whatsoever that Nvidia is going to knock AMD back in the dirt!

Do you work in any technology manufacturing field? I do; I work with years upon years of laser technology. I know what stages are involved in many avenues of product development, from design to deployment to the money! I know that something you see tomorrow from Nvidia was probably on a drawing board 2-3 years before you saw it, that they probably had a mock-up model a year ago and a functioning test model at least 6 months to a year ago... lol And I certainly know how to do my homework, but thanks for presuming I don't just because you do and I didn't mention something off topic. Learn not to presume so much about what's not said lol


PS... I've even done field trips lol... I've been (on more than one occasion) to one of ATI's engineering facilities and talked first hand with PCB, CPU, and RAM engineers about Radeon cards, AND I have worked for a major gaming company's corporate center and worked with unrestricted models of the Nvidia 8000 series... have you? lol

PSS... *Bulldozer is a joke* and was an extreme disappointment at the time to any true gamer with half the sense to read benchmarks, because single-threaded performance was horrible and about 98% of the games out there are single-threaded... Windows had to come out with a patch just to set it right. Plus, all they did was create hyper-threading pipes so large you could fit a truck through them. lol A lot of good that did for gaming (which is pertinent to this discussion because you mentioned a gaming graphics card and Bulldozer in the same paragraph, presumably as having linked innovation), because the only games I know of that use that kind of threading are Civ V and Oblivion... PLUS, that's like putting 22s on a Bonneville and calling it badass and innovative! rofl And Bulldozer still didn't top i7s; and the first batch of FX-6100s that came to my local CompUSA were all BAD and wouldn't work with 990FX chipsets lol. I stood there watching as my good tech friend behind the counter put every FX-6100 chip they had left on a chip tester, and one after another they were bad! This was only after two different customers came in to swap FX-6100s they had just bought, two or three times over! If you're going to use an example like that, then YOU should definitely DO YOUR RESEARCH and maybe get a little real-world experience first! LoL And pick a better example lol. And before you start with the 2nd-gen i-series B2/B3 problem, let me stop you there: that was entirely a motherboard bridge problem affecting SATA III, motherboards, and board manufacturers, not the fault of the CPU itself.

So to recap, before you start a rebuttal and start spitting out stuff, you clearly need to research or look at yourself better:
1. Make sure your points are on topic or relevant, because the last-series 480s and 470s clearly are not.
2. Make sure you pick better examples than Bulldozer LOL fail.
And finally 3. Make sure you read more carefully, do your own research, and know who you're talking to and what their experience is before you open your crap trap and tell anyone to do their homework, because mine is way done and you've been taken to school lol


----------



## sergionography (Mar 19, 2012)

RigRebel said:


> I know full well about the GTX 480 delays and bugs and revisions and both the Gtx 480 Gtx 470 barely performing on exact par with ATI... but nice of you to presume I don't just because you do lol :shadedshu .. If I were you I'd calm down and read everything carefully before I comment. lol... did I not state history and info from way back in 2004? Wouldn't that preceed the GTx 470 ??? lol. I didn't comment on that series because that wasn't the point... Just because I didn't comment about that series does nothing to state I don't know about it lol especially when it wasn't really pertinent to the subject because rest assured I'm sure Nvidia is aware of it's past mistakes and they will probably have no baring on the 700 series IMO...  But thanks for bringing it up
> 
> I'm also very sure that based on the fact that nvidia pioneered the fermi architecture and that it's still fairly new and fresh that it has a long way to go and more possibilities to hit because what's the sense of creating a whole new architechture if it doesn't have headroom to accomidate the next several years ? And is that not an architecture that Nvidia created and AMD is merely copy catting one step at a time?... I'm sure that Nvidia has years of plans for Fermi and limitless possibilites THEY CREATED IT.. I'm sure they know full well all it's avenus and applications; that it's leaps and bounds beyond AMD in a similar architecture. That the 700 series will infact crush the current wannabee attempt from AMD because currently (in the $200-$300.00 market where the real market war is won and lost) AMDs barely released 7850 had referrence clock benches BARELY beating the Nvidia's 560 Ti (which has been a phenomenol price point card!) in Dx11 and failed misserably in Dx9 which Nvidia still does well. Not to mention, that the 7850 has 1gib more than the 560 Ti and the reference board only performed marginly better than the referrence 560 Ti.  Now what's the big deal with Dx9? Not much really except the actually the most demading game for video rendering and drawing right now is actually SC II in 4v4 mode which is Dx9 and the 560 ti still killed the 7850 by 20FPS.  So AMD is releaseing a barely better than version that does worse on the old stuff ? noooicccce. Again, no fear whatsoever that Nvidia is going to knock AMD back in the dirt!
> 
> ...



If Fermi is the topic, then the GTX 480 is very much relevant, since it IS Fermi LOL.
VLIW has the edge in graphical tasks; Fermi has more compute capability. This round AMD improved compute while keeping its edge in graphical tasks; that's what AMD does. As for copying, um, Nvidia was all about big cores with hot-clocks; now I could just as easily say Nvidia copied AMD by dropping hot-clocks and fitting more cores, but I'm not gonna get down to that level. I don't know what on earth makes GCN anything like Fermi lmao; it's a freaking 2000-core chip, and Nvidia never built an architecture like that until now, sort of. As for DX9, one game doesn't tell it all; many other factors might be the cause, but it doesn't matter really, so I don't think I need to press this point.
As for Bulldozer, well, AMD had yield issues, which were apparent with Llano too; Nvidia had the same issue with Fermi. It almost always happens when moving to a new node. Bulldozer is a revolutionary core in theory; AMD just failed to deliver this time around, the same way Fermi failed with the GTX 400 series. That's why I mentioned it. Now, if your opinion says otherwise, then good for you, because I simply happen to disagree, as I've done enough research on the matter.


----------



## Steevo (Mar 19, 2012)

VLIW beat Nvidia's designs for years.

Nvidia came out with an idea and design that was late, hot, and badly implemented; they refined it and won.

AMD was building toward the same thing for many years; a GPU doesn't happen in a few months. They saw the writing on the wall that computing was becoming more generalized, and with the purchase of ATI they managed to get a very, very good foothold in this area.

Nvidia is countering its early inefficiency with experience brought over from its mobile department: a smaller, more efficient chip that uses tech brought from the more power-efficient Tegra designs.


Bulldozer belongs nowhere in this discussion. Back on topic.


----------



## Aquinus (Mar 19, 2012)

I'm trying to understand people when they say that the new GTX 680 isn't supposed to be the fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be nVidia's dual-GPU solution, right? That doesn't leave a whole lot of room for a successor in this generation. Also, who cares about nVidia vs AMD? When push comes to shove, the 7970 has been out for how many months, and how many users have bought 7000-series cards? That is the real point: Kepler was coming so late that something had to be released. It sounds like a Bulldozer that didn't completely flop (rather, it just lost ground compared to the headroom previous top-end models had).


----------



## RigRebel (Mar 19, 2012)

sergionography said:


> If Fermi is the topic, then the GTX 480 is very much relevant, since it IS Fermi, LOL.
> VLIW has the edge in graphical tasks, Fermi has more compute capability. This round AMD improved compute while keeping the edge in graphical tasks; that's what AMD does. As for copying: NVIDIA was all about big cores with hot clocks, and now I could just as easily say NVIDIA copied AMD by dropping hot clocks and fitting in more cores, but I'm not gonna get down to that level. Idk what on earth makes GCN anything like Fermi, lmao; it's a freaking 2000-core chip, and NVIDIA never built an architecture like that until now, sorta. As for DX9, one game doesn't tell the whole story; many other factors might be the cause, but it doesn't matter really, so I don't think I need to press that point.
> As for Bulldozer, AMD had yield issues, which were apparent with Llano too. NVIDIA had that issue with Fermi; it almost always happens when moving to a new node. Bulldozer is a revolutionary core in theory; AMD just failed to deliver this time around, the same way Fermi failed with the GTX 400 series. That's why I mentioned it. Now if your opinion says otherwise, then good for you, because I simply happen to disagree, as I've done enough research on the matter.



The GTX 480 does not have any relevance, because: 1. You're just desperately trying to use it as a past example of how Nvidia can fail, when NVIDIA has already GONE PAST the GTX 480 and created the 580, which was every bit a success! Your original post said we should hold our breath because of the GTX 480 and GTX 470, but that was two generations ago, and you're just looking for something to hold on to and make yourself look smart when you can't even stay on target! You're dredging up a past that's already been the PAST and trying to equate it to Nvidia possibly failing this time, when THEY HAVE ALREADY SUCCEEDED IN SURPASSING THE GTX 480. Thus GTX 480 = NOT RELEVANT... get it now, genius? Stay on target!


----------



## erocker (Mar 19, 2012)

This thread has nothing to do with GTX 480, Fermi, etc. Old news, get over it. Stay on topic.


----------



## erocker (Mar 19, 2012)

Thread cleaned of off topic posts, after my warning. I won't ask again.


----------



## phanbuey (Mar 19, 2012)

Steevo said:


> VLIW raped Nvidia's designs for years.
> 
> Nvidia came out with an idea and design that was late, hot, and badly implemented; they refined it and won.
> 
> ...



Yes, you're right... the 2900 XT, then the 3870 that got its ass kicked, then the 4870, which was good for the price but always second best... oh yeah, the 6xxx series, which is also second best.

The only time Nvidia's design "lost" was Fermi vs. the 5xxx series, and even then they still had the performance crown, just too late to market. That's one design out of five.

Where are you getting your info that this is Tegra tech?


----------



## RigRebel (Mar 19, 2012)

Is the launch date for Kepler still 3/22 ? ... 



Steevo said:


> VLIW raped Nvidia's designs for years.
> 
> Nvidia came out with an idea and design that was late, hot, and badly implemented; they refined it and won.
> 
> ...



Sergi started it by citing B'dozer; I just rebutted it, lol.



phanbuey said:


> Yes, you're right... the 2900 XT, then the 3870 that got its ass kicked, then the 4870, which was good for the price but always second best... oh yeah, the 6xxx series, which is also second best.
> 
> The only time Nvidia's design "lost" was Fermi vs. the 5xxx series, and even then they still had the performance crown, just too late to market. That's one design out of five.
> 
> Where are you getting your info that this is Tegra tech?




Ditto... any sources for Kepler being built on Tegra tech, or is that just speculation?



Aquinus said:


> I'm trying to understand people when they say that the new GTX 680 isn't supposed to be the fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be nVidia's dual-GPU solution, right? That doesn't leave a whole lot of room to put a successor in this generation. Also, who cares about nVidia vs. AMD when, push comes to shove, the 7970 has been out for how many months now, and how many users have bought 7000-series cards? That is the real point: Kepler was coming so late that something had to be released. It sounds like a Bulldozer that didn't completely flop (it just lost ground compared to the headroom previous top-end models had).



Where are you getting your info? It's been barely 2 months and 10 days... I'd hardly call that "so late". And as for how many people have bought one, idk, I don't have the fiscal statements for 7970 purchases over the last two months... do you? lol...

Actually, 2.33 months is hardly enough time to rest on. Kepler is presumably 3 days away. If Kepler owns, then 2 months and 10 days is barely long enough on top to call a victory for the 7970. It's more like *treading water till the sharks arrive!* lol. Especially since it takes way longer than 2 months and 13 days to produce a series, which means Nvidia has been working on its next series for a while and is only "fashionably" late... A.K.A. while AMD is stuffing hors d'oeuvres and drinking champagne in hollow victory, Nvidia's gonna show up via the VIP private entrance and steal the prom queen... lol
PS: perhaps you were thinking of the 6970?


----------



## erocker (Mar 19, 2012)

@RigRebel.. Stop double/triple posting. Use the edit button to add to your posts or use the multi-quote button to quote multiple posts.


----------



## xenocide (Mar 19, 2012)

Aquinus said:


> I'm trying to understand people when they say that the new GTX 680 isn't supposed to be the fastest Kepler model. If that is so, why is its product placement so high? The 690 is still supposed to be nVidia's dual-GPU solution, right?



It's not because of the model number, it's because of the chip number (GK104).  When Fermi was in development, it was GF104/GF114 (GTX460/560) for the Mid-Range model, and GF100/GF110 (GTX480/580) for the highest end model.  Nvidia has used a similar naming scheme for their chips for going on a decade now, maybe longer.

This card is listed as GK104, which means it was originally designed to replace cards like the GTX 460/560/560 Ti. There was information suggesting a GK100 and even a GK110 were in development, but that disappeared. Couple this with Nvidia PR people and CEOs saying they expected more from AMD, and it would appear that Nvidia just took their intended mid-range offering, tweaked it, and relabeled it the GTX 680.

Just look at the specs for the GTX 680 that we know: 2 GB VRAM, a 256-bit memory bus, and the GK104 chip. It has all the markings of what should have been a GTX 660. The actual model number is secondary to the chip number.
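The chip-numbering pattern described above can be sketched as a tiny lookup. This is purely illustrative: the chip-to-card mappings are taken from the posts in this thread, not from any official NVIDIA document, and the tier-from-suffix rule is just the heuristic posters here are applying.

```python
# Fermi-era examples of NVIDIA's chip-numbering convention, as described
# in the thread: x00/x10 codes were the big high-end dies, x04/x14 the
# mid-range dies.
FERMI_CHIPS = {
    "GF100": "GTX 480 (high-end)",
    "GF110": "GTX 580 (high-end refresh)",
    "GF104": "GTX 460 (mid-range)",
    "GF114": "GTX 560 (mid-range refresh)",
}

def expected_tier(chip: str) -> str:
    """Guess the market tier from the last two digits of a chip code."""
    return "high-end" if chip[-2:] in ("00", "10") else "mid-range"

# By this heuristic, GK104 is mid-range silicon even though it ships
# branded as the GTX 680, while GK110 would be the big die.
print(expected_tier("GK104"))
print(expected_tier("GK110"))
```

By that reading, the surprise isn't the GTX 680's model number but that a "04" chip is carrying the flagship badge.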


----------



## RigRebel (Mar 20, 2012)

erocker said:


> @RigRebel.. Stop double/triple posting. Use the edit button to add to your posts or use the multi-quote button to quote multiple posts.


Sorry  done..  Still getting the hang of this site.


----------



## Aquinus (Mar 20, 2012)

xenocide said:


> It's not because of the model number, it's because of the chip number (GK104).  When Fermi was in development, it was GF104/GF114 (GTX460/560) for the Mid-Range model, and GF100/GF110 (GTX480/580) for the highest end model.  Nvidia has used a similar naming scheme for their chips for going on a decade now, maybe longer.
> 
> This card is listed as GK104, which means it was originally designed to replace cards like the GTX 460/560/560 Ti. There was information suggesting a GK100 and even a GK110 were in development, but that disappeared. Couple this with Nvidia PR people and CEOs saying they expected more from AMD, and it would appear that Nvidia just took their intended mid-range offering, tweaked it, and relabeled it the GTX 680.
> 
> Just look at the specs for the GTX 680 that we know: 2 GB VRAM, a 256-bit memory bus, and the GK104 chip. It has all the markings of what should have been a GTX 660. The actual model number is secondary to the chip number.



But who cares what the hardware model itself is? They placed it to compete directly with the 7970 just by naming it the GTX 680, so it's the top of the line for Kepler unless nVidia is changing their naming scheme. If there is going to be this supposed second chip, then why wasn't this one placed as the 670? You see my point? The card may be quick, but nVidia is making it look like they won't have anything bigger ready for this GPU line-up.



RigRebel said:


> Where are you getting your info? It's been barely 2 months and 10 days... I'd hardly call that "so late". And as for how many people have bought one, idk, I don't have the fiscal statements for 7970 purchases over the last two months... do you? lol...



What proof do I need? Look around at all the people with 7970s, and consider the simple fact that the 7970 has been out for two months and Kepler is nowhere to be seen. You can't say that doesn't benefit AMD.


----------



## RigRebel (Mar 20, 2012)

Aquinus said:


> But who cares what the hardware model itself is? They placed it to compete directly with the 7970 just by naming it the GTX 680, so it's the top of the line for Kepler unless nVidia is changing their naming scheme. If there is going to be this supposed second chip, then why wasn't this one placed as the 670? You see my point? The card may be quick, but nVidia is making it look like they won't have anything bigger ready for this GPU line-up.
> 
> 
> 
> What proof do I need? Look around at all the people with 7970s, and consider the simple fact that the 7970 has been out for two months and Kepler is nowhere to be seen. You can't say that doesn't benefit AMD.



Look all around? lol, so the little old lady picking her nose at the toll booth has a 7970? If I look all around right now I just see my dog, and I'm pretty sure he's not rocking a 7970... lol. I think you're confusing your opinion and conjecture (however likely it may be) with data... poop some data please. And again I must educate an AMD fanboy... I'll provide a link later today which states it's more like the 660 Ti will perform as the 580 did. More on that later; the link is on another PC's browser. Peace.


----------



## Aquinus (Mar 20, 2012)

RigRebel said:


> Look all around? lol, so the little old lady picking her nose at the toll booth has a 7970? If I look all around right now I just see my dog, and I'm pretty sure he's not rocking a 7970...



Why was saying this necessary? Neither of them will buy an nVidia card either...



RigRebel said:


> lol. I think you're confusing your opinion and conjecture (however likely it may be) with data... poop some data please. And again I must educate an AMD fanboy... I'll provide a link later today which states it's more like the 660 Ti will perform as the 580 did. More on that later; the link is on another PC's browser. Peace.



That is kind of insulting... I'm an AMD fanboy because I own a Sandy Bridge-E? Because I've used nVidia cards in the past? Also, what is with using words like "poop"? You couldn't think of a more mature and intellectual word to use? Kepler can't make money if it hasn't been released; the 7970 can make money because it is actually on the market and people are buying it, and it has had two months there without a real competitor. I don't need numbers to show that you can't buy a product that hasn't been released yet; that is common sense, and if you need numbers to be convinced of that, you should stop talking right now.

I would appreciate some maturity and some logical reasoning if you're going to start calling me an "AMD fanboy", a label that's just being used to slur and discredit what I have to say; it's insulting and uncalled for.

Also, I'm a System Admin and I have a degree in computer science; what are you doing? Don't pander to me and try to tell me what I know and who I am.

Edit: As you're a new user, I highly recommend that you read the rules.


----------



## Tatty_One (Mar 20, 2012)

The name-calling and accusation-throwing is starting to become tiresome. Carry out your discussions/disagreements in a civil manner or I will start dishing out the points; a little bit of maturity goes a long way!


----------



## TheMailMan78 (Mar 20, 2012)

Aquinus said:


> Also I'm a System Admin and I have a degree in computer science.


 I'm an administrator also.....an admin of WINNING! And I have a PHD also.......Pimpin Hoes Degree.

Listen man, I learned a long time ago not to argue with newer members, as it spins out of control fast; they're not familiar with the culture of TPU yet. They have "teething" issues, for lack of a better term. Relax man. Don't take it personally. I see where you are coming from with the market placement and I agree. However, it's all up in the air until we see some real benches from W1zz.

Edit: Just saw you're a new member also, lol.


----------



## jaredpace (Mar 20, 2012)

Hahah, TPU is the coolest by far.


----------



## Aquinus (Mar 20, 2012)

TheMailMan78 said:


> However, it's all up in the air until we see some real benches from W1zz.



People don't seem to realize this... and I didn't take it personally; it was to make a point that you shouldn't insult people. It was a "you can't get away with that kind of behavior" post. Also, I do get defensive when people call me out; it's a natural tendency.


----------



## TheMailMan78 (Mar 20, 2012)

Aquinus said:


> People don't seem to realize this... and I didn't take it personally; it was to make a point that you shouldn't insult people. It was a "you can't get away with that kind of behavior" post. Also, I do get defensive when people call me out; it's a natural tendency.



People call me out all the time. You have to consider the source. Personally, I believe it's their right to be wrong... and my right to laugh and poke fun at them without them realizing it. Remember, this is just the internet, and the TPU mods do their job well. Anyway... FORWARD TO THE TOPIC!


----------



## Black Panther (Mar 20, 2012)

Back on topic please or infractions will have to flow...


----------



## m1dg3t (Mar 20, 2012)

Awwww BP  Both my posts were on topic


----------



## RigRebel (Mar 20, 2012)

m1dg3t said:


> Awwww BP  Both my post's were on topic



lol, maybe in la-la land...
PS: answer my post from like 2 pages ago (only yesterday) please... is Kepler still on for 3/22, since you hold all the answers, oh enlightened one? lol




------------------------------------------------
Friends don't let friends post old Motorhead pictures


----------



## Damn_Smooth (Mar 20, 2012)

RigRebel said:


> lol, maybe in la-la land...
> PS: answer my post from like 2 pages ago (only yesterday) please... is Kepler still on for 3/22, since you hold all the answers, oh enlightened one? lol
> 
> 
> ...



Kepler is the cake.

Doesn't matter much anyway, console ports aren't going to get much more demanding any time soon.


----------



## RigRebel (Mar 20, 2012)

Damn_Smooth said:


> Kepler is the cake.
> 
> Doesn't matter much anyway, console ports aren't going to get much more demanding any time soon.



Who's talking consoles? I want Kepler for the PC. AMD has the PS4 contract for consoles, from what another poster was saying. Kepler for the Xbox?


----------



## Damn_Smooth (Mar 20, 2012)

RigRebel said:


> Who's talking consoles? I want Kepler for the PC. AMD has the PS4 contract for consoles, from what another poster was saying. Kepler for the Xbox?



I believe that AMD has the contract for Xbox too. 

I want Kepler to start a price war, but the more I think about it the less I care. Outside of benchmarking, folding and E-peen, there really isn't that much need for that much power.

As long as the current gen consoles are holding PC back, Kepler could be 10 million times more powerful than GCN and nobody would really notice.


----------



## RigRebel (Mar 20, 2012)

Damn_Smooth said:


> I believe that AMD has the contract for Xbox too.
> 
> I want Kepler to start a price war, but the more I think about it the less I care. Outside of benchmarking, folding and E-peen, there really isn't that much need for that much power.
> 
> As long as the current gen consoles are holding PC back, Kepler could be 10 million times more powerful than GCN and nobody would really notice.



Ditto on the price war... come on, $125 560 Tis! lol
This is the link that, through other posts, led me to this forum > http://lenzfire.com/2012/02/entire-nvidia-kepler-series-specifications-price-release-date-43823/

Now, who knows, those numbers could be all speculation; what's important to me is the 500-to-600-series translation. Is the Kepler GK104, despite sitting in the 560's chip slot, going to be upgraded enough to make the 660 Ti perform on par with previous GTX 580s, the GTX 660 = the 570, the 650 Ti = the 560 Ti, etc.? Prices for the 660 Ti seem higher, so is the proposed Kepler GK104 660 Ti jumping up to fill the shoes of the old GTX 580? That's what I'm looking for... seems logical. And notice the GTX 650 Ti? Apparently this sees the 650 Ti (aka the 550 Ti's slot) stepping up and taking the 560 Ti's place, with possible 560 Ti performance at the 560 Ti price point. Some people are speculating this is not true data, but it seems logical. Confirm/deny please. These figures seem to indicate a total upward shift of all products in Nvidia's proposed 600/700 series (still not sure if it's gonna be 600 or 700; sources vary). The upward shift of all products would seriously rock if the technology and benches are legit. I'd love to buy a brand new 660 Ti with a better TDP, lower heat, performance on par with a GTX 580 solo (or 560 Tis in SLI), and the new AA technique > http://www.fudzilla.com/home/item/26378-nvidia-kepler-gpus-may-introduce-new-anti-aliasing-technique that's been speculated about. THAT's what I'm TALK'N BOUT!

Also, good point on consoles holding back the PC, but MMOs are reigning supreme on PC atm, and some good ones are due to release soon, hopefully (Space Marines 40k Online FTW). PC gaming will always be a higher-end niche product at this point, because $300 per video card vs. $300 for a whole console with Blu-ray = a no-brainer for lower-end markets, and right now manufacturers are pouring money into console gaming like honey over sticky buns. If the money were there, PC would be on top, but gaming went FROM PCs to consoles solely because consoles are the more affordable mass-market avenue.


----------



## Damn_Smooth (Mar 20, 2012)

RigRebel said:


> Ditto on the price war... come on, $125 560 Tis! lol
> This is the link that, through other posts, led me to this forum > http://lenzfire.com/2012/02/entire-nvidia-kepler-series-specifications-price-release-date-43823/
> 
> Now, who knows, those numbers could be all speculation; what's important to me is the 500-to-600-series translation. Is the Kepler GK104, despite sitting in the 560's chip slot, going to be upgraded enough to make the 660 Ti perform on par with previous GTX 580s, the GTX 660 = the 570, the 650 Ti = the 560 Ti, etc.? Prices for the 660 Ti seem higher, so is the proposed Kepler GK104 660 Ti jumping up to fill the shoes of the old GTX 580? That's what I'm looking for... seems logical. And notice the GTX 650 Ti? Apparently this sees the 650 Ti (aka the 550 Ti's slot) stepping up and taking the 560 Ti's place, with possible 560 Ti performance at the 560 Ti price point. Some people are speculating this is not true data, but it seems logical. Confirm/deny please. These figures seem to indicate a total upward shift of all products in Nvidia's proposed 600/700 series (still not sure if it's gonna be 600 or 700; sources vary). The upward shift of all products would seriously rock if the technology and benches are legit. I'd love to buy a brand new 660 Ti with a better TDP, lower heat, performance on par with a GTX 580 solo (or 560 Tis in SLI), and the new AA technique > http://www.fudzilla.com/home/item/26378-nvidia-kepler-gpus-may-introduce-new-anti-aliasing-technique that's been speculated about. THAT's what I'm TALK'N BOUT!
> ...



Thanks, I hadn't read that lenzfire article.

I can't wait for the Kepler reviews. I'm really more interested in the OC capabilities. 

If it can clock as well as the 7000 series, it looks like Nvidia has a real winner on its hands. If not, it should start a price war just by beating the 7970 at stock.

Either way, it looks like the next couple months are going to be interesting.

Welcome to TPU, by the way.


----------



## RigRebel (Mar 20, 2012)

Damn_Smooth said:


> Thanks, I hadn't read that lenzfire article.
> 
> I can't wait for the Kepler reviews. I'm really more interested in the OC capabilities.
> 
> ...



I haven't read an overclocking review of the 7000 series yet, other than the one for the Sapphire card http://www.hardwareheaven.com/revie...dition-graphics-card-review-introduction.html and I was impressed, but also bewildered to see such a jump from the reference review on Tom's. http://www.tomshardware.com/reviews/radeon-hd-7870-review-benchmark,3148.html

More on the clocking aspects? If it OCs that well, with that jump in performance, that would be great, especially if Nvidia's 600s follow suit. PS... thanks for the welcome; loving TPU atm, I'm learning a lot and getting good info from a lot of the posters. And bonus = I haven't been kicked "yet" lol

PS: please excuse me for being dense or missing the lingo, but yes or no, is Kepler still coming out on 3/22/2012?
PPS: correction to "the 660 Ti performs on par with previous GTX 580s, the GTX 660 = the 570, the 650 Ti = the 560 Ti, etc.":
per that link it's more like the 660 Ti performing 10% better than the 7950; the rest is in the link... seems a stretch, but if they pull it off, holy moley. http://lenzfire.com/2012/02/entire-n...se-date-43823/  I know the link's from Feb, but it still may be viable, because even the spec post in this forum is speculative... nothing's final till release.


----------



## Damn_Smooth (Mar 20, 2012)

RigRebel said:


> I haven't read an overclocking review of the 7000 series yet, other than the one for the Sapphire card http://www.hardwareheaven.com/revie...dition-graphics-card-review-introduction.html and I was impressed, but also bewildered to see such a jump from the reference review on Tom's. http://www.tomshardware.com/reviews/radeon-hd-7870-review-benchmark,3148.html
> 
> More on the clocking aspects? If it OCs that well, with that jump in performance, that would be great, especially if Nvidia's 600s follow suit. PS... thanks for the welcome; loving TPU atm, I'm learning a lot and getting good info from a lot of the posters. And bonus = I haven't been kicked "yet" lol
> 
> ...



The last I heard, it was still supposed to launch on the 22nd. I haven't heard anything that says different, so I'm assuming it's true.

I guess we'll know for sure if we get a flood of reviews around midnight tomorrow night.


----------



## RigRebel (Mar 22, 2012)

Damn_Smooth said:


> The last I heard, it was still supposed to launch on the 22nd. I haven't heard anything that says different, so I'm assuming it's true.
> 
> I guess we'll know for sure if we get a flood of reviews around midnight tomorrow night.



I hope so... a tech friend of mine at CompUSA told me on Tuesday night that it's 5/22... idk, he's wrong a lot. I thought all the posts said 3/22, but he brings up a good point: no orders have come to his store yet. :/

Gosh, I hope it's 3/22... that gives me time to see if the 650 Ti is going to perform like a 570 (or if the 660 Ti is going to perform like a 580 or higher) before the D3 launch in May.

PPS: New update... a new leak, only 7 hours old! Some of it's old news, though... I know this may be a double post and I'm sorry, but since the reviews are only 7 hours old I didn't want them to get lost in a previous post and go unseen.
Forgive me please :beg:

http://www.fudzilla.com/home/item/26459-geforce-kepler-gk110-basic-specs-leaked


----------



## RigRebel (Mar 22, 2012)

Damn_Smooth said:


> The last I heard, it was still supposed to launch on the 22nd. I haven't heard anything that says different, so I'm assuming it's true.
> 
> I guess we'll know for sure if we get a flood of reviews around midnight tomorrow night.



Well, Nvidia released its specs and numbers... and at first glance, YES! GPU Boost (as speculated), TXAA (as speculated), and dynamic non-precomputed PhysX, all with a lower TDP and 4-monitor support from a single card with 3D/2D Surround. Loving it so far... still checking all the specs, but wanted to get you the link quickly > http://www.geforce.com/hardware/desk...gtx-680/videos

TXAA looks freaking awesome... hope it's as smooth as they say, with minimal FPS loss. This could be an awesome bonus: a new AA tech and aspect of the card!

Full card specs > http://www.geforce.com/hardware/desk...specifications
An 8% increase over the GTX 580... not a huge difference, but all in all a lot of refinements and hopefully an all-around better card.


----------



## Tatty_One (Mar 22, 2012)

I cannot believe it's only 8%. Does that not mean that it is, in fact, no faster than the 7970??


----------



## RevengE (Mar 22, 2012)

I was looking at these cards. I just sold my 6950 and am looking for a new card, and these sparked my interest. If they're slower than a 7970 I may just get one of those, seeing that my computer is down until I get a video card. I was hoping to give Nvidia a shot this time.


----------

