# GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot



## btarunr (May 17, 2012)

With its competition held in check thanks to the strong performance of its GK104 silicon, NVIDIA was bold enough to release die-shots of its GK110 silicon, which made its market entry as the Tesla K20 GPU-compute accelerator. This opened the floodgates to speculation over minute details of the new chip, from various sources. We found one of the most plausible analyses, by Beyond3D community member "fellix", whose annotated image appears to chart out the chip's component layout through pattern recognition and educated guesswork. 

It identifies the 7.1 billion-transistor GK110 silicon as having 15 streaming multiprocessors (SMX). Earlier this week, sources close to NVIDIA confirmed the SMX count to TechPowerUp. NVIDIA revealed that the chip retains the SMX design of GK104, in which each unit holds 192 CUDA cores. Going by that, GK110 has a total of 2,880 cores. Blocks of SMX units surround a centrally-located command processor, six setup pipelines, and a region holding the ROPs and memory controllers. There are a total of six GDDR5 PHYs, which could amount to a 384-bit wide memory interface. The chip talks to the rest of the system over PCI-Express 3.0.
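Those headline figures follow from simple arithmetic (a quick sketch; the SMX and PHY counts are still inferred from the die shot, not confirmed by NVIDIA):

```python
# Back-of-the-envelope check of the rumored GK110 configuration.
SMX_COUNT = 15        # streaming multiprocessors identified on the die shot
CORES_PER_SMX = 192   # SMX design carried over from GK104
PHY_COUNT = 6         # GDDR5 PHYs visible on the die
BITS_PER_PHY = 64     # each GDDR5 memory controller is 64 bits wide

cuda_cores = SMX_COUNT * CORES_PER_SMX   # 2880
bus_width = PHY_COUNT * BITS_PER_PHY     # 384

print(f"CUDA cores: {cuda_cores}")       # CUDA cores: 2880
print(f"Memory bus: {bus_width}-bit")    # Memory bus: 384-bit
```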





*View at TechPowerUp Main Site*


----------



## Hustler (May 17, 2012)

Sigh...enough already with these bullshit uber high-end nonsense cards that will only sell to basement-dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660 Ti with 2GB of VRAM and low power draw, you know, a card that the majority of PC gamers can actually afford to buy, or are sensible enough to choose over OTT crap like a 690.


----------



## Chappy (May 17, 2012)

Any news on when I'll get to lay my hands on these chips? Late 2013?


----------



## hardcore_gamer (May 17, 2012)

Nvidia should fix the yield issues and make 680s available before making SKUs with an even bigger die.


----------



## entropy13 (May 17, 2012)

hardcore_gamer said:


> Nvidia should fix the yield issues and make 680s available before making SKUs with even bigger die.



It took 3 posts.


----------



## the54thvoid (May 17, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
> 
> Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.



It's a tech site.  GK110 is NOT for gaming.  It's a compute card.  This card is of interest to the scientific and HPC market - it's definitely newsworthy.

Anandtech also has a nice breakdown of K10 and K20.

http://www.anandtech.com/show/5840/...s-gk104-based-tesla-k10-gk110-based-tesla-k20

Adds this about the CUDA cores:



> GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8, which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10)


----------



## btarunr (May 17, 2012)

Chappy said:


> Any news when will I get to lay my hands on these chips? Late 2013?



Autumn-Winter. Probably as "GTX 780".


----------



## hardcore_gamer (May 17, 2012)

entropy13 said:


> It took 3 posts.



I'm getting a connection error. Wifi signal strength is very low here in my lab.


----------



## hardcore_gamer (May 17, 2012)

the54thvoid said:


> It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.



I think there is going to be a GeForce version of this card for gaming.


----------



## the54thvoid (May 17, 2012)

hardcore_gamer said:


> I think there is going to be a Geforce Version of this card for gaming.



I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture has some compute-only design to it.  

Tesla cards are always clocked low for power efficiency, but a fast-clocked GK110 would consume quite a bit of power. I don't know if Nvidia has any plans to release GK110 as a desktop part. Maybe there will be a revision of GK110 into a GK114 for desktop, as a GTX 7xx card.


----------



## nikko (May 17, 2012)

A derivative with 2 SMX x 5 GPCs and 4 GDDR5 PHYs could easily be cut from this one: 1920 of the new, improved FP64 cores for a mid-range card, that is.

And history repeats itself, like with the 128-core 8800 GT and the 240-core GTX 280, separated by 9 months, with the 8800 GT dropping to $160, $110 and $86 shortly after that; the GTX 670 is comparable to the 8800 GT in this case.


----------



## Benetanegia (May 17, 2012)

> GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8, which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10)



Hmm, 960 full-rate FP64 cores is something noteworthy, definitely.

But what I'd like to know is whether the SMX is composed of 192 FP32 + 64 FP64 shaders, or only 192 shaders of which 64 are DP-capable. And is either of those options really so much better than what they did on Fermi (for an HPC part, I mean; for gaming there is no doubt)? Because 7 billion transistors is quite a lot; it would allow for a Fermi-based chip with at least 1280 SPs, I'm sure. How that would translate to performance and perf/watt is another story, but remember that a large part of why Kepler is so much more efficient is that Nvidia worked closely with TSMC from the start, something they never did for Fermi. The sheer architectural benefit on the perf/watt front has not been so clear to me since I heard of that relationship*. For GK107 the benefit is clearer, but Kepler does not seem to scale as well as Fermi did as you add SM(X)s. Or maybe it's just GK104 that has too many; admittedly it's not like we have many chips to compare. Of course, GK110 might/should use dynamic schedulers if they really want good HPC performance in all situations, and that might be the culprit of the "poor" scaling, so we'll see. And I'm just rambling so...

*Or the lack of such a relationship for Fermi, because I admit I used to take that kind of collaboration between a foundry and its customers for granted; I never thought it would be something "extraordinary".



the54thvoid said:


> I honestly can't say either way but the fact GK110 has far more CUDA cores aimed at double precision work (compute centric work) means the GK 110 architecture will have some compute only design.



Remember that all Fermi chips, including low-end ones, had DP-capable shaders (1:4 ratio), and GF100/110 had 1:2 DP shaders. Now the gaming-oriented Kepler chips have far less DP capability, which does not mean that GK110 is any less aimed at gaming than the entire Fermi line was. For example, there's no mention of a reduced number of texture mapping units, and except for the additional FP64 shaders the SMXes are supposedly identical, so they didn't want to compromise gaming performance.
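For what it's worth, the ratios at play here can be sanity-checked with quick arithmetic (per-SMX counts are from the Anandtech quote; the Fermi figure is the well-known 1:2 rate of GF100/110):

```python
# Rough FP64:FP32 ratios implied by the per-SM(X) core counts.
from fractions import Fraction

chips = {
    "GF100/GF110 (Fermi)": Fraction(1, 2),        # 1:2 DP rate
    "GK104 (Kepler, gaming)": Fraction(8, 192),   # 8 FP64 per 192 FP32 -> 1:24
    "GK110 (Kepler, compute)": Fraction(64, 192), # 64 FP64 per 192 FP32 -> 1:3
}

for name, ratio in chips.items():
    print(f"{name}: 1:{ratio.denominator // ratio.numerator} DP rate")

# Total FP64 cores on a full GK110: 15 SMX * 64 per SMX
print("GK110 FP64 cores:", 15 * 64)  # GK110 FP64 cores: 960
```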


----------



## Aquinus (May 17, 2012)

hardcore_gamer said:


> I'm having a connection error here. Wifi signal strength is very low here in my Lab





hardcore_gamer said:


> I think there is going to be a Geforce Version of this card for gaming.



Confucius say user who double posts didn't read the rules.
Please don't double post, there is an edit button for a reason. Thanks.


----------



## techtard (May 17, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
> 
> Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.



Gotta love angry poor people who lack reading skills. Stop being such a self-entitled whiner.


----------



## Shihab (May 17, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.



Ahem... 
You do realize that this is a tech site?


----------



## Benetanegia (May 17, 2012)

They have posted the white paper:

http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

Lots of interesting stuff. After a quick look, it does seem that everything related to scheduling and warp creation is not only back to GF100 levels, it goes a lot further. Honestly, looking at how they crammed in 2880 FP32 and 960 FP64 cores and all the other stuff that is close to 2x GK104, was it really necessary to simplify/cripple GK104's GPGPU capabilities so much? Apparently not on an area-efficiency basis; maybe for perf/watt? Not really, if their claim of 3x perf/watt is true. Maybe it was just so that GPGPU users had only one option: GK110-based parts. Damn you, Nvidia.

Ok. I'll continue reading.


----------



## Completely Bonkers (May 17, 2012)

I don't know why you are all taking Hustler so literally. What's with all this self-entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all *still* waiting for? Why get your knickers in a twist about a little preamble said out of frustration over the wait? I count 3 pedantic humour nazis. Really!


----------



## SIGSEGV (May 17, 2012)

What a pity; most of the peeps here are still dreaming that this card will become the GTX 780. 
Nvidia has clearly split its gaming cards and professional cards. 
So this is a Tesla card, for GPU computation.


----------



## Benetanegia (May 17, 2012)

SIGSEGV said:


> what a pity, most of peeps here still dreaming on this card will become GTX780



I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market; economies of scale. It retains all the gaming stuff too; Nvidia didn't back down on anything in that regard, something they did do in the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780, and not a GTX 685 for example, is actually on the pessimistic/realistic side. We all expect this to come late, or in 2013 now, and thus as the GTX 780. To dream would be to expect Nvidia to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.



SIGSEGV said:


> nvidia clearly has split gaming cards and professional cards based..
> so, this is tesla cards for gpu computations



You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.


----------



## Prima.Vera (May 17, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.



Did you read the article properly, or are you just enjoying trolling and being silly??! 

These GPUs are not for "nerds jerking off" or even PC gaming, but for professional use: CAD workstations, 3D simulation servers, etc. Not for the average Joe.


----------



## Frick (May 17, 2012)

Completely Bonkers said:


> I don't know why you are all taking Hustler so literally. What's all this self entitlement to criticise a guy who is excited about the affordable 660Ti that we are all *still *waiting for. Why get your knickers in a twist about a little preamble said out of frustration because of the wait. I count 3 pedantic humour nazis. Really!



Humour nazis? Where is humour involved?


----------



## erixx (May 17, 2012)

....in his moustache


----------



## atikkur (May 17, 2012)

OK, if it becomes a Tesla, just fine, but benchmarks are still needed when it actually arrives, just to know how far their compute effort has progressed. I like the idea of compute for gaming, though, so we can simulate everything for more lifelike graphics. It's just a shame that Nvidia, who started this, seems to be backing off of their own idea. Or maybe it's too much to ask for game devs to implement it? Don't know. I believe this could become a GeForce product too, in the next generation.


----------



## SIGSEGV (May 17, 2012)

Benetanegia said:


> I don't know why you'd say that. It's not profitable to create a chip only for the low volume HPC market. Economics of scale. It retains all the gaming stuff too, Nvidia didn't back down on anything in that regards, something they did do on the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be GTX780 and not GTX685 for example, is actually on the pessimistic/realistic side. We all expect this to come late or in 2013 now, and thus GTX780. To dream would be to expect Nvidia to create a new chip for the GTX780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.
> 
> 
> 
> You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadro's need texturing power, professional graphics is all about polygons.



Yeah, you can still expect it to come; nothing wrong with that.  
But Nvidia hasn't yet released an official statement about a GTX 780 or a GTX 685. I'm sorry, I'm not a psychic, so the fact for me right now is that GK110 is a Tesla card, and that Nvidia has clearly split gaming cards and professional cards.


----------



## Benetanegia (May 17, 2012)

SIGSEGV said:


> yeah, still you can expect it to come, nothing wrong with this
> even nvidia hasnt yet releasing an official statement about GTX780 nor GTX685 , i'm sorry, i'm not a paranormal so the fact for me now that GK110 is Tesla Cards and also nvidia clearly has already split gaming cards and professional cards.



Nvidia didn't make any official statement about the GTX 680 even 2 weeks before it launched; same for the GTX 690, same for the 670. What makes you think they will make a statement about a card that would be launching in 6+ months (more like 9 months)? And what makes you think that's a clear sign of Nvidia splitting their business? Don't be ridiculous.


----------



## SIGSEGV (May 17, 2012)

Benetanegia said:


> Nvidia didn't make any official statement about GTX680 even 2 weeks before it launched, same for GTX690, same for 670. What makes you think they will make an statement about a card that would be launching in 6+ months (more like 9 months)? *And what makes you think that's a clear sign of Nvidia splitting their bussiness? Don't be ridiculous*.



Maybe you should read the full story on Fermi and Kepler and previous generations of Nvidia gaming cards.


----------



## Benetanegia (May 17, 2012)

SIGSEGV said:


> maybe you should read the full story on fermi and kepler and previous nvidia gaming cards generations



Elaborate; you make no sense. Fermi is both a compute and a gaming chip, and so is Kepler, GK110 included. There is no compute-specific chip. You know that from the presence of texture units, ROPs... 

I've read almost everything available about them, and about GCN and previous AMD/ATI chips. So be clear about what you mean, because you make zero sense.


----------



## Prima.Vera (May 17, 2012)

Come on, man, the chips are programmed differently, for different tasks. In the past you could reflash the BIOS with a similar one from the gaming card, but today that's no longer possible because of the big hardware differences between them.


----------



## Xzibit (May 17, 2012)

Benetanegia said:


> Elaborate. You make no sense. Fermi is both a compute and gaming chip and so is Kepler, GK110. There is no compute specific chip. You know that because of the presence of texture units, rops...
> 
> I've read almost everything available about them, and GCN and previous AMD/Ati chips. So be clear about what you mean because you make zero sense.



Well, whatever they are doing, their stock has kept sliding since the announcement of the 670, where its high was 13.2 (13.6 weekend closing high). It's at 12.7/12.8 now, and it's supposed to be at 16.0^. So if they have something, it would be a good idea to announce it to slow down the slide, especially when they've been rolling out new products and their stock continues to fall.


----------



## phanbuey (May 17, 2012)

Prima.Vera said:


> Common man, the chips are programmed different, for different tasks. In the past you could have reflash the bios with a similar one from the gaming card, but today is no longer possible because of big hardware differences between those.



That is not why it's no longer possible. They use hardware, BIOS, and driver locks to prevent you from doing it, but the GPU is identical.


----------



## qubit (May 17, 2012)

Shame about the 384-bit data bus. A 512-bit bus would have optimized* the design and given us an even amount of memory, like on the GTX 680. Instead, the memory is gonna be lopsided like on the GTX 580. Crucially, the design ends up delivering less computing power overall when it's not optimized. Oh well, I guess building an optimized design wasn't within their transistor budget. 

*Optimized here means the design of a base-2 (binary) digital circuit fills out the binary address range, i.e. a power of 2. All the components within the chip should follow the power of 2 to do it properly, of course. For example, that would mean using 16 SMX units with 256 CUDA cores each, etc.
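To show what I mean about the memory ending up "uneven", here's a quick sketch of how framebuffer capacity falls out of bus width (assuming the common 2 Gbit GDDR5 chips on 32-bit channels, as on the GTX 680; actual board layouts vary):

```python
# Illustrative only: framebuffer capacity vs. bus width with
# 2 Gbit GDDR5 chips, each on its own 32-bit channel.
CHIP_BITS = 32          # data width of one GDDR5 chip
CHIP_CAPACITY_MB = 256  # a 2 Gbit chip holds 256 MB

for bus in (256, 384, 512):
    chips = bus // CHIP_BITS
    capacity = chips * CHIP_CAPACITY_MB
    power_of_two = (capacity & (capacity - 1)) == 0
    print(f"{bus}-bit bus: {chips} chips, {capacity} MB "
          f"({'power of 2' if power_of_two else 'lopsided'})")
```

A 256-bit or 512-bit bus lands on 2048 MB or 4096 MB, while 384-bit gives the "odd" 3072 MB.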


----------



## largon (May 17, 2012)

^Nonsense.


----------



## qubit (May 17, 2012)

largon said:


> ^Nonsense.



You have no idea about digital design. Care to elaborate?


----------



## erocker (May 17, 2012)

qubit said:


> You have no idea about digital design. Care to elaborate.



More heat

More expensive

A 512-bit bus really isn't necessary at all, especially with QDR memory.

That being said, I want this card. Hopefully it comes as the 7-series and is priced right.


----------



## qubit (May 17, 2012)

erocker said:


> More heat
> 
> More expensive
> 
> ...



I know the physical practicalities unfortunately limit them, and that's why I mentioned the transistor budget. Heat and power are related limitations, of course. Believe me, if Nvidia could put out a power-of-2 design, they would.

I wish I could show you the difference, but I have no practical way of demonstrating it. One way to look at it is to check out the really low-end cards, which are quite often optimized in this way: they don't take a huge transistor budget, or much heat and power, so they can afford to do this in a physical product.


----------



## largon (May 17, 2012)

It's a shame that neither Intel, AMD nor nV has any idea about chip building. They all make such abominations, so they must be _clueless_.


----------



## D007 (May 17, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
> 
> Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.



I love how easy it is for people to hate others just because they work hard and earn money. Try it sometime; maybe you could get a 690 and not hate on people who do..
I do AutoCAD; bring on the cores, I say..


----------



## wolf (May 17, 2012)

SIGSEGV said:


> maybe you should read the full story on fermi and kepler and previous nvidia gaming cards generations



You are saying this to one of the few members of this site who is truly versed in this subject... perhaps *you* should read the full story...

I definitely desire this card, but will most likely never own it. My GTX 570 can keep me going until the next-gen consoles raise the bar, I think.


----------



## TheoneandonlyMrK (May 17, 2012)

IMHO, NV looks to have leaned a little too much in the graphics direction and not enough toward compute. Given the future they're facing, that's probably not wise; this card's compute power is pwned by its render potential.

And I think it's telling that their K10 has two GPUs, a clear cost disadvantage (in manufacturing), when all prior generations started with single-GPU compute cards. This chip is simply worse than the last for that purpose; they threw AMD and Intel a bone in this dept, IMHO.


----------



## N3M3515 (May 17, 2012)

I don't care about this...
Release the kraken already!!.....err, I mean, release the goddamned GTX 660 Ti variant...
$200 and below 

@Benetanegia: I thought the superb performance per watt of GK104 was because they crippled its compute performance?



D007 said:


> maybe you could get a 690 and not hate on people who do..



I would prefer to spend my hard-earned money on a GTX 680 or an 1150 MHz Sapphire 7970; you know, all the prettiness in the world (GTX 690) won't save it from microstutter.


----------



## qubit (May 17, 2012)

largon said:


> It's a shame Intel, AMD nor nV have any idea about chip building. They all make such abominations so they must be _clueless_.



Quit trolling; I never said anything of the kind. You're obviously clueless about these things.

I'm surprised you actually thanked him for that useless post, erocker.


----------



## Xzibit (May 17, 2012)

I like the fact that people are googling the $4,000+ variant of this card when even their SLI system isn't worth that much.

Maybe wait until there is solid evidence of when Nvidia will even bother to turn this thing into a GeForce variant, and of how it will be castrated.

Obviously it wasn't possible, hence they made GK104; Jen-Hsun said that if it were feasible he would do it, but as he mentioned to investors, right now it's not.


----------



## wolf (May 17, 2012)

qubit said:


> quit trolling - I never said anything of the kind. You're obviously clueless about these things.
> 
> I'm surprised you actually thanked him for that useless post, erocker.



he's just having a crack, you know, taking the piss. I lol'd

I very much doubt he meant it in a serious way


----------



## qubit (May 17, 2012)

wolf said:


> he's just having a crack, you know, taking the piss. I lol'd
> 
> I very much doubt he meant it in a serious way



Hmmm... it doesn't look that way to me. I'll let him explain it. Anyway, never mind; it's not worth arguing about any further, especially if someone's being a plonker, lol.

Thing is, I did actually learn the basics of designing integrated digital circuits at uni many moons ago, and they taught me that building them out to the full power of 2 always maximises the design, and they explained exactly why. This principle remains true regardless of what process technology is used or how fancy and complicated the design is.

Unfortunately, the chip grows rapidly in size as you do this, and semiconductor companies like Nvidia, AMD and Intel know this all too well, so in a real-world device one is always limited by things such as transistor budget, physical size, clock-speed (fan-out/fan-in) limitations, power, heat, etc. Hence you get these odd, lopsided designs. The 384-bit bus is just one manifestation of this necessary compromise. It's just a shame to see it, which was my point in my original post on this thread.

It's hard for me to explain in words here the exact reasons why building an IC out to a power of 2 is optimal (perhaps someone else can do it better), which is why I advised erocker to consider the small, low-end graphics cards as an example, because for those, the physical budget is there to build them out to the full power of 2.


----------



## Steevo (May 17, 2012)

As per my last post in the last thread, which disappeared........not that this is going to be a failure.


But an overclocked 670 with 192 fewer CUDA cores and a 4% faster base clock is only 1% faster than a stock 680.


Do the math: frequency counts more here than cluster counts; they went the way they did to attain such great clocks to meet their performance needs. I see this being 70% of the speed of a 690; it just needs to be priced accordingly.


----------



## qubit (May 17, 2012)

Steevo said:


> But a overclocked 670 with 192 less cuda cores and 4% faster base clock rate is only 1% faster than a stock 680.
> 
> Do the math, frequency counts more with this than cluster counts, they went the way they did to attain such great clocks to meet their performance needs. I see this being 70% of the speed of a 690, it just needs to be priced accordingly.



The SMX clusters are twice as wide as on the GK104 among other things, so it looks like the card might eat the GTX 690 for breakfast. This AnandTech article has a nice writeup on it.


----------



## TheoneandonlyMrK (May 18, 2012)

qubit said:


> The 384-bit bus is just one manifestation of this necessary compromise. It's just a shame to see it, which was my point in my original post on this thread.



I've not read the whole thread, so forgive me if I have this completely wrong, but AMD's 7970 uses a 384-bit combination of memory buses, i.e. an IOMMU bus of 128 bits plus a 256-bit ROP memory bus.

Could the additional 128 bits of Nvidia's bus not be IOMMU too, given that they are adhering to the same PCIe 3.0 spec? AFAIK it calls for virtualized memory support, something they both claim as doable in this gen, and Nvidia announced VGX, which surely needs IOMMU support??


----------



## Jurassic1024 (May 18, 2012)

Hustler said:


> Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
> 
> Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.



These are workstation GPUs (Tesla) that go for upwards of $4000 USD, not for the nerds jerking off in basements you described.


----------



## garyab33 (May 18, 2012)

I really hope two of these cards in SLI will finally be able to run Metro 2033 at 1920x1080 with everything maxed out (DoF & tessellation on), and Crysis 1 (16x AA and 16x AF) with everything set to highest in the NV control panel, at a stable 100-120 fps. The GTX 680 is a joke; it's only slightly better than the GTX 580 in these two games. Not worth the wait or the price.


----------



## Lost Hatter (May 18, 2012)

And Skynet is born.


----------



## Steevo (May 18, 2012)

qubit said:


> The SMX clusters are twice as wide as on the GK104 among other things, so it looks like the card might eat the GTX 690 for breakfast. This AnandTech article has a nice writeup on it.



Except games aren't double precision, and the FP64 cores are the only extra processing power listed in the article. 


Will it be faster at compute tasks? Absolutely. But much in the same way ATI used to have multiple shader types, it will be harder to schedule for, much like what ATI had to use drivers to do setup for, for years. I think it will be interesting to see what performance is like with different CPUs.


----------



## Disparia (May 18, 2012)

qubit said:


> ...
> 
> Thing is, I did actually learn the basics of designing integrated digital circuits at uni many moons ago and they tought me that building them out to the full power of 2 always maximises the design and they explained exactly why. This principle remains true regardless of what process technology is used or how fancy and complicated the design is.
> 
> ...



Hope you still have some notes from class? 

Otherwise I'm not seeing the shame in it. Before Fermi, memory addressing was 32-bit (storing 64-bit values across multiple 32-bit addresses; now *that* is far from optimal!). With 64-bit addressing in Fermi and newer, it seems each access would hit 1 (possibly 2?) of its 6 memory controllers. There's not enough nitty-gritty information in the GK110 whitepaper, nor understanding on my part, to say anything definitive about how their memory management works.

And in reality, suboptimal or not, I'll take 384-bit over 320-bit over 256-bit. If they want to give me 448-bit or 512-bit, that would be awesome too. Well, awesome in theory; I still haven't been able to work up the case for the company to buy Teslas for my servers.


----------



## ypsylon (May 19, 2012)

Really funny reading some of the answers (over many, many forums). Boys and girls, you haven't got a clue how much Tesla cards cost or at what segment they are aimed.

Ask yourself: who the hell would buy an industrial VGA card for home use, priced at 4000+ USD/EUR? Nobody cares about games on Teslas. Don't compare GeForce with Tesla or Quadro. It's like comparing an old rusty bike to a Ferrari or an SLR Mercedes.


----------



## Steevo (May 19, 2012)

ypsylon said:


> Really funny reading some answers (over many, many forums). Boys and girls you haven't got a clue how much Tesla cards cost and at what segment are aimed.
> 
> Ask yourself: WTH would buy industrial VGA for home use with price at 4000+ USD/Euro? Nobody cares about games with Teslas. Don't compare GeForce with Tesla or Quadro. Like comparing old rusty bike to Ferrari or SLR Mercedes.



Thanks gramps.....all of us boys and girls who don't work and live in our parents basements and dream of making $5 a hour know nothing. We sure are glad we can jump on our rusty old bike and go play with our friends after we game on our mom's old Inspiron with a 6400 graphics card and shoot them up real good and fast with our Intel Gigahurts processor. 

Maybe you could come over and show us your fancy car and stuff, and bring us icecream and other treats huh?


----------






## Sinzia (May 19, 2012)

Steevo said:


> Thanks gramps.....all of us boys and girls who don't work and live in our parents basements and dream of making $5 a hour know nothing. We sure are glad we can jump on our rusty old bike and go play with our friends after we game on our mom's old Inspiron with a 6400 graphics card and shoot them up real good and fast with our Intel Gigahurts processor.
> 
> Maybe you could come over and show us your fancy car and stuff, and bring us icecream and other treats huh?



Had to laugh at this one, thanks.

I'm doubtful we'll see a GK110 based gaming card this generation, seems GK110 will be the compute (read: Tesla) card.


----------



## TheoneandonlyMrK (May 19, 2012)

Sinzia said:


> I'm doubtful we'll see a GK110 based gaming card this generation, seems GK110 will be the compute (read: Tesla) card.



I don't see how you get to that result. If they are making GK110 then they are producing across bins; this spec of the part might not be the highest, but it probably is. Either way there will be less capable chips; the poorer cousins, let's call them, will see the light of day too. I don't see a scenario where they can NOT do some kind of consumer card, since the lower-binned parts normally outnumber full-spec chips.

I would say this spec of GK110 chip won't hit consumer cards until the 7xx series, IMHO.


----------



## radrok (May 19, 2012)

Sinzia said:


> Had to laugh at this one, thanks.
> 
> I'm doubtful we'll see a GK110 based gaming card this generation, seems GK110 will be the compute (read: Tesla) card.



I'm more inclined to think like Sinzia: such a big chip would probably be hard to use in consumer products with noticeable profit, and this is even bigger than GF100/110, if I'm not mistaken.

Anyway, I think it all depends on the AMD/ATI 89xx series and how early they launch it. 
Nvidia may just continue using smaller chips in their GeForce lineup, because they are less likely to encounter the yield issues and huge costs linked to very big chip designs.


----------



## Steevo (May 19, 2012)

radrok said:


> I'm more inclined to think like Sinzia, such a big chip would probably be hard to use in consumer products with noticeable profits, this is ever bigger than GF100/110, if I'm not mistaken.
> 
> Anyway I think it all depends on AMD/ATI 89xx series and how early they will launch it.
> Nvidia may just continue using smaller chips on their Geforce lineup because they are less likely to encounter yield issues and huge costs linked to very big chip designs.



They seemingly learned from Fermi and the shellacking they took on it. You make significantly less if you have more die area, and that is what they are selling: a GPU die. Plus, every extra mm means the dies have a higher chance of flaws, and more power gates, or more core voltage, are needed to maintain stability at a given speed. It just makes sense to make the chip more efficient and push for a higher frequency than to aim for bigger dies. The 670 is a prime example of this; that is an amazing card.


----------



## EpicShweetness (May 19, 2012)

http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

This PDF will help with the speculation and such. Apparently there are 16 texture units per SMX, 240 total! The L2 cache is doubled, so one can only speculate the ROPs are doubled too (128). The core clock and memory clock are still unknown, but the data compiled on this monstrosity is staggering! I would gladly pay $600 or $700 for a chip of this magnitude... well, if I needed that much power; my 7870 is amazing all by itself.


----------



## largon (May 22, 2012)

Judging by the GK110 die shot, the ROP count looks to be the same as Fermi's.


----------



## Johannesburg (May 23, 2012)

hardcore_gamer said:


> Nvidia should fix the yield issues and make 680s available before making SKUs with even bigger die.



Maybe TSMC should fix their problems for themselves.


----------



## Xzibit (May 23, 2012)

Johannesburg said:


> Maybe TSMC should fix their problems for themselves.



I don't think TSMC has a problem. They provide a service which is in demand. It's a case of having your own means or relying on someone else.

Few companies take the time and money to invest in being self-reliant; if you're not, you get in line along with the rest of them.


----------



## Xzibit (May 25, 2012)

Since I haven't seen any updated news:

Nvidia's investor meeting was today.

Tesla cards will be available in Q4 2012. No mention of the GeForce line.


----------



## Prima.Vera (May 25, 2012)

ypsylon said:


> Really funny reading some answers (over many, many forums). Boys and girls you haven't got a clue how much Tesla cards cost and at what segment are aimed.
> 
> Ask yourself: WTH would buy industrial VGA for home use with price at 4000+ USD/Euro? Nobody cares about games with Teslas. Don't compare GeForce with Tesla or Quadro. Like comparing old rusty bike to Ferrari or SLR Mercedes.



A better comparison would be between a truck and a Dodge Viper: both have 8-litre engines, but only one can go 300 km/h while the other carries several dozen tonnes of cargo.


----------



## techtard (May 25, 2012)

And one looks better than the other when you add a lift kit.


----------

