Wednesday, March 21st 2012

GK110 Specifications Approximated

Even as the launch of the GK104-based GeForce GTX 680 nears, it is emerging that it will not be the fastest graphics processor in the GeForce Kepler family, if you sift through the specifications of the GK110 (yes, 110, not 100). Apparently, since GK104 meets or even exceeds NVIDIA's performance expectations, the large monolithic chip planned for this series is likely codenamed GK110, and it could end up with a GeForce GTX 700 series label.

3DCenter.org approximated the die size of the GK110 to be around 550 mm², 87% larger than that of the GK104. Since the chip is based on the same 28 nm fab process, this also translates to a large increase in transistor count, up to 6 billion. Shader compute power is up by only around 30%, because the CUDA core count doesn't grow by much (2,000~2,500 cores). The SMX (Kepler's streaming multiprocessor) design could also see some changes. NVIDIA could prioritize beefing up components other than the CUDA cores, which could result in features such as a 512-bit wide GDDR5 memory interface. Maximum power consumption is estimated at around 250~300 W. A launch cannot be expected before August 2012.
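As a sanity check on those figures, here's a quick back-of-envelope calculation. GK104's ~294 mm², ~3.54-billion-transistor die is the known reference point; everything else is the article's estimate:

```python
# Back-of-envelope check of the reported scaling figures.
# Known: GK104 die is ~294 mm² with ~3.54 billion transistors.
gk104_area_mm2 = 294.0
gk104_transistors = 3.54e9

gk110_area_mm2 = 550.0  # 3DCenter.org's estimate
area_ratio = gk110_area_mm2 / gk104_area_mm2
print(f"GK110 die area: {area_ratio:.0%} of GK104")  # ~187%, i.e. 87% larger

# On the same 28 nm process, transistor count scales roughly with area:
est_transistors = gk104_transistors * area_ratio
print(f"Estimated transistors: {est_transistors / 1e9:.1f} billion")  # ~6.6 billion
```

The area-scaled transistor estimate lands in the same ballpark as the "up to 6 billion" figure, so the numbers are at least internally consistent.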
Source: 3DCenter.org

34 Comments on GK110 Specifications Approximated

#1
krisna159
WOW!!! It surprises me... :eek::eek::eek::eek:
Posted on Reply
#2
erocker
The GTX 680 that should have been. Nvidia could have sold GK104's to everyone and their mother if it was priced at the mid-range level.
Posted on Reply
#3
semantics
erockerThe GTX 680 that should have been. Nvidia could have sold GK104's to everyone and their mother if it was priced at the mid-range level.
:D could have had another 8800GT where only a fanboy would buy anything else.
Posted on Reply
#4
punani
:twitch:

AMD, please kick Nvidia's ass so they will be forced to release this soon!? :nutkick:
Posted on Reply
#5
amdftw
This is fake!
400-450 mm² is the max size that's worth making.
Or if NV wants GPUs as huge as 550 mm², one GPU alone will cost 500-1000 USD; not the card, the card will be around 1500 USD, because only a few GPUs per wafer will come out good.
Posted on Reply
#6
NC37
NV couldn't say no to the classic monolithic ways. Course it makes sense. If you are gonna build a Death Star, you want it to be big and imposing...moon size! You want people to flee in terror in the face of your giant ball of death! Sure a fleet of Star Destroyer size balls coming at you will accomplish the same task...but...not very scary is it?
Posted on Reply
#7
DarkOCean
amdftwThis is fake!
400-450 mm² is the max size that's worth making.
Or if NV wants GPUs as huge as 550 mm², one GPU alone will cost 500-1000 USD; not the card, the card will be around 1500 USD, because only a few GPUs per wafer will come out good.
If this were just GK104 + 50% shaders, ROPs, and everything, 450 mm² would be possible, but they want to aim this GPU more at GPGPU, so they need to beef those parts up, and it will get bigger than that.
By the time they want to release this card, 28 nm yields will have improved a lot. They won't make the same mistake they did with 40 nm: releasing the monolithic GPU first, with very bad yields, poor power/performance, and high costs.
Posted on Reply
#8
Huddo93
I am going to be amazed if this ends up close to the true specs of the GK110. It will be an absolute monster! Too bad I'll probably spend my money on a GTX 680 before it even gets close to launch! Can't imagine the price of this monstrosity either; it's going to put a dent in any computer enthusiast's pocket :O And a side note: this could be the reason some of the brands at CES had power supplies going above and beyond 1.5k+ Watts; you really would need it if you were looking at Quad SLI :motherofgod:
Posted on Reply
#10
HumanSmoke
erockerThe GTX 680 that should have been
Probably... but then, if GK104 was released as a GTX 660 it becomes a double-edged sword: while Nvidia might accrue some kudos/hate (delete as applicable) for fielding a second-tier-named card to do battle with AMD's single-GPU top dog, it might come as cold comfort to the Nvidia BoD and shareholders if it was priced as such.
erockerNvidia could have sold GK104's to everyone and their mother if it was priced at the mid-range level.
Agreed. Pricing at $300-350 would pretty much destroy AMD's product stack (assuming the salvage part(s) are priced accordingly) and force AMD into price parity; a bit of a lose/lose scenario, I would have thought, even if a definite win for the consumer... although the elephant in the room is that Nvidia can only sell what's being produced. If production is constrained, then it's probably a given that Nvidia sells whatever they make in any case; the lower pricing would just ensure that the cards are perpetually out of stock/on back order.

As for GK110, I'd go out on a limb (not really) and say that a 384-bit memory bus is the minimum, with 448 or 512 not out of the question. GK104 probably isn't anywhere close to where Nvidia wants to be regarding bandwidth and double precision (especially as Tesla/Quadro won't be clocked anywhere close to GeForce, if history is any indication) in the HPC and workstation/pro graphics areas. Adding a larger memory controller and compute functionality in addition to an increased shader count is definitely going to balloon out the size of the die. Anyone know TSMC's max reticle size? What was GT200 - 576 mm²?
amdftwThis is fake! 400-450 mm² is the max size that's worth making
GT200, GT200b, GF110, GF100 and G80 would beg to differ. From 90nm to 40nm, Nvidia's big die increased from 484mm² to 520mm². If Nvidia have a proven track record in anything, it's that they have no qualms about using as much silicon real estate as is needed to include the functionality that they want.
amdftwOr if NV want as huge 550mm2 gpus, one gpu cost will be 500-1000USD, not the card, the card will be around 1500USD
Unlikely IMO. Can't see GK110 beating an HD 7990 or GTX 680 duallie in performance, and I could well see both those cards at a $899-999 price tag unless single-GPU card pricing takes a nosedive. As for GPU pricing: even if a 28 nm wafer was $10,000 (and it's probably a lot less), you are estimating that TSMC and Nvidia could squeeze out only 10-20 functional GPUs per wafer?
amdftwbecause only few gpus will be good in one wafer.
And your job title at TSMC is? Do TSMC have a VP of Trolling?
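For what it's worth, the dies-per-wafer math is easy to sketch with the standard approximation and a simple Poisson yield model. The defect densities below are purely assumed figures for illustration, not TSMC data:

```python
import math

# Rough dies-per-wafer estimate for a 550 mm² die on a 300 mm wafer.
wafer_diameter_mm = 300.0
die_area_mm2 = 550.0

# Standard approximation: wafer area / die area, minus an edge-loss term.
gross = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
         - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
print(f"Gross dies per wafer: {gross:.0f}")  # ~100

# Poisson yield: Y = exp(-A * D0), with die area converted to cm².
for d0 in (0.2, 0.4, 0.6):  # assumed defects per cm², illustration only
    good = gross * math.exp(-(die_area_mm2 / 100) * d0)
    print(f"D0 = {d0} /cm²: ~{good:.0f} fully functional dies")
```

At a high early-ramp defect density the "only a few good GPUs per wafer" claim isn't absurd, but partially disabled salvage parts recover much of the remainder, which is why big dies have remained economical for Nvidia in the past.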
Posted on Reply
#11
bogami
YES! :D I want this to come to market soon. So GK110 will be the name of the best GPU in the Kepler line-up. 550 mm² is a lot of space, and it will run hot at high clocks. We can expect 4 more clusters with 192 cores each (2,304 total) and a 512-bit wide GDDR5 memory interface. This is only guessing.
Posted on Reply
#12
Andrei23
If they actually release this, then I might just get rid of my central heating and save myself some money.
Posted on Reply
#14
Hayder_Master
I hope this won't be just news and headlines; this one should have been the GTX 680. Anyway, no problem if they don't release it soon, so they can plant it for when the ATI 8000 series comes out. It will be a kick-ass card with these specifications, but they had better do what they say.
Posted on Reply
#15
thematrix606
Hayder_MasterI hope this won't be just news and headlines; this one should have been the GTX 680. Anyway, no problem if they don't release it soon, so they can plant it for when the ATI 8000 series comes out. It will be a kick-ass card with these specifications, but they had better do what they say.
I don't see ATI's HD 8xxx series coming out any time soon; probably 6-8 months minimum. Which is a pity, of course.
Posted on Reply
#16
Benetanegia
I don't think it will be 512-bit, but 384-bit. Also, considering the tremendous die size difference, 2048 SPs seems out of the question; too low. That would be a 33% increase in SPs vs. an 84% increase in size, whereas Fermi was the same 33% increase in SPs but only 45% in size.

Since the SP count per SMX is probably going to be lower than on GK104, following Fermi's tradition, I personally have two possibilities in mind:

1) 8 GPC, 2 SMX per GPC, 160 SPs per SMX (10 SIMD lanes), 2560 SPs total, 128 TMU, 48 ROP, 384 bit.

2) 6 GPC, 3 SMX per GPC, 128 SPs per SMX (8 SIMD), 2304 SPs total, 144 TMU, 48 ROP, 384 bit.

I like the first one more than the second, because apart from a higher number of SPs that would justify the much bigger die and transistor count, it also ensures better geometry performance for the Quadro line, and mimics Fermi in that it doubles the GPCs while keeping the number of SMs per GPC the same, which would help in compute tasks too.

Of course there are countless combinations that would be possible, but those are the ones that make the most sense to me, all things considered.
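The two layouts can be tallied quickly (again, these are speculative configurations from this thread, not confirmed specs):

```python
# Tallying the two speculative GK110 layouts described above.
configs = [
    # (GPCs, SMX per GPC, SPs per SMX)
    (8, 2, 160),  # option 1: 10 SIMD lanes per SMX
    (6, 3, 128),  # option 2: 8 SIMD lanes per SMX
]
for gpcs, smx_per_gpc, sps_per_smx in configs:
    total_smx = gpcs * smx_per_gpc
    total_sps = total_smx * sps_per_smx
    print(f"{gpcs} GPC x {smx_per_gpc} SMX x {sps_per_smx} SP = "
          f"{total_smx} SMX, {total_sps} SPs total")
```

Option 1 works out to 16 SMX / 2560 SPs and option 2 to 18 SMX / 2304 SPs, matching the totals quoted above.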
Posted on Reply
#17
jaydeejohn
Shaders don't take up a lot of space in themselves.
It's all the other things, which the 104 seems to have little of.
Posted on Reply
#19
Shihab
amdftwThis is fake!
400-450mm2 is the max size that worth it to make.
Or if NV want as huge 550mm2 gpus, one gpu cost will be 500-1000USD, not the card, the card will be around 1500USD, because only few gpus will be good in one wafer.
Funny, me ol' 580's GF110 die size is 520 mm², and it cost a bit less than 400 GBP!!
Damn, I must've ripped off the bastards big time! :rolleyes:
Posted on Reply
#20
wolf
Better Than Native
I think if they just beef up the shaders by 25%-40% but up the ROPs and memory bus by 50%-100%, the card should stomp GK104 in pure gaming terms.

basically along the lines of what Benetanegia is saying, which generally makes a lot of sense, at least in my mind.
Posted on Reply
#21
Benetanegia
jaydeejohnShaders don't take up a lot of space in themselves
It's all the other things, which the 104 seems to have little of
Disclaimer: This post is not only a reply to yours; I'm trying to expand my reasoning on why I think as little as 2048 SPs makes very little sense in a die almost twice as big.

According to die shots, it looks like 50% of the die is shaders. Hard to tell, though, and I have not measured it "scientifically".



In any case, less shader space only strengthens my point. Remember that GK110 is more GPGPU oriented, so I'd say that number-crunching shaders are relevant, more so when die area is not going to increase a lot by adding them. What would be the purpose of increasing ROPs, TMUs, and other units beyond the increase in shader units? My example #1 is a 66% increase in shaders, a 50% increase in ROPs/MC, and 0% in TMUs (Fermi did well with 64), for a total increase in transistors and die size of 80%. That leaves ample die and transistor budget for increased registers and caches, which is what GK104 is lacking. The second example is a net 50% increase for everything, with even more space for cache and RF, but IMO a less likely scenario. Just speculating anyway.

I fail to see why they would increase shaders by only 30% and memory/ROPs by 100%, when that is not going to increase GPGPU or gaming performance; it's a waste. Fermi had only a 30% increase in shaders, 50% in ROPs/memory, and 43% in die size. The shaders only increased by 30% because they didn't have many options: SMs could only go from 48 SPs down to 32 SPs, and they had a single dual-issue dispatcher, so that really limited the number of total SPs IMO. Do your own calculations about which other options they had; I can tell you right now: not many. With Kepler's SMX I see no reason for not going with a larger amount. Given how many dispatchers they included in the SMXs, I think that 5 SIMD pairs is more than reasonable, as per my example number one. At least 4 pairs is a given, as per example 2.
Posted on Reply
#23
NHKS
@Benetanegia, I am inclined to agree with your point on higher SPs (CUDA cores), as that has been the defining characteristic of the Kepler architecture, and the scaling of features from GK104 to GK110 will more or less be linear..
384-bit.. hmm.. going by the presumption that GK110 will also be the 'base' for the next-gen Quadro cards (high-end, of course), I believe the next-gen Quadros/GeForces will feature even more memory & higher bandwidths, which might mean 512-bit.. but I could be wrong.. only time will tell
Posted on Reply
#24
Benetanegia
NHKS@Benetanegia, I am inclined to agree with your point on higher SPs (CUDA cores), as that has been the defining characteristic of the Kepler architecture, and the scaling of features from GK104 to GK110 will more or less be linear..
384-bit.. hmm.. going by the presumption that GK110 will also be the 'base' for the next-gen Quadro cards (high-end, of course), I believe the next-gen Quadros/GeForces will feature even more memory & higher bandwidths, which might mean 512-bit.. but I could be wrong.. only time will tell
384-bit was more like a hunch than anything else, tbh, and I was in fact thinking about benefits/negatives in the consumer market rather than the professional market... I was also thinking of the relation of GK104->GK110 being similar to GF104->GF100, but the die sizes of both clearly tell a different story. GK104 is a lot smaller than GF104; it's much more of a mainstream chip than GF104, which was an upper midrange/performance chip. Thinking about GK104 as 1/2 of GK110, a 512-bit interface could make sense.

With almost 2x the die size, and looking at the die shot of GK104, there most probably wouldn't be any problem fitting a 512-bit MC on the borders, and if they included a 512-bit interface, IMO it would be clocked at 5000 MHz or a little higher, but far from 6000 MHz. Reasons: 1) that should make the controller itself smaller, easily allowing for 512-bit; 2) professional cards will not feature extra-high clocks, so as to increase reliability, and ECC memory IS slower too; 3) high-density and fast memory chips would cost a fortune, an unnecessary fortune.

Running at 5000 MHz with a 512-bit interface, memory BW would be about 66% higher than on GK104 cards, and I think that is more than enough. It would also offer a little more BW than 384-bit @ 6 GHz, for almost no price increase, so you ended up convincing me. :)
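A quick check of those bandwidth numbers, taking the GTX 680's known 256-bit @ 6008 MT/s memory as the baseline and the two speculated GK110 configurations from this thread:

```python
# GDDR5 bandwidth comparison for the interfaces discussed above.
# bandwidth (GB/s) = bus width in bytes * effective data rate in GT/s
def bandwidth_gbps(bus_bits, mtps):
    return bus_bits / 8 * mtps / 1000

gk104 = bandwidth_gbps(256, 6008)      # GTX 680: 256-bit @ 6008 MT/s (known)
gk110_512 = bandwidth_gbps(512, 5000)  # speculated 512-bit @ 5 GT/s
gk110_384 = bandwidth_gbps(384, 6000)  # speculated 384-bit @ 6 GT/s

print(f"GK104:          {gk104:.0f} GB/s")
print(f"512-bit @ 5000: {gk110_512:.0f} GB/s")
print(f"384-bit @ 6000: {gk110_384:.0f} GB/s")
```

The 512-bit @ 5 GT/s option works out to 320 GB/s, roughly 66% more than GK104's ~192 GB/s, and indeed a little more than 384-bit @ 6 GT/s (288 GB/s).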
Posted on Reply
#25
theeldest
Sounds like they're following Intel's lead.

The top chip is no longer for enthusiasts but for professional use (all of Intel's 8-core chips go to server parts). I'm sure they'll release this baby for consumers, but most chips will go to workstation cards, where $2k-$5k isn't out of the question. A 512-bit bus makes more sense in that case.

I think AMD will still go mainstream-ish with their 8000 series. We'll get a performance increase but they'll be under 400 mm^2.

I also think that nVidia had a GK100 in the development cycle but didn't want another GF100: work longer on the development cycle so that the initial release comes off without a hitch.

If they had released the GK100, it'd have been worse than the GTX 480 was at release. But I'm sure the GK110 (GTX 780?) will be problem-free.
Posted on Reply