Monday, January 21st 2013
NVIDIA to Name GK110-based Consumer Graphics Card "GeForce Titan"
2013 started off on a rather dull note for the PC graphics industry. NVIDIA launched its game console platform "Project: Shield," while AMD rebranded its eons-old GPUs as the Radeon HD 8000M series. Apparently, it could all change in late February, with the arrival of a new high-end single-GPU graphics card based on NVIDIA's GK110 silicon, the same big chip that goes into the company's Tesla K20 compute accelerator.
NVIDIA may have drawn some flak for stretching its "GTX" brand too far into the mainstream and entry-level segments, and wants its GK110-based card to stand out. It is reported that NVIDIA will carve out a new brand extension, the GeForce Titan. Incidentally, the current fastest supercomputer in the world bears that name (Cray Titan, located at Oak Ridge National Laboratory). The GK110 silicon physically packs 15 SMX units, totaling 2,880 CUDA cores. The chip features a 384-bit wide GDDR5 memory interface.
Source:
SweClockers
203 Comments on NVIDIA to Name GK110-based Consumer Graphics Card "GeForce Titan"
www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last
We know that the compute version always clocks lower than its GeForce counterpart, so this GeForce "Titan" could clock higher in the final revision. But NVIDIA might want to keep the clock lower, just like the Tesla cards, to keep the TDP at 235 W.
As for the graphics card, sell it at US $599 (not $800) or lower and this is an instant winner (for high-resolution gamers and folders alike...). I am only worried about heat, as usual....
Man never went to the moon either, and everything is a conspiracy. :rolleyes:
Ignore the close-minded sheeple; it'll take you about a decade to get through to them :banghead:
45% faster at 2560x1600
29% faster at 1920x1080
21% faster at 1680x1050
6% faster at 1280x800
Avg. of 25% faster.
AMD needs something 25% faster on average than the 7970 GHz Edition to keep the marginal lead it has.
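As a quick sanity check on that average, here is a tiny sketch using only the four per-resolution figures quoted above (which are the poster's projections, not measured numbers):

# Arithmetic mean of the quoted per-resolution speedups (illustrative only)
speedups = {
    "2560x1600": 45,
    "1920x1080": 29,
    "1680x1050": 21,
    "1280x800": 6,
}
avg = sum(speedups.values()) / len(speedups)
print(f"Average speedup: {avg:.1f}%")  # ~25.2%, matching the ~25% cited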
The max I expect the clock to be is around 800-900 MHz if we are to be optimistic. 732 MHz is the Tesla version's clock, which is lower because Tesla cards have to be guaranteed for 24-hour operation, a warranty GeForce cards don't require.
But if 732 MHz is the clock speed, then NVIDIA is in for trouble, because if we were to do the calculations:
GTX 680 has 1,536 cores at around 1,100 MHz: 1536 x 1100 = 1,689,600
GK110 has 2,688 cores at 732 MHz: 2688 x 732 = 1,967,616
1689600 / 1967616 = around 0.85, or 85%, which works out to roughly 15% extra theoretical power over the GTX 680; of course, add another 10% for the added memory bandwidth benefit. So making a chip twice the size of the GTX 680 for 15-25% extra performance is very meh, which really makes me doubt it's a 732 MHz part. Otherwise NVIDIA is much better off making a part closer to 2,000 cores with the higher clock-speed advantage; that's definitely the smarter way to get 30% more performance and be able to sell it at good prices.
And most likely this is what AMD is doing next round: refining GCN for better efficiency to pack more cores into the same power envelope while maintaining the clock-speed advantage.
Agreed, that clock speed sounds way too low. I think this chip has around 7 billion transistors in it, so I can imagine that getting a GHz out of it will be challenging. No doubt it would be more comfortable on a smaller process technology.
I love the blatant hypocrisy people exhibit online. If the guy has fans here, fair enough. You can all live in the conspiracy theory bunker and think the world is out to get you. But we don't all have to hold hands and hug. :toast:
FTR, what does the Titan supercomputer do? Crunch numbers for science. Using the HPC variant of the very card we are talking about. :slap:
EDIT:
Oh yeah, why don't more people come help out?
www.worldcommunitygrid.org/about_us/viewGridComputingBasics.do
www.techpowerup.com/forums/forumdisplay.php?f=68
www.techpowerup.com/forums/forumdisplay.php?f=67
Carry on:)
Edit: On topic - I will enjoy watching some of our extreme overclockers have fun with these cards while I sit this one out. Too expensive for me.
But still, at 900 dollars... not a chance I'm gonna buy it.
'__'
Also, I usually work these things backwards.
I personally think 7 GHz/384-bit is a safe bet for 'a' GK110 card. It might not be the clock of the one released, but it's a starting point for what to expect within 300 W.
Working backwards from the 680, a 14-SMX card with a 384-bit bus at the same clocks as the 680 would require exactly that amount of bandwidth:
(2688 / 1536) / (384 / 256) = 1.75 / 1.5 ≈ 1.167; 1.167 x 6000 MHz ≈ 7000 MHz effective.
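That "work backwards from the 680" estimate as a sketch, taking the 680's 6 GHz effective memory clock as the baseline and keeping bandwidth-per-core constant (a simplifying assumption, not an announced spec):

# Scale the GTX 680's memory clock so bandwidth-per-core stays constant
# (all figures are discussion assumptions, not announced Titan specs).
gtx680_cores, gk110_cores = 1536, 2688        # 8 SMX vs. a hypothetical 14-SMX part
gtx680_bus, gk110_bus = 256, 384              # memory bus width, bits
gtx680_mem_clock = 6000                       # MHz effective (GDDR5)

core_ratio = gk110_cores / gtx680_cores       # 1.75x the shaders to feed
bus_ratio = gk110_bus / gtx680_bus            # the 1.5x wider bus covers part of that
needed_clock = gtx680_mem_clock * core_ratio / bus_ratio
print(f"Memory clock needed: {needed_clock:.0f} MHz effective")  # ~7000 MHz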
Now, the optimal (most efficient) number of units (including SFUs) in gaming for 48 ROPs is somewhere between 2,800 and 2,900. This would have 3,136. AMD may have 2,560, so that's kind of a wash.
Then it's obviously about voltage and clock potential/efficiency within a TDP (probably 300 W). AMD may clock its product closer to NVIDIA's max clock within the TDP, choosing instead for its max clock at 300 W to be closer to the potential of the 28 nm process (1,200-1,300 MHz). NVIDIA's clock choice may be more power efficient, say if they scale from ~975 MHz to 1,100+ MHz. AMD's choice may be more die-size/cost efficient.
Point is, at the end of the day, if one has ~10% more usable units but more bloat, and the other clocks ~10% higher, what is the better part? Does it really matter? They should be relatively close.
It isn't beyond the realms of possibility that the GeForce version has a full 15 SMX. Tesla and Quadro generally have more functionality fused off than GeForce, presumably to fuse off out-of-spec logic blocks and to reduce power requirements. A GeForce card probably won't be under the same constraint. It's also not unheard of for TSMC's process to have improved, and/or for a revision from the first tranche of wafers to have taken place. If the original ~20,000 GPUs going to HPC deployment are 87-93% functional, then I'd assume that there must be a percentage of fully functional chips.
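Those 87-93% figures line up with 13 or 14 of the 15 physical SMX units being enabled; a quick sketch, assuming 192 CUDA cores per SMX (the harvested configurations below are commonly discussed salvage options, not confirmed SKUs):

# Fraction of the GK110 die's shaders enabled per harvested configuration
cores_per_smx = 192
physical_smx = 15                              # full GK110: 15 x 192 = 2880 cores
for enabled_smx in (13, 14, 15):
    cores = enabled_smx * cores_per_smx
    print(f"{enabled_smx} SMX: {cores} cores, {enabled_smx / physical_smx:.1%} of the die's shaders")
# 13 SMX -> 2496 cores (86.7%), 14 SMX -> 2688 cores (93.3%), 15 SMX -> 2880 cores (100%)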