Tuesday, October 16th 2012

Tesla K20 GPU Compute Processor Specifications Released

Specifications of NVIDIA's Tesla K20 GPU compute processor, announced way back in May, are finally disclosed. We've known since then that the K20 is based on NVIDIA's large GK110 GPU, a chip that has yet to power any GeForce graphics card. Apparently, NVIDIA is leaving some of the silicon disabled, which lets it harvest the large die more effectively. According to a specifications sheet compiled by Heise.de, Tesla K20 will feature 13 SMX units, compared to the 15 physically present on the GK110 silicon.

With 13 streaming multiprocessor (SMX) units, the K20 will be configured with 2,496 CUDA cores (as opposed to the 2,880 physically present on the chip). The core will be clocked at 705 MHz, yielding single-precision floating-point performance of 3.52 TFLOP/s and double-precision performance of 1.17 TFLOP/s. The card packs 5 GB of GDDR5 memory, with memory bandwidth of 200 GB/s. Dynamic Parallelism, Hyper-Q, and GPUDirect with RDMA are part of the new feature set. The TDP of the card is rated at 225 W, and understandably, it uses a combination of 6-pin and 8-pin PCI-Express power connectors. Built on the 28 nm process, the GK110 packs a whopping 7.1 billion transistors.
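Those headline numbers follow directly from the configuration above. As a quick sanity check, the sketch below recomputes them; the 1/3-rate FP64 ratio and the 320-bit/5 Gbps effective memory figures are assumptions consistent with the numbers in this article, not officially confirmed specifications.

```python
# Back-of-the-envelope check of the quoted Tesla K20 numbers.
# Assumptions (not official specs): FP64 runs at 1/3 the FP32 rate,
# and the 200 GB/s figure comes from a 320-bit bus at 5 Gbps per pin.

CORES    = 13 * 192   # 13 SMX units x 192 CUDA cores each = 2,496
CLOCK_HZ = 705e6      # 705 MHz core clock
FMA      = 2          # a fused multiply-add counts as 2 FLOPs per cycle

sp_tflops = CORES * CLOCK_HZ * FMA / 1e12
dp_tflops = sp_tflops / 3          # assumed 1/3-rate double precision

BUS_BITS = 320                     # assumed bus width (debated below)
GBPS_PIN = 5.0                     # assumed effective GDDR5 data rate
bandwidth_gbs = BUS_BITS / 8 * GBPS_PIN

print(f"FP32: {sp_tflops:.2f} TFLOP/s")       # ~3.52
print(f"FP64: {dp_tflops:.2f} TFLOP/s")       # ~1.17
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s") # ~200
```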
Source: Heise.de

29 Comments on Tesla K20 GPU Compute Processor Specifications Released

#3
HumanSmoke
Seems like a repeat of GF100/GF110. Hardly surprising if the die is 500 mm²+.

The first Fermi Teslas (M2050/M2070) out of the gate were basically GTX 470 spec. The more recently released M2090 is pretty much a GTX 580.

It would be interesting to know whether these Teslas are the same SKUs that ORNL is taking delivery of, or whether they are higher spec, since Oak Ridge seemed to be the high-profile launch customer.
sergionography: in other words it can almost match tahiti
Any comparison probably depends on actual performance efficiency rather than hypothetical. Unless you know what K20 brings to the table, a theoretical comparison is largely useless.

BTW: The original site no longer features any specifications.
#4
Solaris17
Super Dainty Moderator
Those cores... my god.
#6
bogami
An estimated 20 PFLOP/s peak! :eek: :twitch: And 3.52 TFLOP/s single precision, 1.17 TFLOP/s double precision.
Nice peak.
I wish for 20 PFLOP/s in a future GPU option. :D
#7
The Von Matrices
5GB of memory? That's not evenly divisible by the 384-bit memory bus it was rumored to have. Has it been reduced to 320-bit, which could produce an even 5GB?
#9
btarunr
Editor & Senior Moderator
The Von Matrices: 5GB of memory? That's not evenly divisible by the 384-bit memory bus it was rumored to have. Has it been reduced to 320-bit, which could produce an even 5GB?
Mixing and matching chip densities. Just like 2 GB is made possible on 192-bit.
#10
Prima.Vera
LOL. 7 billion transistors! I remember my old 3dfx Voodoo3 had 7 million transistors and was the fastest card around when it was released. :))))
#11
The Von Matrices
btarunr: Mixing and matching chip densities. Just like 2 GB is made possible on 192-bit.
True, that is possible. But would it really be done on a high-end compute card where consistent and predictable performance is important? It would be a headache for developers to have to track which addresses they write to and determine which data should go in the more and less interleaved parts of the memory space.
#12
Maban
It's probably twenty 256MB chips on a 320-bit bus.
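The arithmetic behind these layout guesses is easy to check. Below is a minimal sketch; every chip count and density in it is an illustrative assumption, since NVIDIA has not published the K20 board layout. GDDR5 chips use a 32-bit interface, and "clamshell" mode pairs two chips per 32-bit channel.

```python
# Capacity check for the GDDR5 layouts floated in this thread.
# All layouts are illustrative assumptions, not a known board design.

def total_gb(chips):
    """chips: list of (chip_count, megabytes_per_chip) tuples."""
    return sum(count * mb for count, mb in chips) / 1024

layouts = {
    # Maban's guess: 320-bit bus, ten 32-bit channels, clamshell
    "320-bit, 20 x 256 MB (clamshell)": [(20, 256)],
    # Uniform 2 Gbit chips on 384-bit only reach 3 GB
    "384-bit, 12 x 256 MB (uniform)":   [(12, 256)],
    # Mixed densities could still give 5 GB on a 384-bit bus
    "384-bit, 8 x 512 MB + 4 x 256 MB": [(8, 512), (4, 256)],
    # btarunr's example: 2 GB made possible on a 192-bit bus
    "192-bit, 2 x 512 MB + 4 x 256 MB": [(2, 512), (4, 256)],
}

for name, chips in layouts.items():
    print(f"{name}: {total_gb(chips):g} GB")  # 5, 3, 5, 2 GB
```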
#13
btarunr
Editor & Senior Moderator
The Von Matrices: True, that is possible. But would it really be done on a high-end compute card where consistent and predictable performance is important? It would be a headache for developers to have to track which addresses they write to and determine which data should go in the more and less interleaved parts of the memory space.
Low-level video memory management is handled by the API > CUDA > driver stack. Apps are oblivious to that; they are only told that there's 5 GB of memory, and to deal with it.
#14
largon
That die shot definitely has 384 bits' worth of memory bus...
#16
Xzibit
HumanSmoke: Any comparison probably depends on actual performance efficiency rather than hypothetical. Unless you know what K20 brings to the table, a theoretical comparison is largely useless.
In case you didn't know, Mark Harris points out that he works for Nvidia.

So you might want to check who runs the sites you're linking to if you want to link to unbiased information.

It would be like linking to sites/blogs run by AMD employees to make a point or further a viewpoint about an AMD product.

Just silly.
#18
cadaveca
My name is Dave
Whoa, how'd I miss this? Thanks for bumping, Smoke!

:roll:
#19
Xzibit
HumanSmoke: The report is a scientific paper published by the University of Aizu. It has nothing to do with Nvidia. Take your useless trolling elsewhere.
Talk about idiot fanboyism.

That site is run by Mark Harris, an Nvidia employee. Are you so naive as to think he's going to post unbiased research links on his site/blog?
Nvidia would find a way to fire him in a second if he posted links to research papers that put Nvidia in a bad light.

It only took me one mouse click to find out he was an Nvidia employee. Come on now. Who's trolling now?

At least show both sides, or attempt to, so you won't seem like an Nvidia cheerleader.
The performance of DGEMM in Fermi using this algorithm is shown in Figure 3, along with the DGEMM performance from CUBLAS 3.1. Note that the theoretical peak of the Fermi, in this case a C2050, is 515 GFlop/s in double precision (448 cores × 1.15 GHz × 1 instruction per cycle). The kernel described achieves up to 58% of that peak.
That's from a study by Oak Ridge National Laboratory along with the University of Tennessee and the University of Manchester in the UK.

58% is lower than 90% in DGEMM. Maybe Kepler GK100/110 makes a similar jump, who knows, but the chip in the GTX 280 managed only 34% in DGEMM.

What do I know, though. I would think Oak Ridge National Laboratory does, since they use the darn things. ;)
#20
HumanSmoke
Xzibit: Talk about idiot fanboyism.
Sure - I'll use your quotes (and mine, since you obviously can't RTFP) as examples:
Xzibit: That's from a study by Oak Ridge National Laboratory along with...
Yup. Which just goes to prove that real-world and theoretical numbers differ. Which is exactly what I noted. Likewise, I made no assumption based upon a part whose performance is unknown... or do you have access to Kepler information that everyone outside of Nvidia and the HPC projects doesn't?
Unless you know what K20 brings to the table, a theoretical comparison is largely useless.
So what is the DGEMM efficiency of Kepler? All I see here is a brief synopsis of Fermi.
And of course, at no point did I make an AMD vs Nvidia comparison - quite the opposite, in fact:
Any comparison probably depends on actual performance efficiency rather than hypothetical
Get back under your bridge, Xzibitroll - I'm sick of having to explain simple compound sentences to you.
#21
T4C Fantasy
CPU & GPU DB Maintainer
Xzibit: Talk about idiot fanboyism. [...] 58% is lower than 90% in DGEMM. Maybe Kepler GK100/110 makes a similar jump, who knows, but the chip in the GTX 280 managed only 34% in DGEMM. What do I know, though. I would think Oak Ridge National Laboratory does, since they use the darn things. ;)
www.techpowerup.com/gpudb/923/NVIDIA_Tesla_C2050.html

Previous-gen NVIDIA architectures calculate floating-point throughput at the shader clock, so the C2050 would be 1 TFLOP/s of single precision.
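For reference, running the Fermi numbers at the shader ("hot") clock reproduces both figures quoted in this thread; a minimal check, assuming Fermi's half-rate double precision:

```python
# Tesla C2050 (Fermi): FLOPs are counted at the shader clock,
# and FP64 runs at half the FP32 rate on this architecture.
cores, shader_clock_hz = 448, 1.15e9
sp = cores * shader_clock_hz * 2 / 1e9  # FMA = 2 FLOPs -> ~1030 GFLOP/s
dp = sp / 2                             # ~515 GFLOP/s, matching the paper
print(f"SP: {sp:.0f} GFLOP/s, DP: {dp:.0f} GFLOP/s")
```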
#22
Xzibit
T4C Fantasy: www.techpowerup.com/gpudb/923/NVIDIA_Tesla_C2050.html

Previous-gen NVIDIA architectures calculate floating-point throughput at the shader clock, so the C2050 would be 1 TFLOP/s of single precision.
Those tests are done in double precision. For single precision it would be SGEMM.
The C2050 is 515 GFlop/s in double precision, so it's only 58% of the advertised figure.

Kepler would have to make up a lot of ground in efficiency.

The point I was trying to make was:

Pointing to a 90% efficiency of Tahiti in DGEMM as if it's a bad thing, especially from a site/blog of an Nvidia employee.
As compared to what? Nvidia's Fermi at 58% efficiency in DGEMM? That Nvidia employee doesn't have a link to that on his site. Wonder why?
Even if Tahiti ran at 58%, it would still be twice as fast in DGEMM compared to Fermi.

Given the K20 is of similar spec to the W9000 and W8000, it would have to bring its efficiency up in such a comparison.
Maybe the K20 has better efficiency, but when someone says "hey, look, AMD can only do 90%" and fails to mention Nvidia only does 58%, that's kind of cheerleading to me.

We need to see Kepler's DGEMM efficiency to see what % it is of its specs/as advertised.

:toast:

Update:
Nvidia's marketing slides put the DGEMM efficiency of K20 at 80% and Fermi at 60-65%. Since Oak Ridge National Laboratory's figure for Fermi was 2% shy of 60%, I would put the window at 78-80% efficiency for K20. So we are more than likely going to see a draw between the K20 and W9000 in DGEMM, if the marketing slides' 80% efficiency is met.
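Taking the figures quoted in this thread at face value - the ORNL paper's measured Fermi efficiency, the 90% Tahiti figure cited upthread, and Nvidia's unverified 80% slide for K20 - the efficiency-adjusted comparison works out as below. These inputs are claims and estimates, not benchmark results.

```python
# Effective DGEMM throughput = theoretical FP64 peak x efficiency.
# Inputs are the figures quoted in this thread, not measurements.

cards = {
    #                          peak GFLOP/s, DGEMM efficiency
    "Tesla C2050 (Fermi)":    (515,  0.58),  # measured, ORNL paper
    "FirePro W9000 (Tahiti)": (1000, 0.90),  # figure cited upthread
    "Tesla K20 (GK110)":      (1170, 0.80),  # Nvidia slide, unverified
}

for name, (peak, eff) in cards.items():
    print(f"{name}: {peak * eff:.0f} GFLOP/s sustained ({eff:.0%} of peak)")
# ~299, 900, 936 GFLOP/s -- consistent with the predicted near-draw
```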
#23
HumanSmoke
Xzibit: Update: Nvidia's marketing slides put the DGEMM efficiency of K20 at 80% and Fermi at 60-65%.
As per usual, the troll can't even parse a sentence without altering the content to suit its needs:
Kepler GK110 will provide over 1 TFlop of double precision throughput with greater than 80% DGEMM efficiency
Nvidia whitepaper, May 2012 (PDF).
Still, coming from someone who openly admits to lying, and up until recently didn't even know the difference between a 3D rendering card and a math co-processor, it's hardly surprising.
Xzibit: I lied i just wanted to
Keep up with the straw man AMD vs Nvidia bullshit and the hypothetical numbers game. I'll stand by my preference for real-world testing.*
HumanSmoke: Any comparison probably depends on actual performance efficiency rather than hypothetical. Unless you know what K20 brings to the table, a theoretical comparison is largely useless.
*By your reasoning, the AMD FirePro W9000 (3.99 TF SP, 1 TF DP) should be four times faster than a Quadro 6000 (1 TF SP, 515 GF DP)... after all, numbers don't lie, right?
No...
No...
No
#24
Xzibit
HumanSmoke: As per usual, the troll can't even parse a sentence without altering the content:

Nvidia whitepaper, May 2012 (PDF).
Now we are taking marketing slides as facts? Guess that doesn't surprise me.

This coming from the idiot who didn't even know who ran GPGPU.ORG:

Mark Harris,
Chief Technologist, GPU Computing @ Nvidia


I thought we wanted hard numbers, not marketing B.S.

Are you gonna link to Jen-Hsun Huang's blog next, so we can get Nvidia links from there as well? :laugh:
#25
HumanSmoke
Xzibit: Now we are taking marketing slides as facts?
You may be. I'm just pointing out that you can't parse a simple sentence without including personal bias.

In fact, you are the one who introduced the Nvidia information:
Xzibit: Update: Nvidia's marketing slides put the DGEMM efficiency of K20 at 80%
So if anyone is treating marketing slides as fact...it's you.
Xzibit: This coming from the ***** who didn't even know who ran GPGPU.ORG
Site reports university research paper. Linked.

The troll tries to sidetrack by highlighting a non-issue, to divert attention from the fact that their straw man argument has been shown to be garbage. Achievement unlocked!!!