Monday, November 16th 2009

New NVIDIA Tesla GPUs Reduce Cost Of Supercomputing By A Factor Of 10

NVIDIA Corporation today unveiled the Tesla 20-series of parallel processors for the high performance computing (HPC) market, based on its new generation CUDA processor architecture, codenamed "Fermi".

Designed from the ground up for parallel computing, the NVIDIA Tesla 20-series GPUs slash the cost of computing by delivering the same performance as a traditional CPU-based cluster at one-tenth the cost and one-twentieth the power.
The Tesla 20-series introduces features that enable many new applications to perform dramatically faster using GPU Computing. These include ray tracing, 3D cloud computing, video encoding, database search, data analytics, computer-aided engineering and virus scanning.

"NVIDIA has deployed a highly attractive architecture in Fermi, with a feature set that opens the technology up to the entire computing industry," said Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee and co-author of LINPACK and LAPACK.

The Tesla 20-series GPUs combine parallel computing features that have never been offered on a single device before. These include:
  • Support for the next generation IEEE 754-2008 double precision floating point standard
  • ECC (error correcting codes) for uncompromised reliability and accuracy
  • Multi-level cache hierarchy with L1 and L2 caches
  • Support for the C++ programming language
  • Up to 1 terabyte of memory, concurrent kernel execution, fast context switching, 10x faster atomic instructions, 64-bit virtual address space, system calls and recursive functions
At their core, Tesla GPUs are based on the massively parallel CUDA computing architecture that offers developers a parallel computing model that is easier to understand and program than any of the alternatives developed over the last 50 years.

"There can be no doubt that the future of computing is parallel processing, and it is vital that computer science students get a solid grounding in how to program new parallel architectures," said Dr. Wen-mei Hwu, Professor in Electrical and Computer Engineering of the University of Illinois at Urbana-Champaign. "GPUs and the CUDA programming model enable students to quickly understand parallel programming concepts and immediately get transformative speed increases."

The family of Tesla 20-series GPUs includes:
  • Tesla C2050 & C2070 GPU Computing Processors
      • Single GPU PCI-Express Gen-2 cards for workstation configurations
      • Up to 3 GB and 6 GB (respectively) of on-board GDDR5 memory
      • Double precision performance in the range of 520 GFlops - 630 GFlops
  • Tesla S2050 & S2070 GPU Computing Systems
      • Four Tesla GPUs in a 1U system product for cluster and datacenter deployments
      • Up to 12 GB and 24 GB (respectively) of total on-board GDDR5 memory
      • Double precision performance in the range of 2.1 TFlops - 2.5 TFlops
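As a quick sanity check on the figures above, the quoted 1U-system throughput should be roughly four times the single-card numbers, since each S-series box holds four GPUs. A minimal sketch using only the values quoted in this list:

```python
# Sanity-check: each S-series 1U system holds four Tesla GPUs, so its
# quoted double-precision range should be roughly 4x the per-card range.
# All figures below come from the spec list above.
CARDS_PER_1U = 4
card_dp_gflops = {"C2050": 520, "C2070": 630}     # per-card peak DP
system_dp_tflops = {"S2050": 2.1, "S2070": 2.5}   # quoted per-system peak DP

for (card, gflops), (system, tflops) in zip(card_dp_gflops.items(),
                                            system_dp_tflops.items()):
    scaled = CARDS_PER_1U * gflops / 1000.0  # convert to TFlops
    print(f"{card}: {gflops} GFlops x {CARDS_PER_1U} = {scaled:.2f} TFlops "
          f"(quoted for {system}: {tflops} TFlops)")
```

The scaled per-card numbers (2.08 and 2.52 TFlops) line up with the quoted system ranges, so the system figures are simply four cards' peak throughput.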
The Tesla C2050 and C2070 products will retail for $2,499 and $3,999, respectively, and the Tesla S2050 and S2070 will retail for $12,995 and $18,995. Products will be available in Q2 2010. For more information about the new Tesla 20-series products, visit the Tesla product pages.

As previously announced, the first Fermi-based consumer (GeForce) products are expected to be available first quarter 2010.

53 Comments on New NVIDIA Tesla GPUs Reduce Cost Of Supercomputing By A Factor Of 10

#1
btarunr
Editor & Senior Moderator
So it's $3,999 if you want a GTX 380 before everyone else.
Posted on Reply
#2
Zubasa
Blah, we finally see the real Fermi.
OMG, the IO plate of this card is the exact opposite of the HD5k series. :roll:
Up to 3GB and 6GB (respectively) on-board GDDR5 memoryi
Typo on memory.
Posted on Reply
#3
HalfAHertz
The old Teslas didn't even have a display port coming out, so that's an improvement :D
Posted on Reply
#4
jessicafae
wow, Q2 2010. Also the price is not a good sign ($3,999). The current top Tesla (C1060), which is similar to a GTX 285, sells for ~$1,300. Not trying to get people upset, but GeForce Fermi might be really expensive (>$600? >$800?)
Posted on Reply
#6
Zubasa
HalfAHertz: Q1! :p
You better hope it's not Q3, the way the 40nm yields look :shadedshu
Posted on Reply
#7
shevanel
A few might drop by Q2 2010... then weeks/months of waiting for restock to hit.
Posted on Reply
#8
Roph
ATI should make a little more noise in this market. The compute potential in R800 is enormous.
Posted on Reply
#9
Zubasa
Roph: ATI should make a little more noise in this market. The compute potential in R800 is enormous.
It's not, because its own technology (Stream) and the standards OpenCL + DirectCompute are not yet ready to counter CUDA.
Posted on Reply
#10
kid41212003
btarunr
  • Up to 1 terabyte of memory, concurrent kernel execution, fast context switching, 10x faster atomic instructions, 64-bit virtual address space, system calls and recursive functions
The card can use up to 1TB of system memory?
btarunr
  • Double precision performance in the range of 520GFlops - 630 GFlops
That doesn't sound really impressive; anyone care to explain how powerful this card is compared to current workstation cards?
Posted on Reply
#11
shevanel
What are these cards used for? What is the main market?
Posted on Reply
#13
Benetanegia
kid41212003: That doesn't sound really impressive; anyone care to explain how powerful this card is compared to current workstation cards?
Products based on GT200 have 78 GFlops of double precision performance, per GPU.

EDIT: Maybe that doesn't sound impressive yet.

Finally, notice that even the GTX 285 still gets less than twice the double precision throughput of an AMD Phenom II 940 or Intel Core i7, both of which get about 50 GFlop/s for double and don’t require sophisticated latency hiding data transfer or a complex programming model.
That's from here: perspectives.mvdirona.com/2009/03/15/HeterogeneousComputingUsingGPGPUsNVidiaGT200.aspx
shevanel: What are these cards used for? What is the main market?
Scientists, engineers, economists... anyone with high computing requirements will benefit greatly from this. Until now most of them had to allocate computing time on a supercomputer (or build their own -> $$$$$$$$$). Now they can have something as powerful as the slice of the supercomputer they'd have been allocated, right on their desk, for a fraction of the money and without needing to worry about their allocated time running out before they finish their studies.
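Put another way, here is a rough back-of-the-envelope sketch of the ratios, using only the peak double-precision figures quoted in this thread (peak numbers, not sustained throughput):

```python
# Speedup math from the peak double-precision figures quoted in this
# thread; these are theoretical peaks, not sustained throughput.
CPU_DP = 50.0              # ~Core i7 / Phenom II 940, GFlops
GT200_DP = 78.0            # per GT200 GPU, GFlops
FERMI_DP = (520.0, 630.0)  # Tesla C2050 / C2070 quoted range, GFlops

print(f"GT200 vs CPU: {GT200_DP / CPU_DP:.1f}x")  # the "less than twice" figure
for dp in FERMI_DP:
    print(f"Fermi {dp:.0f} GFlops: {dp / CPU_DP:.1f}x a CPU, "
          f"{dp / GT200_DP:.1f}x a GT200")
```

So where GT200 was only about 1.6x a quad-core CPU in peak DP, the quoted Fermi range works out to roughly 10-13x a CPU and 7-8x a GT200.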
Posted on Reply
#14
kid41212003
So, with single precision, it's ~4.3 TeraFlop/s (?)
Posted on Reply
#15
Zubasa
Benetanegia: Products based on GT200 have 78 GFlops of double precision performance, per GPU.
img.techpowerup.org/091116/DP.jpg
[...]
Thanks for explaining. :toast:
So do you know the typical performance?
How does that compare to, let's say, a FireStream?
Posted on Reply
#16
Benetanegia
Zubasa: Thanks for explaining. :toast:
So do you know the typical performance?
How does that compare to, let's say, a FireStream?
The real performance in applications (i.e. Linpack), you mean? I have no idea, but based on the white papers it shouldn't be less efficient than Cell, which was used in RoadRunner (the #1 supercomputer until recently). In fact it sounds more efficient than Cell, and RoadRunner was almost on par with other supercomputers when it comes to efficiency (Rpeak vs. Rmax). What I'm trying to say is that maybe you have to knock 20% or so off the peak numbers to get real throughput, BUT I HAVE NO IDEA OF SUPERCOMPUTING. It's just my estimation after looking at TOP500 supercomputers and the Cell and Fermi whitepapers...

www.top500.org/

EDIT: Ah, yeah. I forgot FireStream is the ATI GPGPU card; this one seems to be the fastest: ati.amd.com/technology/streamcomputing/product_firestream_9270.html

It says 250 GFlops of peak double precision. It's hard to say, and I'm probably going to be flamed and called a fanboy, but the actual throughput is probably much, much lower. That's the same DP GFlops as an HD4870 card would have (it seems to be based on RV770 anyway), and based on how the ATI cards perform compared to Nvidia cards in things like F@H, IMO its real GFlops have to be more like 50 GFlops.
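That peak-vs-sustained guess can be written down as a trivial formula. To be clear, the efficiency factor here is purely my estimate from eyeballing TOP500 Rpeak/Rmax ratios, not a measured number:

```python
# Hypothetical sustained (Rmax-style) throughput estimated from peak
# (Rpeak), using a GUESSED efficiency factor -- not a measured number.
def estimated_rmax(rpeak_gflops, efficiency=0.8):
    """Sustained throughput as an assumed fraction of theoretical peak."""
    return rpeak_gflops * efficiency

# e.g. Tesla C2070's quoted 630 GFlops peak DP at 80% efficiency
print(round(estimated_rmax(630), 1))  # ~504 GFlops sustained
```

Swap in a different efficiency (e.g. `estimated_rmax(630, efficiency=0.5)`) to see how sensitive the real-world number is to that one assumption.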
Posted on Reply
#17
Zubasa
Benetanegia: The real performance in applications (i.e. Linpack), you mean? I have no idea, but based on the white papers it shouldn't be less efficient than Cell, which was used in RoadRunner. [...]
The thing about the ATI cards is that their SIMD architecture seems less flexible than Nvidia's MIMD route.
That is the reason I have doubts about its performance.

I am trying to understand this: :p
perspectives.mvdirona.com/2009/03/18/HeterogeneousComputingUsingGPGPUsAMDATIRV770.aspx
Posted on Reply
#18
PP Mguire
This is proof there is a GT300. So where are our desktop cards, huh Nvidia?
Posted on Reply
#19
WarEagleAU
Bird of Prey
Pretty impressive to lower costs that much.

@Zubasa, why aren't ATI and Stream with OpenCL ready to go against CUDA?
Posted on Reply
#20
Zubasa
PP Mguire: This is proof there is a GT300. So where are our desktop cards, huh Nvidia?
It is also proof that there simply aren't a significant number of them for retail. :shadedshu
They'd rather sell Teslas for thousands of dollars than desktop parts for hundreds. :respect:

Edit: The Nvidia site also states that the GeForce should be ready for Q1; hope that is not a paper launch.
WarEagleAU: Pretty impressive to lower costs that much.

@Zubasa, why aren't ATI and Stream with OpenCL ready to go against CUDA?
Well, there is hardly anything that uses OpenCL yet; in fact ATI hasn't released drivers that enable OpenCL and DirectCompute on older cards.
"Older" includes all the HD3k and 4k series.
Stream is in an even more pitiful state; I hardly know of any software that supports it apart from stuff from Adobe.

Edit: According to Bjorn3D, there are a little more...
www.bjorn3d.com/read.php?cID=1408&pageID=5778
* Adobe Acrobat®Reader: “Up to 20%* performance improvement when working with graphically rich, high resolution PDF files when compared to using the CPU only”
* Adobe Photoshop CS4® Extended: “Accelerated image and 3D model previewing (panning, zooming, rotation) and 3D manipulations to photos, for example mapping an image onto a 3D object”
* Adobe After Effects®CS4: “Allows for the rapid application of special effects to digital media”
* Adobe Flash®10: “Dynamic, graphically engaging Web content designed with these capabilities in mind”
* Microsoft Windows Vista®: “Harness stream processing to make image adjustments on the fly in Microsoft’s Picture Viewer application”
* Microsoft Expression®Encoder: “Accelerated encoding of content for Microsoft®Silverlight™, Windows Media video and audio”
* Microsoft Office® PowerPoint 2007: “Acceleration of slideshow playback for smooth animations, transitions and slide display”
* Microsoft Silverlight: “Unlocking the full potential for web based multi-media and robust user experience and interface”
Posted on Reply
#21
Benetanegia
Zubasa: Well, there is hardly anything that uses OpenCL yet; in fact ATI hasn't released drivers that enable OpenCL and DirectCompute on older cards.
"Older" includes all the HD3k and 4k series.
Stream is in an even more pitiful state; I hardly know of any software that supports it apart from stuff from Adobe.
Not to mention that CUDA has Visual Studio integration and many more tools, profilers, debuggers...

It's also a high-level language*, and that makes it easier to program for than the other ones, which are low-to-medium level languages.

Nvidia really put a lot of effort into GPGPU since the G80 days, and it's really paying off now.

*You can still access low level if you wish, you can get pretty close to silicon.
Posted on Reply
#22
Yukikaze
As someone who is currently dabbling in OpenCL code on GT200 and G9X cards, the architectural changes are quite impressive over the previous series and will make a programmer's life easier.

But now is the question: WHERE IS MY GODDAMNED GTX380 ?!?!?! :D
Posted on Reply
#23
[H]@RD5TUFF
jessicafae: wow, Q2 2010. Also the price is not a good sign ($3,999). The current top Tesla (C1060), which is similar to a GTX 285, sells for ~$1,300. Not trying to get people upset, but GeForce Fermi might be really expensive (>$600? >$800?)
Don't you think it's a bit early for speculation? Also, you can't compare industrial-grade hardware meant for supercomputing to consumer-grade products! Seriously, use your head.
Posted on Reply
#24
Zubasa
I know this is getting off topic, but what exactly is this?
It comes with CCC suite 9.10. :confused:
Posted on Reply
#25
Benetanegia
Zubasa: I know this is getting off topic, but what exactly is this?
It comes with CCC suite 9.10. :confused:
img.techpowerup.org/091116/Capture004.jpg
The free AMD video transcoding application, I guess. In its first iterations it was extremely buggy and useless, because it produced massive artifacts in videos. I haven't heard anything since, so I don't know if it has improved.

PS: I don't even know for sure if it's that, TBH. :laugh:
Posted on Reply