# AMD Introduces the FirePro S10000 Server Graphics Card



## Cristian_25H (Nov 12, 2012)

AMD today launched the AMD FirePro S10000, the industry's most powerful server graphics card, designed for high-performance computing (HPC) workloads and graphics-intensive applications.

The AMD FirePro S10000 is the first professional-grade card to exceed one teraFLOPS (TFLOPS) of double-precision floating-point performance, helping to ensure optimal efficiency for HPC calculations. It is also the first ultra high-end card that brings an unprecedented 5.91 TFLOPS of peak single-precision and 1.48 TFLOPS of double-precision floating-point calculations. This performance ensures the fastest possible data processing speeds for professionals working with large amounts of information. In addition to HPC, the FirePro S10000 is also ideal for virtual desktop infrastructure (VDI) and workstation graphics deployments.



 





"The demands placed on servers by compute and graphics-intensive workloads continues to grow exponentially as professionals work with larger data sets to design and engineer new products and services," said David Cummings, senior director and general manager, Professional Graphics, AMD. "The AMD FirePro S10000, equipped with our Graphics Core Next Architecture, enables server graphics to play a dual role in providing both compute and graphics horsepower simultaneously. This is executed without compromising performance for users while helping reduce the total cost of ownership for IT managers."

Equipped with AMD's next-generation Graphics Core Next Architecture, the FirePro S10000 brings high performance computing and visualization to a variety of disciplines such as finance, oil exploration, aeronautics, automotive design and engineering, geophysics, life sciences, medicine and defense. With dual GPUs at work, professionals can experience high throughput, low latency transfers allowing for quick compute of complex calculations requiring high accuracy.

*Responding to IT Manager Needs*
With two powerful GPUs in one dual-slot card, the FirePro S10000 enables high GPU density in the data center for VDI and helps increase overall processing performance. This makes it ideal for IT managers considering GPUs to sustain compute and facilitate graphics intensive workloads. Two on-board GPUs can help IT managers reap significant cost savings, replacing the need to purchase two single ultra-high-end graphics cards, and can help reduce total cost of ownership (TCO) due to lower power and cooling expenses.

*Key Features of AMD FirePro S10000 Server Graphics*

 ● Compute Performance: The AMD FirePro S10000 is the most powerful dual-GPU server graphics card ever created, delivering up to 1.3 times the single precision and up to 7.8 times peak double-precision floating-point performance of the competition's comparable dual-GPU product. It also boasts an unprecedented 1.48 TFLOPS of peak double-precision floating-point performance;
 ● Increased Performance-Per-Watt: The AMD FirePro S10000 delivers the highest peak double-precision performance-per-watt -- 3.94 gigaFLOPS per watt -- up to 4.7 times more than the competition's comparable dual-GPU product;
 ● High Memory Bandwidth: Equipped with a 6GB GDDR5 frame buffer and a 384-bit interface, the AMD FirePro S10000 delivers up to 1.5 times the memory bandwidth of the comparable competing dual-GPU solution;
 ● DirectGMA Support: This feature removes CPU bandwidth and latency bottlenecks, optimizing communication between both GPUs. This also enables P2P data transfers between devices on the bus and the GPU, completely bypassing any need to traverse the host's main memory, utilize the CPU, or incur additional redundant transfers over PCI Express, resulting in high throughput low-latency transfers which allow for quick compute of complex calculations requiring high accuracy;
 ● OpenCL Support: OpenCL has become the compute programming language of choice among developers looking to take full advantage of the combined parallel processing capabilities of the FirePro S10000. This has accelerated computer-aided design (CAD), computer-aided engineering (CAE), and media and entertainment (M&E) software, changing the way professionals work thanks to performance and functionality improvements.
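For what it's worth, the headline FLOPS figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the widely reported S10000 configuration (two Tahiti GPUs with 1792 stream processors each at 825 MHz, with FP64 running at one quarter the FP32 rate) - those configuration details are not stated in the press release itself:

```python
# Back-of-the-envelope check of the quoted peak-FLOPS figures, assuming
# the widely reported S10000 configuration: two Tahiti GPUs with
# 1792 stream processors each, clocked at 825 MHz.
stream_processors = 2 * 1792      # dual-GPU board
clock_ghz = 0.825                 # core clock in GHz
ops_per_clock = 2                 # one fused multiply-add = 2 FLOPs

sp_gflops = stream_processors * ops_per_clock * clock_ghz
dp_gflops = sp_gflops / 4         # Tahiti executes FP64 at 1/4 the FP32 rate

print(f"Peak single precision: {sp_gflops / 1000:.2f} TFLOPS")  # ~5.91
print(f"Peak double precision: {dp_gflops / 1000:.2f} TFLOPS")  # ~1.48
```

The result matches the quoted 5.91 TFLOPS single-precision and 1.48 TFLOPS double-precision figures.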

Please visit AMD at SC12, booth #2019, to see the AMD FirePro S10000 power the latest in graphics technology.

*View at TechPowerUp Main Site*


----------



## i_dog_69 (Nov 12, 2012)

Very nice! Looks like a monster!


----------



## Ghost (Nov 12, 2012)

Looks like a reference 7990.


----------



## eidairaman1 (Nov 12, 2012)

Ghost said:


> Looks like a reference 7990.



Only its a dual slot card n not tri slot


----------



## Ghost (Nov 12, 2012)

eidairaman1 said:


> Only its a dual slot card n not tri slot


What?


----------



## eidairaman1 (Nov 12, 2012)

Ghost said:


> What?



look at all 7990 /7970x2 by powercolor etc n compare them to this card


----------



## Ghost (Nov 12, 2012)

eidairaman1 said:


> look at all 7990 /7970x2 by powercolor etc n compare them to this card


They are all custom cards. This is what reference 7990 should have been if it was released.


----------



## eidairaman1 (Nov 12, 2012)

Ghost said:


> They are all custom cards. This is what reference 7990 should have been if it was released.



im not sure about this being that honestly


----------



## HumanSmoke (Nov 12, 2012)

375W TDP for a server part!...that should go down well.


S10000...1.48 TFlops FP64 @ 375W
K20X......1.31 TFlops FP64 @ 235W

Must be one hell of a niche market


----------



## Prima.Vera (Nov 12, 2012)

Where can we find a review and a comparison test please?


----------



## Recus (Nov 12, 2012)

AMD go out of HPC market in 3.. 2.. 1..


----------



## PLSG08 (Nov 12, 2012)

I wouldn't want to see a 7990... I'll just wait for a 8990 since releasing a 7990 now would be pointless


----------



## the54thvoid (Nov 12, 2012)

Recus said:


> AMD go out of HPC market in 3.. 2.. 1..



Apart from all those Opterons in Titan working in perfect co-existence with the Keplers.


----------



## felixsanchez99 (Nov 12, 2012)

*Fps*

I've always wondered about these cards, how many fps would you get with this card for like Bf3 or Crysis?


----------



## repman244 (Nov 12, 2012)

felixsanchez99 said:


> I've always wondered about these cards, how many fps would you get with this card for like Bf3 or Crysis?



Less than Radeon cards; drivers are optimized for OpenGL (CAD etc.), and some also have ECC GDDR, which has its impact.


----------



## Zubasa (Nov 12, 2012)

Ghost said:


> They are all custom cards. This is what reference 7990 should have been if it was released.


Note that this is basically a dual S9000/7950 spec-wise.
With 2 Tahiti XT cores it will require more cooling.


----------



## Ghost (Nov 12, 2012)

Zubasa said:


> Note that this is basically a dual S9000/7950 spec-wise.
> With 2 Tahiti XT cores it will require more cooling.



Spec table wasn't there when I posted. I think.

Anyway, huge power draw could be the reason why AMD didn't release 7990 in the first place. It's ok for custom cards to have enormous power draw.


----------



## Kreij (Nov 12, 2012)

BAH !! The TDP of my pair of 6970s is 500W (250 each). I'll take one please.


----------



## Mindweaver (Nov 12, 2012)

I'd love to have this card for my Solidworks 2012. I'm still using a FireGL v5200, and it's still a kickass card. I'd really like to see what these will crunch, and fold as well. This card could be anywhere from $1500 to $2500...


----------



## Cortex (Nov 12, 2012)

No major FirePro S supercomputers yet, unless I'm not well informed.

S10000: Worse perf/watt than K20/Knights Corner; better overall performance if the theoretical-to-actual Linpack efficiency is good (>~70 percent).


----------



## T4C Fantasy (Nov 12, 2012)

http://www.techpowerup.com/gpudb/802/AMD_FirePro_S10000.html


----------



## Recus (Nov 12, 2012)

the54thvoid said:


> Apart from all those Opterons in Titan working in perfect co-existence with the Keplers.



Working? More like just chilling.


----------



## T4C Fantasy (Nov 12, 2012)

Recus said:


> Working? More like just chilling.
> 
> http://pcper.com/files/news/2012-11-12/Screenshot (360).png



I think what would be on everyone's mind is when we can get that kind of power, 27 petaflops, in one chip.


----------



## HumanSmoke (Nov 12, 2012)

Mindweaver said:


> I'd love to have this card for my Solidworks 2012. I'm still using a FireGL v5200, and it's still a kickass card. I'd really like to see what these will crunch, and fold as well. This card could be anywhere from $1500 to $2500...



$3599 (U.S.) according to the net.


----------



## Xzibit (Nov 12, 2012)

HumanSmoke said:


> 375W TDP for a server part!...that should go down well.
> 
> 
> S10000...1.48 TFlops FP64 @ 375W
> ...



From the Press Release it looked to be aimed at Quadro.

The market I think they're aiming for is SP users with no or minimal DP workload who want to save $ & space.

1 S10000
5.91 TFLOP
$3599 Est

2 K5000
4.2 TFLOP
$4498 (Each $2249) Est

2 K20
7.0 TFLOP
$6398 (Each $3199) Est


----------



## HumanSmoke (Nov 12, 2012)

Xzibit said:


> From the Press Release it looked to be aimed at Quadro.


Well, firstly, since there is no longer a FireStream product line, FirePro is now aimed at HPC (hence being unveiled at SC12), as well as workstation. I'll let Dave Baumann (AMD Product Manager) make the distinction:


> "S" pretty much stands for server, and these are targetted towards number crunching workloads not CAD workloads; Visualisation and sim are more maths problems. So these are more inline with prior "Firestream" offering than Quadro competitors


Secondly, there will be a GK110 Quadro. Bank it. There has never been a Tesla part that didn't have a Quadro counterpart


Xzibit said:


> The market i think they are aiming for is SP users with no DP or minimal DP workload who want to save $ & space.


Which is what I meant by "niche": SP performance in a relatively compact form factor (a desktop ATX specification) where noise and performance per watt are irrelevant. What makes it even more niche is the fact that the 7970X2/7990 offer the same functionality for single precision and cost a fifth of the price.

Of course, if the application requires single precision performance, a K20 isn't required. A K10 (4.58 TFlop/s FP32) would suffice, and even using Amazon's/Sabre PC inflated prices ($3200) the numbers are a lot closer than what you're painting.

S10000...$3599...5.91TFlop....1.64 GFlop/$....15.76 GFlop/watt
K10........$3200...4.85TFlop....1.51 GFlop/$....21.56 GFlop/watt
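Those perf-per-dollar figures are straightforward to reproduce; a quick sketch using the TFLOPS and street-price estimates quoted in this post (the prices are the thread's estimates, not official list prices, and any small differences against the quoted column come down to rounding):

```python
# Perf-per-dollar from the figures quoted above (peak FP32 TFLOPS and
# estimated street prices; both are the thread's numbers, not official).
def gflops_per_dollar(tflops, price_usd):
    return tflops * 1000 / price_usd

for name, tflops, price in [("S10000", 5.91, 3599), ("K10", 4.85, 3200)]:
    print(f"{name}: {gflops_per_dollar(tflops, price):.2f} GFLOPS/$")
```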


----------



## Xzibit (Nov 12, 2012)

Like I said, I think it's a happy medium in price for those with no or minimal DP workload.

S10000
(SP) 5.91 TFLOPS 
(DP) 1.48 TFLOPS

K10
(SP) 4.58 TFLOPS 
(DP) 0.19 TFLOPS


----------



## repman244 (Nov 12, 2012)

HumanSmoke said:


> Secondly, there will be a GK110 Quadro. Bank it. There has never been a Tesla part that didn't have a Quadro counterpart



But another thing to consider is that this is the first time Tesla uses a chip that wasn't in the consumer counterpart first, which can be an indication that it was built with only Tesla in mind.

But that's just my guessing; only time will tell what exactly the chip is meant for.


----------



## HumanSmoke (Nov 12, 2012)

Xzibit said:


> Like I said I think its a happy medium in the price for those with no or minimal DP workload.


True enough to a degree, but then there is a reason that Nvidia command 80+% of the pro market- namely a better and more evolved professional driver and software environment. The toolkits offered by AMD aren't even close to those offered by Tesla and Quadro (SceniX, CompleX and OptiX, or the CUDA SDK for example, in relation to AMD's APP (ex-Stream) SDK). If hardware were the only criterion involved then AMD wouldn't be in the present situation regarding workstation and HPC GPGPU.


repman244 said:


> But another thing to consider is that this is the first time Tesla uses a chip that wasn't in the consumer counterpart first, which can be an indication that it was built for only Tesla in mind.
> But that's just my guessing, only time will tell what exactly is the chip meant for



Ryan Smith (Anandtech) seems to think a Quadro variant will eventuate (2nd page of the comments in the link below). Bearing in mind that ORNL were the lead customer and Nvidia seem to have their hands full satisfying Tesla demand...


> Interestingly NVIDIA tells us that their yields are terrific – a statement backed up in their latest financial statement – so the problem NVIDIA is facing appears to be demand and allocation rather than manufacturing.


...it would seem that Nvidia are fulfilling HPC contracts first.
What would you expect Nvidia to do with the full 15SMX GPU's and other GPU's that are likely to fall outside of server power budget? It would seem you either fuse off perfectly good blocks for 15SMX parts, or throw away high leakage GPU's that could be utilized in a 250+W consumer card- and gain some PR into the bargain. Looking at Nvidia's past record, I'm pretty certain which course of action they would likely take.


----------



## [H]@RD5TUFF (Nov 13, 2012)

Seems nice, and prolly makes a joke of most CAD processes.


----------



## KooKKiK (Nov 13, 2012)

HumanSmoke said:


> 375W TDP for a server part!...that should go down well.
> 
> 
> S10000...1.48 TFlops FP64 @ 375W
> ...



AMD and nVidia don't measure TDP in the same way.


----------



## 3870x2 (Nov 13, 2012)

repman244 said:


> Less than Radeon cards; drivers are optimized for OpenGL (CAD etc.), and some also have ECC GDDR, which has its impact.



They are usually on par, performing identically to their desktop counterparts. While they have additional parts to them, the GPU that runs graphics is very much the same.

My NV 4600 (8800GTX) can max out CS:GO and play anything out there currently. These cards still need to be able to run DirectX applications just as well as OpenGL.


----------



## HumanSmoke (Nov 13, 2012)

KooKKiK said:


> AMD and nVidia don't measure TDP in the same way.



Who knows? Looking at non-boost cards from both AMD and Nvidia shows that each tends to use around 80-85% of TDP in average workloads, and ~95% at peak workload. A quick look at W1zzard's power consumption charts in graphics card reviews should bear that out.

If the S10000 isn't specced for 375W use:
1. Why does AMD specify 375W board power for a part which has no capacity for boost/overclocking? And,
2. The "S" series are all passively cooled with the exception of the S10000, which obviously requires three fans. Why would that be?


----------



## KooKKiK (Nov 14, 2012)

HumanSmoke said:


> Who knows? Looking at non-boost cards from both AMD and Nvidia shows that each tends to use around 80-85% of TDP in average workloads, and ~95% at peak workload. A quick look at W1zzard's power consumption charts in graphics card reviews should bear that out.
> 
> If the S10000 isn't specced for 375W use:
> 1. Why does AMD specify 375W board power for a part which has no capacity for boost/overclocking? And,
> 2. The "S" series are all passively cooled with the exception of the S10000, which obviously requires three fans. Why would that be?



No. You don't understand.

For AMD, TDP means the maximum power that can be delivered to the graphics board.

But for nVidia, TDP means power as limited by their power limiter.


Take a look at the HD 6970, which has a 250W TDP, vs the GTX 580's 244W TDP.

How on earth does the 6970 have higher power consumption than the beast 580?


Then look at the 'real' power consumption tested by Wizz.

Even the next-gen cards, the 7870 (175W TDP) vs the 660 (130W TDP), also follow this trend.

So a direct TDP comparison between the two companies doesn't make any sense at all.


PS: the 6990 also has a 375W TDP.


----------



## eidairaman1 (Nov 14, 2012)

It's the same for Intel; there is no set standard for measuring TDP, just like monitor response times.

It's just denial some people are in.



KooKKiK said:


> No. You don't understand.
> 
> for AMD, TDP means the maximum power that can be delivered to graphic board.
> 
> ...


----------



## HumanSmoke (Nov 14, 2012)

KooKKiK said:


> No. You don't understand.
> For AMD, TDP means the *maximum power* that can be delivered to the graphics board.
> But for nVidia, TDP means power as limited by their power limiter.


True enough, but if you're measuring TDP there are a couple of caveats. Firstly, power consumption should be measured at maximum load. We are talking about GPGPU here, and these boards don't spend much (if any) time at rest or under light load; average gaming load would represent a heavily underutilized co-processor. 
Secondly, the Fermi cards' TDPs are blatantly fudged by Nvidia for PR purposes. I wouldn't argue that Fermi TDPs aren't based on wishful thinking, but the issue here is Tahiti (S10000) and Kepler (K20).
Your analogy would be HD 7970 (250W board power) vs GTX 680 (195W TDP) - although that isn't an apples-to-apples comparison either, since Tesla lacks the boost facility. Closer would be a non-boost Kepler vs non-boost Southern Islands. But since W1zz hasn't tested any stock cards, maybe check out another site...
GTX 650 Ti (110W TDP).......191W system load (173.6% of TDP)
HD 7850   (130W board).....216W system load (166.2% of TDP)
Not a huge difference.
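Those utilization percentages are easy to check; a small sketch using the figures above (the wattages are whole-system load readings from the cited review, which is why the ratios exceed 100% of the card's own rating):

```python
# System power draw under load expressed as a percentage of the card's
# rated TDP/board power (system load includes CPU, motherboard, etc.,
# which is why the result exceeds 100%).
readings = [
    ("GTX 650 Ti", 110, 191),  # (card, rated watts, system load watts)
    ("HD 7850",    130, 216),
]
for card, rated_w, load_w in readings:
    print(f"{card}: {load_w / rated_w * 100:.1f}% of TDP")  # 173.6%, 166.2%
```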


KooKKiK said:


> PS. the 6990 also has 375W of TDP.


True. And its maximum power draw is 404 watts.

TH put the single-GPU FirePro W9000 through its paces earlier (the S9000 is a passive version of the same card). The card is basically a HD 7970 non-GHz edition with 6GB VRAM and 225W board power. Under GPGPU the card clocked 275W.


----------



## KooKKiK (Nov 14, 2012)

You can't have an nVidia card at its maximum power because the power limiter will cut it off before reaching that point.


Look at the Furmark test of the GTX 680 by Wizz, and compare it to the gaming test.

Do you see the difference between the Furmark and gaming tests of the 680 compared to the 7970?


----------



## eidairaman1 (Nov 14, 2012)

ya NV forced any Voltage Mods out (EVBot being the biggest example of this)



KooKKiK said:


> you can't have nVidia card at its maximum power because power limiter will cut off anything before reaching that point.
> 
> 
> look at the Furmark test of GTX680 by Wizz
> ...


----------



## HumanSmoke (Nov 16, 2012)

KooKKiK said:


> You can't have an nVidia card at its maximum power because the power limiter will cut it off before reaching that point.


Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.

If a power limiter affected stated performance you'd have an argument, but as the case stands, you are making excuses not a valid point. And just for the record, the gaming charts don't have a direct bearing on server/WS/HPC parts- as I mentioned before, you can't get a true apples-to-apples comparison between gaming and pro parts- all they can do is provide an inkling into the efficiency of the GPU. If you want to use a gaming environment argument, why don't you take it to a gaming card thread, because it is nonsensical to apply it to co-processors.



eidairaman1 said:


> ya NV forced any Voltage Mods out (EVBot being the biggest example of this)



Because volt modding is (of course) the first requirement for server co-processors [/sarcasm]
Take your bs to a gaming thread.


----------



## Steevo (Nov 16, 2012)

It's true, when GCN is in use under full load like GPGPU it is hungry; no parts of the chip are powered down, and the marching instructions fit in the caches, meaning the transistors stay busy. 

But that's also why they manage a huge score in most GPGPU applications. And compare the power efficiency per FLOP, even if it draws a few extra watts....

http://www.tomshardware.com/charts/2012-vga-gpgpu/15-GPGPU-Luxmark,2971.html

http://www.tomshardware.com/charts/2012-vga-gpgpu/14-GPGPU-Bitmining,2970.html


----------



## Xzibit (Nov 16, 2012)

HumanSmoke said:


> Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.



Maybe I don't understand what you're saying, but a 225W rack specification?

How old is the PSU in there? Swap it for a new one with the correct connections.


----------



## HumanSmoke (Nov 16, 2012)

@Steevo
Fix yo links.

True enough that Tahiti/GCN is optimized for GPGPU, but then GK104 is just the opposite....and if the S10000 were a desktop gaming card I certainly wouldn't disagree with the premise, but if server GPGPU is the point of the discussion- and it should be for this thread- shouldn't the comparison be between server parts? Seems a little pointless making a case for the S10000 using desktop cards running at higher clocks using desktop drivers, while comparing them to deliberately compute hobbled Nvidia counterparts.

Wouldn't a more apropos comparison be gained by testing server parts against server parts?
(BTW: The W/S9000 is a Tahiti part (3.23 TFlop); the Quadro 6000 is a Fermi GF100 (1.03 TFlop) based on the GTX 470.)



Steevo said:


> But thats also why they manage a huge score in most GPGPU applications. And compare the power efficiency per FLOP even if it draws a few extra watts....


Single precision.......................................................Double precision
W/S9000.(225W)...3.23 TFlop....14.36 GFlop/watt.......0.81 TFlop.....3.58 GFlop/watt
S10000...(375W)...5.91 TFlop....15.76 GFlop/watt........1.48 TFlop....3.95 GFlop/watt
K10........(225W)...4.85 TFlop....21.56 GFlop/watt........0.19 TFlop.... Negligible
K20........(225W)...3.52 TFlop....15.64 GFlop/watt........1.17 TFlop.....5.20 GFlop/watt
K20X......(235W)...3.95 TFlop....16.81 GFlop/watt........1.31 TFlop.....5.57 GFlop/watt
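The GFlop/watt columns in that table follow directly from the quoted peak TFLOPS and board-power numbers; a quick sketch recomputing them (small discrepancies against the quoted column, e.g. 3.60 vs 3.58 for the W/S9000's DP figure, come from rounding in the source TFLOPS values):

```python
# Performance per watt from the quoted peak throughput and board power.
boards = [
    # (name,     board watts, FP32 TFLOPS, FP64 TFLOPS)
    ("W/S9000", 225, 3.23, 0.81),
    ("S10000",  375, 5.91, 1.48),
    ("K10",     225, 4.85, 0.19),
    ("K20",     225, 3.52, 1.17),
    ("K20X",    235, 3.95, 1.31),
]
for name, watts, sp, dp in boards:
    print(f"{name}: SP {sp * 1000 / watts:.2f} GFLOPS/W, "
          f"DP {dp * 1000 / watts:.2f} GFLOPS/W")
```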

And, for all the hoo-hah regarding the S10000 powering the SANAM system to number two in the Green500 list, the placement still relies more upon the asymmetric setup of the computer. 420 S10000's vs 4800 Xeon E5-2650's


Xzibit said:


> Maybe I dont understand what your saying but 225w rack specification ?


Servers and HPC racks in general are built around a 225W per board specification. Example: HP, and from Anandtech...


> K20X will be NVIDIA’s leading Tesla K20 product, offering the best performance at the highest power consumption (235W). K20 meanwhile will be cheaper, a bit slower, and perhaps most importantly lower power at 225W. *On that note, despite the fact that the difference is all of 10W, 225W is a very important cutoff in the HPC space – many servers and chasses are designed around that being their maximum TDP for PCIe cards *–


for example. That is why pro co-processors are invariably rated at 225 watts. Check the specifications for top-tier FireStream, FirePro, Quadro and Tesla. All the top SKU's are geared for a 225W power envelope.


----------



## Xzibit (Nov 16, 2012)

HumanSmoke said:


> Servers and HPC racks in general are built around a 225W per board specification. Example: HP, and from Anandtech...
> 
> for example. That is why pro co-processors are invariably rated at 225 watts. Check the specifications for top-tier FireStream, FirePro, Quadro and Tesla. All the top SKU's are geared for a 225W power envelope.



Still don't get what you're implying.



> Right-sized HP ProLiant power supplies from 460 Watts at 94% efficiency, 750 Watts at 94% efficiency, to 1200 Watts at 94% efficiency


----------



## Steevo (Nov 17, 2012)

HumanSmoke said:


> @Steevo
> Fix yo links.
> 
> True enough that Tahiti/GCN is optimized for GPGPU, but then GK104 is just the opposite....and if the S10000 were a desktop gaming card I certainly wouldn't disagree with the premise, but if server GPGPU is the point of the discussion- and it should be for this thread- shouldn't the comparison be between server parts? Seems a little pointless making a case for the S10000 using desktop cards running at higher clocks using desktop drivers, while comparing them to deliberately compute hobbled Nvidia counterparts.
> ...



http://m.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265-7.html


Funny how much faster the Radeon is.


----------



## HumanSmoke (Nov 17, 2012)

Xzibit said:


> Still dont get what your implying



Well that's rather unfortunate. 
Most other people would realize that 225W input power means 225W heat dispersal requirement, as well as power requirement.


Steevo said:


> http://m.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265-7.html
> Funny how much faster the Radeon is.


Wow! Tahiti's faster than a GTX 470  in LightWave, Ensight, SolidWorks, bitmining, Luxmark and CAPS viewer. Colour me surprised. I am truly shocked and stunned!
Isn't it more surprising that the latest generation AMD GPU isn't overly convincing against a two generations old Nvidia GTX 470 in AutoCAD 2013, May 2013 and Siemens freeform modelling?
BTW
You missed out the Maya benches in the same review...
and you  missed out the Catia benches in the same review...
and you missed out the Pro/ENGINEER benches in the same review...
and you missed out the Siemens Visualization benches in the same test...

Then of course you've got AMD's forte- OpenCL - which is also a very mixed bunch. AMD is strong in Image processing, but the Video benches and general benchmarks are pretty much a wash. Shouldn't Tahiti be putting up better numbers against a GTX 470 than this?





Not to worry though, I bet the GK104 and GK110 based Tesla and Quadro will be shit at everything- and if they aren't, you can just look at the pages you like- just like the TH review.
(Better not look at the HotHardware review)


----------



## xorbe (Nov 17, 2012)

Quadro 6000 is from July 2010 (about 2.5 years ago).


----------



## Xzibit (Nov 17, 2012)

HumanSmoke said:


> Well that's rather unfortunate.
> Most other people would realize that 225W input power means 225W heat dispersal requirement, as well as power requirement.



I still don't get what you mean.

Can you link me to the 225W specification? I've never seen it.



HumanSmoke said:


> Servers and HPC racks in general are built around a 225W per board  specification. Example HP , and from Anandtech...
> 
> for example. That is why pro co-processors and are invariably rated at 225 watts. Check the specifications for top tier FireStream, FirePro, Quadro and Tesla. All the top SKU's are geared for 225W power envelope.



It'd be unfortunate if you were implying that 225W is the limit, and kind of hilarious.

Linking to the SL390s G7 and implying that's the standard is baffling.



> • 2U half width tray offer GPU density, offering up to 3 GPUs (up to 225 watt) in the equivalent space as a 1U server
> • 4U half width tray offer GPU density, offering up to 8 GPUs (up to 225 watt) in the equivalent space as a 2U server



The (up to 225 watt) is a configuration where the available PSU options only provide (2) 6-pin connectors per slot. Not the 225W per board you're implying.

PCIe Gen 2 = 75w
(2) 6-pin = 150w (75w each)
Total = 225w
*Not a Server Specification*
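The 225W figure in this breakdown is just the sum of the PCIe delivery limits; a sketch of the same arithmetic, extended to the other common auxiliary-connector combinations (the 75W slot / 75W 6-pin / 150W 8-pin limits are the nominal PCI-SIG ratings):

```python
# Maximum board power from PCIe delivery limits: 75W from the slot,
# 75W per 6-pin auxiliary connector, 150W per 8-pin connector.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_limit(six_pins=0, eight_pins=0):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_limit(six_pins=2))                # 225
print(board_power_limit(six_pins=1, eight_pins=1))  # 300
print(board_power_limit(eight_pins=2))              # 375 - the S10000's rating
```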


----------



## HumanSmoke (Nov 17, 2012)

Xzibit said:


> I still dont get what you mean ?


What I mean, and Anand for that matter, is that server racks are more often than not optimized for 225W per PCIE unit - for cooling, power usage, and cabling. What's so hard to understand?


Xzibit said:


> I'd be unfortunate if you were implying that 225w is the limit and kind of hilarious


You mean:


HumanSmoke said:


> Servers and HPC racks in general are *built around a 225W per board*  specification


What the above sentence actually says is that server racks in general are designed with a 225W board in mind. How you think that translates into a PCI-SIG specification is beyond me, because if that were the case, the K20X (235W) and S10000 (375W) would exceed it. Moreover, do you really expect large server farms, supercomputing clusters and data centres to use ATX PSU's? :shadedshu
So, yet another instance of where Xzibit's reading skills don't reach the mark.

Just to reiterate. Most racks are pretty standardized which is why most vendors limit themselves to a 225W add in board. The other thing to consider is upgrades of previous generation systems- a 225W swap out for a 225W board is relatively painless. A swap out for a higher TDP board may require more extensive work.


Xzibit said:


> The (up to 225 watt) is configuration where the available PSUs options only provide for (2) 6-pin connecters per slot.  Not the 225w your implying per board


Of course if I actually said anything like that...but I didn't. 225W is a general standard that server vendors have adopted- don't believe me, check Cisco, HP, Dell, Penguin or any other server manufacturer and see how many are 225W per add-in-board and how many are, say 300W PCI-SIG.

Of course I don't expect you to actually do this, since it require you to:
1. Be able to parse the information correctly, and
2. Require you to actually spend some time doing research, and
3. You'd give up as soon as you saw the number of vendors' models specced for 225W boards.

Given that you can't even work out basic information about a company or its ownership, I'm not confident you'll fare any better with a company's product line- so I'm not expecting anything else but some worthless trolling.


xorbe said:


> Quadro 6000 is like from July 2010 (about 2.5 years ago.)


Yep. GF100 has since been superseded by GF110, which in turn has been superseded by GK104/110


----------



## KooKKiK (Nov 17, 2012)

TDP doesn't stand for the 'REAL' power consumption.

And both companies do not measure TDP in the same way.


that is my point.

hope you understand.


----------



## Xzibit (Nov 17, 2012)

HumanSmoke said:


> What I mean, and Anand for that matter, is that server racks are more often than not optimized for 225W per PCIE unit, both for cooling, power usage, and cabling. What's so hard to understand?
> 
> You mean:
> 
> ...



Hey, smarty pants, all those cards still use 6-pin and/or 8-pin aux connectors.



			
HumanSmoke said:

> Just to reiterate. Most racks are pretty standardized which is why most vendors limit themselves to a 225W add in board. The other thing to consider is upgrades of previous generation systems- a 225W swap out for a 225W board is relatively painless. A swap out for a higher TDP board may require more extensive work.
> 
> Of course if I actually said anything like that...but I didn't. 225W is a general standard that server vendors have adopted- don't believe me, check Cisco, HP, Dell, Penguin or any other server manufacturer and see how many are 225W per add-in-board and how many are, say 300W PCI-SIG.



My point is 225W is an implied specification. Nothing stopping someone from putting a higher TDP card there other than dated hardware.

If you have links I'd like to see them though. Remember...




			
HumanSmoke said:

> Verifiable numbers or STFU.


----------



## HumanSmoke (Nov 17, 2012)

Nah, didn't think so.



Xzibit said:


> Hey, smarty pants all those cards still use 6-pin and/or 8-pin AuX connectors.


Considering the PCI-SIG rates the PCI-E slot for a nominal 75W power delivery, where the hell else do you think the board draws its power from?

You think a SC cluster or data centre has ATX PSU's  ??

Maybe you should watch this and point out where the PSU's are, or maybe tell these guys they're doing it wrong.


Xzibit said:


> My point is 225w is an implied specification


Which is already what I've said...and much earlier than you did, so why the bleating? Oh, I know why...you just need to troll.


Xzibit said:


> Nothing is stopping someone from putting a higher-TDP card there other than dated hardware


Nothing at all, except possibly change the cooling and power cabling - and no I don't mean just the individual 6 and 8 pin PCI-E connectors. I mean the main power conduits from the cabinets to the power source. Then of course if a cabinet is being refitted for S10000 then you would have to re-cable all 42 racks in a cabinet for 2 x 8-pin instead of the nominal 6-pin + 8-pin at four cables per rack multiplied by the number of boards per rack, as well as the main power conduits...then of course you'd have to upgrade the cooling system -which for most big iron is water cooling and refrigeration.


----------



## Xzibit (Nov 17, 2012)

HumanSmoke said:


> Considering the PCI-SIG rates the PCI-E slot for a nominal 75W power delivery, where the hell else do you think the board draws its power from?



That's a PCIe Gen 2.0 slot, in case you haven't noticed.

What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility. So yes, if you get more recent parts, you get more options. I'm sure you'll see them in the G8 series of that HP server you linked. The lower-numbered versions already have updated motherboards with Gen 3 slots added. So there is one possibility.

The only ones that can currently take advantage of it are Intel and AMD cards, since they are PCIe Gen 3.0 spec. All of Nvidia's K20X & K20 are PCIe Gen 2.0 spec.



HumanSmoke said:


> You think an SC cluster or data centre has ATX PSUs?

You're too funny. Twice you mentioned ATX PSUs. You're the only one bringing it up. Somehow you took the aux connectors and made the leap to ATX :laugh:

HumanSmoke said:


> Makes no difference. The point is performance/watt, or in the case of *servers/HPC*, staying within the *rack specification* (more often than not) of 225W per board.



I see plurals and "specifications". I'd like to see the information you're referring to for myself, that's all.



HumanSmoke said:

> Nothing at all, except possibly change the cooling and power cabling - and no I don't mean just the individual 6 and 8 pin PCI-E connectors. I mean the main power conduits from the cabinets to the power source. Then of course if a cabinet is being refitted for S10000 then you would have to re-cable all 42 racks in a cabinet for 2 x 8-pin instead of the nominal 6-pin + 8-pin at four cables per rack multiplied by the number of boards per rack, as well as the main power conduits...then of course you'd have to upgrade the cooling system -which for most big iron is water cooling and refrigeration.



Obviously something that was taken into consideration when these machines were built.

So how about that specification link?


----------



## repman244 (Nov 17, 2012)

HumanSmoke said:


> What the above sentence actually says is that server racks in general are designed with a 225W board in mind.



One thing to consider here is that these cards go into custom-designed HPC systems, where the standard "server" design is less common.
You have custom cooling, custom power delivery, etc. You can see that if you look at Cray's HPCs...


----------



## HumanSmoke (Nov 17, 2012)

repman244 said:


> One thing to consider here is that these cards go into custom-designed HPC systems, where the standard "server" design is less common.
> You have custom cooling, custom power delivery, etc. You can see that if you look at Cray's HPCs...



Yeah, I figured that SANAM, for instance, is a new build from Adtech (the S10000 supercomputer), and all new builds would be pretty straightforward to put together (once you know the requirements) regardless of fit-out - they all seem based on a modular approach, whether they be compute cluster or data center. My thinking was more along the lines of refitting older systems with newer, more capable components - there are still a lot of big clusters running older GPGPU hardware, for instance - and I would assume a refit presents its own problems, different from a ground-up new build.
Refitting in general would be a considerable initial expenditure - Titan, for instance, retained the bulk of the hardware from Jaguar, but the upgrade still took a year (Oct 2011-Nov 2012) and cost $96 million - the principal difference seems to be an upgrading of power delivery and swapping out Fermi 225W TDP boards for K20X (235W) - the CPU side of the compute node remains untouched.


----------



## repman244 (Nov 17, 2012)

HumanSmoke said:


> Titan, for instance, retained the bulk of the hardware from Jaguar, but the upgrade still took a year (Oct 2011-Nov 2012) and cost $96 million - the principal difference seems to be an upgrading of power delivery and swapping out Fermi 225W TDP boards for K20X (235W) - the CPU side of the compute node remains untouched.



The first phase was CPU upgrades (new Opterons), interconnects, and memory (600TB). After that they had to wait for the GPUs.
And IIRC Jaguar didn't have any GPUs before.


----------



## HumanSmoke (Nov 17, 2012)

repman244 said:


> The first phase was CPU upgrades (new Opterons), interconnects, and memory (600TB). After that they had to wait for the GPUs.


Thanks. I'd forgotten about the 16GB RAM increase per node. Weren't the "old" CPUs (Opteron 2435) reallocated to what was ORNL's old XT4 partition to upgrade it to XT5 specification (Jaguar being an 18,688-node XT5 + a 7,832-node XT4... the XT5 being upgraded to Titan (XK7) and the XT4 to XT5) and Kraken's upgrade (ORNL + University of Tennessee)? The partition is mentioned in the Jaguar wiki page, but not Titan's. With the reallocation I was under the impression that ORNL's Opteron 6274s were basically overall additions to capacity at ORNL.


repman244 said:


> And IIRC Jaguar didn't have any GPUs before.


Actually a physical impossibility, I would have thought. CPU-only clusters still need GPUs for visualization*, although the Fermis were added when the CPU upgrade took place.


> Phase I of this upgrade also populated 960 of these XK6 nodes with NVIDIA Fermi GPUs.


[source]

*IIRC, the Intel Xeon + Xeon Phi Stampede also uses Tesla K20X for the same reason.


----------



## eidairaman1 (Nov 17, 2012)

Learn to be respectful to members of these forums.



HumanSmoke said:


> Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.
> 
> If a power limiter affected stated performance you'd have an argument, but as the case stands, you are making excuses not a valid point. And just for the record, the gaming charts don't have a direct bearing on server/WS/HPC parts- as I mentioned before, you can't get a true apples-to-apples comparison between gaming and pro parts- all they can do is provide an inkling into the efficiency of the GPU. If you want to use a gaming environment argument, why don't you take it to a gaming card thread, because it is nonsensical to apply it to co-processors.
> 
> ...


----------



## repman244 (Nov 17, 2012)

HumanSmoke said:


> Phase I of this upgrade also populated 960 of these XK6 nodes with NVIDIA Fermi GPUs.



Yeah, but that was already phase 1 upgrade to Titan, Jaguar itself didn't have them (maybe I didn't word my post very well, sorry).


----------



## HumanSmoke (Nov 17, 2012)

eidairaman1 said:


> Learn to be respectful to members of these forums.



Stay on topic and it shouldn't be a problem. If you can tell me how moaning about a lack of volt-modding opportunity on Nvidia cards has any relevance to pro graphics - workstation or GPGPU - I'll gladly issue an apology... until that happens I view it as a cheap trolling attempt, not particularly apropos of anything regarding the hardware being discussed.


repman244 said:


> Yeah, but that was already phase 1 upgrade to Titan, Jaguar itself didn't have them (maybe I didn't word my post very well, sorry).



That's probably my confusion, I think. I tend to think of Jaguar and Titan as the same beast, and didn't make the differentiation regarding timeline. My bad.


----------



## eidairaman1 (Nov 17, 2012)

HumanSmoke said:


> Stay on topic and it shouldn't be a problem. If you can tell me how moaning about a lack of volt modding opportunity in Nvidia cards has any relevance to pro graphics -workstation or GPGPU, I'll gladly issue an apology.
> 
> 
> That's probably my confusion I think. I tend to think of Jaguar and Titan as the same beast, and didn't make the differentiation regarding timeline. My bad.



I was stating that they have tighter control of voltages across the board, is all.


----------



## HumanSmoke (Nov 17, 2012)

eidairaman1 said:


> I was stating that they have tighter control of voltages across the board, is all.


Not quite...


eidairaman1 said:


> ya * NV forced any Voltage Mods out *(EVBot being the biggest example of this)



When have volt mods ever been an issue with server co-processors? How does Nvidia locking down voltages on desktop Kepler have any relevance to Tesla or Quadro boards?
Have you ever heard of people who overclock a math co-processor? Kind of defeats the purpose of using ECC RAM and placing an emphasis on FP64, don't ya think?


eidairaman1 said:


> Learn to be respectful to members of these forums.


Taking your lead?...


eidairaman1 said:


> dont be a jack ass


:shadedshu


----------



## eidairaman1 (Nov 17, 2012)

HumanSmoke said:


> Not quite...
> 
> 
> When have volt mods ever been an issue with server co-processors? How does Nvidia locking down voltages on desktop Kepler have any relevance to Tesla or Quadro boards?
> Have you ever heard of people who overclock a math co-processor? Kind of defeats the purpose of using ECC RAM and placing an emphasis on FP64, don't ya think?



:rolleyes:

I find it funny you keep on arguing, but anyway, it was in relation to how those parts can't reach the maximum voltage level because of precautions. I know certain models of Quadro and FirePro are for mission-critical use, just as much as Itanium/SPARC etc. are. I do realize that OC can cause ECC to corrupt the data. But anyway, I'm just saying be respectful of the users here, dude.


----------



## Chicken Patty (Nov 17, 2012)

Back on track fellas, let's keep this thread rolling clean.


----------



## HumanSmoke (Nov 18, 2012)

KooKKiK said:


> TDP doesn't stand for the 'REAL' power consumption,
> and both companies do not measure TDP in the same way.
> That is my point. Hope you understand.


I understand what you're saying, which is basically that the printed specification doesn't match real-world power usage. A fact that I think we are in agreement on. My point is that the printed specification for professional graphics and arithmetic co-processors is a guideline only, and that regardless of the stated number, I believe that one architecture is favoured over another with regard to performance/watt.

HPCWire is of the same opinion - that is to say, that Nvidia's GK110 has superior efficiency to that of the S10000 and Xeon Phi when judged on their own performance. Moreover, they believe that Beacon (Xeon Phi) and SANAM (S10000) only sit at the top of the Green500 list because of their asymmetrical configuration (a very low CPU-to-GPU ratio) - something I also noted earlier.
(Source: HPCWire podcast link)


Xzibit said:


> That's a PCIe Gen 2.0 slot, in case you haven't noticed
> What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility


225W through a PCI-E slot? Whatever. (150W is the max for a PCI-E slot. Join up and learn something.)


Xzibit said:


> All of Nvidia's K20X & K20 are PCIe Gen 2.0 spec.


Incorrect. K20/K20X are at present limited to PCI-E 2.0 because of the AMD Opteron CPUs they are paired with (which of course are PCIe 2.0 limited). Validation for Xeon E5 (which is PCIe 3.0 capable) means GK110 is a PCIe 3.0 board... in much the same way that all the other Kepler parts are (K5000 and K10, for example). In much the same vein, you can't validate an HD 7970 or GTX 680 for PCI-E 3.0 operation on an AMD motherboard/CPU - all validation for AMD's HD 7000 series and Kepler was accomplished on Intel hardware.


----------



## Xzibit (Nov 18, 2012)

HumanSmoke said:


> 225W through a PCI-E slot ? whatever.   (150W is max for a PCI-E slot. Join up and learn something)



Wow, you are grasping at straws. I didn't specify power output, but if it makes you feel good, go right ahead.



HumanSmoke said:


> Incorrect. K20/K20X are at present limited to PCI-E 2.0 because of the AMD Opteron CPUs they are paired with (which of course are PCIe 2.0 limited). Validation for Xeon E5 (which is PCIe 3.0 capable) means GK110 is a PCIe 3.0 board... in much the same way that all the other Kepler parts are (K5000 and K10, for example). In much the same vein, you can't validate an HD 7970 or GTX 680 for PCI-E 3.0 operation on an AMD motherboard/CPU - all validation for AMD's HD 7000 series and Kepler was accomplished on Intel hardware.



Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot.

*Nvidia GPU Accelerator Board Specifications*
Tesla K20X 
Tesla K20



> PCI Express Gen2 ×16 system interface



How many times is it now?
It seems you'll do and make up anything to cheerlead on Nvidia's side, even when it's their own website proving you wrong. I hope they are paying you, because if they aren't it's sad.
:shadedshu


Who's the troll now?



HumanSmoke said:

> Verifiable numbers or STFU.





P.S.
Still waiting on that 225W server specification link.


----------



## HumanSmoke (Nov 18, 2012)

Xzibit said:


> Wow, you are grasping at straws. I didn't specify power output, but if it makes you feel good, go right ahead.


Hey, you're the one that thinks a 225W card can draw all its power from the PCIe slot.


Xzibit said:


> Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot.


I'm pretty sure GK110 will be validated for PCI-E 3.0, just as every other Kepler GPU before it was. The validation process is (like X79) an Intel issue. Pity you can't get PCI-E 3.0 validation on an AMD chipset; it would make life simpler. Heise has already clarified the validation process for K20/K20X:


> Nvidia specifies both compute cards only for PCIe 2.0 as a precaution, because there were still problems with some Xeon E5 boards. Nvidia emphasized to heise online that the hardware supports PCIe 3.0, but the card's BIOS sets it to PCIe 2.0. OEMs are, however, free to use K20 cards with PCIe 3.0 enabled in OEM systems. _via Google Translate_





Xzibit said:


> Still waiting on that 225W server specification link.


And I've already explained to you what I had previously written:


HumanSmoke said:


> or in the case of servers/HPC, staying within the rack specification (*more often than not*) of 225W per board... What I mean, and Anand for that matter, is that server racks *are more often than not optimized for 225W per PCIE unit*, both for cooling, power usage, and cabling. What's so hard to understand?





Ryan Smith-Anandtech said:

> K20X will be NVIDIA’s leading Tesla K20 product, offering the best performance at the highest power consumption (235W). K20 meanwhile will be cheaper, a bit slower, and perhaps most importantly lower power at 225W. On that note, despite the fact that the difference is all of 10W, 225W is a very important cutoff in the HPC space – *many servers and chassis are designed around that being their maximum TDP for PCIe cards*


Now, if you still plan on baiting, I'll see what I can do about reporting your posting. You've already been told exactly what the posting meant, and you still persevere in posting juvenile rejoinders based on faulty semantics (how can "_more often than not_" be construed as a descriptor for an absolute specification for the industry? :shadedshu) and an inability to parse a simple compound sentence.

Now, if you don't think that server racks largely cater for a 225W TDP-specced board, I suggest you furnish some proof to the contrary (hey, you could find all the vendors who spec their blades for 375W TDP boards, for extra credit)... c'mon, make a name for yourself, prove Ryan Smith at Anandtech wrong. While you're at it, try to find where I made any reference to 225W being a server specification for add-in boards. The only mention I made was regarding *boards with a 225W specification* being generally standardized for server racks.

Y'know, never mind. You made my ignore list.


----------



## Xzibit (Nov 18, 2012)

You're something else for sure.



HumanSmoke said:


> Now, if you still plan on baiting, I'll see what I can do about reporting your posting. You've already been told exactly what the posting meant, and you still persevere in posting juvenile rejoinders based on faulty semantics (how can "_more often than not_" be construed as a descriptor for an absolute specification for the industry? :shadedshu) and an inability to parse a simple compound sentence.



Be careful what you wish for. Moderators might find out that the majority of your posts outside of Nvidia-based threads are spent defaming the competition and others with different views than yours.

Do you only read what you want?



HumanSmoke said:


> Servers and HPC racks in general are built around a 225W-per-board *specification*. Example: HP, and from Anandtech...





HumanSmoke said:


> Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the *rack specification* (more often than not) of 225W per board.



You just can't own up to the fact that there is no specification, and you implied that there is one.

I was just asking you to provide a link to such a specification, since if there was one it would be available to reference from various credible sources.

No link, no such thing.



HumanSmoke said:


> Hey, you're the one that thinks a 225W card can draw all its power from the PCIe slot.



Really? Still? Even after you included this in the same post?



HumanSmoke said:


> *you still persevere in posting juvenile rejoinders based on faulty semantics
> and an inability to parse a simple compound sentence*



Let me remind you of previous posts I have made in this thread, just to enlighten you, since it seems you only read what you want.



Xzibit said:


> PCIe Gen 2 = 75W
> (2) 6-pin = 150W (75W each)
> Total = 225W
> Not a server specification
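The arithmetic behind that breakdown is easy to check. As a rough sketch (Python; the per-source wattages are the nominal PCI-SIG figures cited in the thread, and the connector lists are illustrative):

```python
# Maximum power draw per source, as cited in the thread
# (nominal PCI-SIG figures; the slot delivers up to 75W).
POWER_SOURCES_W = {
    "slot": 75,   # PCIe x16 slot, nominal delivery
    "6pin": 75,   # 6-pin auxiliary connector
    "8pin": 150,  # 8-pin auxiliary connector
}

def board_power_budget(aux_connectors):
    """Total budget for a board: the slot plus each aux connector."""
    return POWER_SOURCES_W["slot"] + sum(
        POWER_SOURCES_W[c] for c in aux_connectors
    )

# The 225W figure under discussion: slot + two 6-pin connectors
print(board_power_budget(["6pin", "6pin"]))  # 225
# A 375W board such as the S10000: slot + two 8-pin connectors
print(board_power_budget(["8pin", "8pin"]))  # 375
```

Slot plus two 6-pin connectors gives the 225W being argued about, while slot plus two 8-pin connectors gives the S10000's 375W.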





Xzibit said:


> Hey, smarty pants, all those cards still use 6-pin and/or 8-pin aux connectors.





Xzibit said:


> What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility





Xzibit said:


> Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot.
> 
> Nvidia GPU Accelerator Board Specifications
> Tesla K20X
> Tesla K20



Hmm... I reference PCIe Gen 2 power output + 6-pin power, and mention there is a power difference from PCIe 2.0 to 2.1 & 3.0. Oh yeah, I'm also linking to Nvidia's own website, with specifications of two cards and diagrams of aux connectors and how they should be used.

And your conclusion is that I thought the PCIe slot was the sole source of power.

Like I said several times before: *follow your own advice*, because you're something else.



HumanSmoke said:


> I'm pretty sure GK110 will be validated for PCI-E 3.0, just as every other Kepler GPU before it was. The validation process is (like X79) an Intel issue. Pity you can't get PCI-E 3.0 validation on an AMD chipset; it would make life simpler. Heise has already clarified the validation process for K20/K20X



Speculation is fine, but if I have to choose between your speculation and what Nvidia has posted on their specification sheets...

I'll believe Nvidia.



HumanSmoke said:


> Now, if you don't think that server racks largely cater for a 225W TDP-specced board, I suggest you furnish some proof to the contrary (hey, you could find all the vendors who spec their blades for 375W TDP boards, for extra credit)... c'mon, make a name for yourself, prove Ryan Smith at Anandtech wrong. While you're at it, try to find where I made any reference to 225W being a server specification for add-in boards. The only mention I made was regarding *boards with a 225W specification* being generally standardized for server racks.



Classic troll move: I can't provide proof of what I say, so why don't you disprove it?

There is more than just one company. It's a shame you spend all your time just trolling for Nvidia.

You shouldn't get mad when you're wrong. When you're wrong, you're wrong. Move on; don't make up stuff or lash out at people who pointed out something you didn't like. Provide credible links to back up your views.

Being hostile towards others with a different view than yours is no way to enhance the community in this forum. There is no reason to jump into non-Nvidia threads and start disparaging them or their posters because you didn't like the content or because someone doesn't like the same company as much as you do.



Think I'll go have me some hot cocoa.


----------



## KooKKiK (Nov 18, 2012)

HumanSmoke said:


> I understand what you're saying, which is basically that the printed specification doesn't match real-world power usage. A fact that I think we are in agreement on. My point is that the printed specification for professional graphics and arithmetic co-processors is a guideline only, and that regardless of the stated number, I believe that one architecture is favoured over another with regard to performance/watt.
> 
> HPCWire is of the same opinion - that is to say, that Nvidia's GK110 has superior efficiency to that of the S10000 and Xeon Phi when judged on their own performance. Moreover, they believe that Beacon (Xeon Phi) and SANAM (S10000) only sit at the top of the Green500 list because of their asymmetrical configuration (a very low CPU-to-GPU ratio) - something I also noted earlier.



OK, show me the real power consumption test and I will believe you.


Not that old and completely wrong argument repeating again.



> | Card | TDP | Single precision | SP GFLOPS/W | Double precision | DP GFLOPS/W |
> |---|---|---|---|---|---|
> | W/S9000 | 225W | 3.23 TFLOPS | 14.36 | 0.81 TFLOPS | 3.58 |
> | S10000 | 375W | 5.91 TFLOPS | 15.76 | 1.48 TFLOPS | 3.95 |
> | K10 | 225W | 4.85 TFLOPS | 21.56 | 0.19 TFLOPS | negligible |
> 
> ...
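The GFLOPS-per-watt columns in that quote are just peak TFLOPS divided by board TDP. A quick sketch of the calculation (Python, using the figures quoted above; small last-digit differences from the quoted numbers come from rounding of the TFLOPS values):

```python
# Efficiency is simply peak throughput divided by board TDP.
def gflops_per_watt(peak_tflops, tdp_w):
    return peak_tflops * 1000 / tdp_w  # TFLOPS -> GFLOPS, then per watt

# Peak-throughput and TDP figures as quoted (vendor peak numbers).
#   name:      (TDP W, SP TFLOPS, DP TFLOPS)
cards = {
    "W/S9000": (225, 3.23, 0.81),
    "S10000":  (375, 5.91, 1.48),
    "K10":     (225, 4.85, 0.19),
}

for name, (tdp_w, sp, dp) in cards.items():
    print(f"{name}: SP {gflops_per_watt(sp, tdp_w):.2f} GFLOPS/W, "
          f"DP {gflops_per_watt(dp, tdp_w):.2f} GFLOPS/W")
```

For example, the S10000's 5.91 TFLOPS at 375W works out to 15.76 GFLOPS/W single precision, matching the quoted table.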


----------



## Frick (Nov 18, 2012)

I've actually read the entire thread and it feels like you're not talking (typing? tylking?) to each other but *over *each other. It's quite funny actually.


----------



## HumanSmoke (Nov 18, 2012)

KooKKiK said:


> OK, show me the real power consumption test and I will believe you


Sure- here you go. Southern Islands FirePro vs Kepler Quadro


----------

