# NVIDIA Maximus Fuels Workstation Revolution With Kepler Architecture



## Cristian_25H (Aug 7, 2012)

NVIDIA today launched the second generation of its breakthrough workstation platform, NVIDIA Maximus, featuring Kepler, the fastest, most efficient GPU architecture.

The Maximus platform, introduced in November, gives workstation users the ability to simultaneously perform complex analysis and visualization on a single machine. Now supported by Kepler-based GPUs, Maximus delivers unparalleled performance and efficiency to professionals in fields as varied as manufacturing, visual effects and oil exploration.


Maximus initially broke new ground as a single system that handles interactive graphics and the compute-intensive number crunching required to simulate or render them -- resulting in dramatically accelerated workflows. With this second generation of Maximus, compute work is assigned to run on the new NVIDIA Tesla K20 GPU computing accelerator, freeing up the new NVIDIA Quadro K5000 GPU to handle graphics functions. Maximus unified technology transparently and automatically assigns visualization and simulation or rendering work to the right processor.
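The division of labor described above can be pictured with a toy dispatcher. This is purely illustrative: the `Job` type, device table, and `assign` function are invented for this sketch and are not part of any NVIDIA API; the real Maximus driver does the routing transparently.

```python
# Toy illustration of the Maximus idea: route each class of work to the
# GPU suited for it. All names here are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    kind: str  # "graphics" or "compute"

DEVICES = {"graphics": "Quadro K5000", "compute": "Tesla K20"}

def assign(job: Job) -> str:
    """Pick the processor class appropriate for the job's workload."""
    return DEVICES[job.kind]

jobs = [Job("viewport render", "graphics"), Job("CFD simulation", "compute")]
for job in jobs:
    print(f"{job.name} -> {assign(job)}")
```

The point of the design is that the application never makes this choice itself; the platform does.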

"With the parallel processing capabilities enabled by NVIDIA Maximus systems, we can now be 10 times more creative," said Alan Barrington, a designer at the Mercedes-Benz Advanced Design Center California. "With the NVIDIA Maximus-powered environment, we can continue to refine and improve our design, right up to the last minute. We can stay efficient and multitask. We no longer have to settle for less or to compromise on our creativity."

*NVIDIA Maximus: Boosting Graphics and Compute*
Powered by the Kepler architecture, the second generation of Maximus improves both the visualization and computation capabilities of the platform.

Key NVIDIA Quadro K5000 GPU features include:

 ● Bindless Textures that give users the ability to reference over 1 million textures directly in memory while reducing CPU overhead
 ● FXAA/TXAA film-style anti-aliasing technologies for outstanding image quality
 ● Increased frame buffer capacity of 4 GB, plus a next-generation PCIe-3 bus interconnect that accelerates data movement by 2x compared with PCIe-2
 ● An all-new display engine capable of driving up to four displays simultaneously with a single K5000
 ● DisplayPort 1.2 support for resolutions up to 3840x2160 @ 60 Hz

Key NVIDIA Tesla K20 GPU features include:

 ● SMX streaming multiprocessor technology for up to a 3x performance per watt advantage
 ● Dynamic Parallelism and Hyper-Q GPU technologies for simplified parallel programming and dramatically faster performance

*Transforming Workflows Across Industries, From Jet Engine Design to Seismic Analysis*
Here are some examples of how Maximus is transforming workflows across industries:

 ● For the manufacturing and design industry, NVIDIA Maximus-powered workstations enable professionals to design without limits on size of assemblies, number of components, image quality, or resolution. Designers can use real-world physics, lighting, and materials during interactive design, and visualize with photo-realistic image quality.

"RTT DeltaGen offers custom features such as rapid raytracing, rendering and scalability, automated layer rendering, and computational fluid dynamics visualization and analysis," said Peter Stevenson, CEO, RTT USA, Inc. "Maximus second generation is remarkable, forward-thinking technology that will further empower our clients by providing them with the ability to do interactive design and simulation, which will accelerate their time to insight of their engineering data so they can make final design decisions even faster."

 ● For the media and entertainment industry, Maximus gives digital content creators more freedom and creative flexibility. Film editors and animators can work in real-time on their most challenging projects, create complex simulations and interactive visual effects, and work in 3D texture painting workflows without being constrained by a maximum number of textures.

Chaos Group provides state of the art rendering solutions for visual effects, film, media and entertainment, and design industries. V-Ray RT is a powerful, interactive raytracing render engine optimized for NVIDIA CUDA architecture that changes the way 3D artists and visualization specialists approach the lighting and shading setup.

"We're constantly working to ensure we create the best tools for customer workflows," said Lon Grohs, vice president, Business Development at Chaos Group. "Our CUDA based V-Ray plug-in for Maya is one of our latest developments to meet the needs of the most demanding VFX and film studios around the world, and with a Kepler-based NVIDIA Maximus system, 3D artists will spend less time waiting and more time being creative."

Home of some of the industry's most talented artists, a52 is an innovative visual effects studio located in Santa Monica, CA that has created many impressive effects through the seamless integration of 2D and photoreal CGI.

"We now have the opportunity to produce more iterations of color and lighting to get to where we want faster," said Chris Janney, VFX technical director, a52. "With faster turnaround, we can submit shots much sooner for client approvals. I wouldn't hesitate to recommend a Maximus setup particularly for artists working in V-Ray RT -- the time savings alone are significant, but it's also allowing our artists a better workflow in the creative process, without long pauses for renders. That is where the Maximus setup really helps our look development process."

 ● For geophysicists and seismologists, NVIDIA Maximus-powered workstations deliver more accurate data, in less time, on the location of oil and gas deposits around the world.

Paradigm is a global provider of analytical and information management solutions for the oil and gas and mining industries. Paradigm software enables users to locate new oil and gas reservoirs, create dynamic digital models of the earth's surface, and optimize production from new and existing reservoirs.

Its Paradigm 2011.1, a comprehensive application suite of exploration, development and production applications, provides accelerated computation of seismic trace attributes through use of NVIDIA Maximus technology.

"Paradigm software leveraging Maximus technology is an innovative implementation that enables seismic interpreters to calculate seismic trace attributes at their desktop in interactive or dramatically reduced times," said Laura Evins, product manager of seismic attributes, Paradigm. "This provides huge benefits to our oil and gas clients, as they can now more quickly recover structural or depositional features from seismic data. We believe the second generation of Maximus will accelerate their time to discovery even further, making our combined technology even more cost effective."

*Availability and Pricing*
Second generation NVIDIA Maximus-powered desktop workstations featuring the new NVIDIA Quadro K5000 ($2,249 MSRP, USD) plus the new NVIDIA Tesla K20 GPU ($3,199 MSRP, USD) will be available starting in December 2012. The NVIDIA Quadro K5000 will be available as a separate discrete desktop GPU starting in October 2012.

*ISV Certifications and Support*
Leading software vendors certify and support NVIDIA Maximus-powered workstations, including Adobe, ANSYS, Autodesk, Bunkspeed, Dassault Systèmes, MathWorks and Paradigm.

*Workstation OEM Support*
The world's leading workstation OEMs -- including HP, Dell, Lenovo, and Fujitsu, plus systems integrators such as BOXX Technologies and Supermicro -- will offer second generation NVIDIA Maximus-powered workstations.

 ● New NVIDIA Maximus-powered HP Z Workstations will include the HP Z420, Z620 and HP's ultimate workstation, the Z820.

"HP customers lead their industries, pushing the limits of technology to help bring consumers the next big blockbuster, alternative energy resources and medical advancements that would otherwise not be possible," said Jeff Wood, vice president, Worldwide Marketing, Commercial Solutions, HP. "This next generation of NVIDIA Maximus technology will provide the crucial horsepower and productivity demands of compute-intensive modern workflows, increasing productivity and ultimately ROI for our customers."

 ● New NVIDIA Maximus-powered Dell Precision T3600, T5600 and T7600 tower, and R5500 rack workstations will be available worldwide early next year.

"Dell Precision workstations with the second generation of NVIDIA Maximus make the promise of designing at the speed of thought a reality for creative and design professionals," said Efrain Rovira, executive director, Dell Precision Workstations. "NVIDIA's fast Kepler GPU architecture combined with our most powerful tower and rack workstations provides unprecedented visual design and simulation performance for our customers."

 ● New NVIDIA Maximus-powered Lenovo ThinkStation S30, D30, and C30 workstations will be available worldwide.

"Mission-critical design applications and simulation workflows are being accelerated like never before with NVIDIA Maximus technology," said Rob Herman, director of Product and Vertical Solutions, ThinkStation Business Unit. "With new, next generation NVIDIA Maximus-powered ThinkStations, users have even more parallel processing horsepower to boost their productivity, creativity, and time-to-market. Our customers can look forward to improved computing and visualization capabilities that empower them to achieve results faster than ever."

 ● New Fujitsu CELSIUS M720 and R920 NVIDIA Maximus-powered desktop workstations will be available in EMEA, India and Japan.

"With the next-generation of NVIDIA Maximus technology powering our Fujitsu CELSIUS desktop workstations, we continue to provide the most innovative technology for accelerating modern workflows that utilize high-performance 3D modeling, animation, real-time visualization, analysis, and simulation applications," said Dieter Heiss, vice president, Workplace Systems Product Development Group, at Fujitsu Technology Solutions. "These new systems will provide the highest levels of performance that professionals need."

*View at TechPowerUp Main Site*


----------



## NHKS (Aug 7, 2012)

Cristian_25H said:


> *Availability and Pricing*
> the new NVIDIA Tesla K20 GPU ($3,199 MSRP, USD) will be available starting in December 2012.



that means the GK110... and sooner or later the GTX 7xx series that will also use it..


----------



## renz496 (Aug 7, 2012)

so can we expect GTX780 around december / jan 2013 now?


----------



## dj-electric (Aug 7, 2012)

Seems like hardware manufacturers have really run out of names these days...


----------



## NHKS (Aug 7, 2012)

renz496 said:


> so can we expect GTX780 around december / jan 2013 now?



 sometime Jan-Mar 2013 is likely.. unless nv changes its plans




Dj-ElectriC said:


> Seems like hardware manufacturers have really run out of names these days...



and when they do they just add suffix / prefix to the existing names.. like K5000 >> "kepler"5000 to succeed the existing 5000??!


----------



## Casecutter (Aug 7, 2012)

I don't know how anyone translates all the mumbo-jumbo in this as indicating it's a GK100 foundation, or that it signifies there's going to be a "Big Kepler" offered to anyone other than corporate clients. It could be, though: it talks up the "computation capabilities of the platform" and later promises "dramatically faster", and given what we know from the GK104, that may point to something.

Here's what I read...


Cristian_25H said:


> ability to simultaneously perform complex analysis and visualization on a single machine... With this second generation of Maximus, compute work is assigned to run on the new NVIDIA Tesla K20 GPU computing accelerator, freeing up the new NVIDIA Quadro K5000 GPU to handle graphics functions. Maximus _unified technology_ transparently and automatically _assigns_ visualization and simulation or rendering work _to the right processor_.



Sounds like two independent processors?


----------



## NHKS (Aug 7, 2012)

^ maybe u should see this... 
http://www.nvidia.com/content/tesla/pdf/NV_DS_TeslaK_Family_May_2012_LR.pdf

K10   - 2x GK104
K20   - 1x GK110
K5000 - 1x GK104

GK110 being used in the GTX 7xx series is just common speculation since it showed up in nv material... it might be true, or it may turn out to be a workstation-exclusive chip. The former is more likely: it is a big chip, and nv probably wanted the 28nm process to mature before using it..

as for Maximus tech.. it's not new.. it's like Optimus tech (switching iGP & GPU) from nv.. in this case, it switches between 'design/graphics' & 'compute' processes if the workstation has both Quadro (design) & Tesla (compute) cards.. kind of like a hybrid SLI..


----------



## Mistral (Aug 7, 2012)

Knee-jerk reaction to the AMD announcement from earlier?

In any case, I call BS on that plane. With all the crud attached to it it won't even be able to take off.


----------



## TheMailMan78 (Aug 7, 2012)

Mistral said:


> Knee-jerk reaction to the AMD announcement from earlier?
> 
> In any case, I call BS on that plane. With all the crud attached to it it won't even be able to take off.



The A-1 Skyraider could.


----------



## SIGSEGV (Aug 7, 2012)

Mistral said:


> Knee-jerk reaction to the AMD announcement from earlier?
> 
> In any case, I call BS on that plane. With all the crud attached to it it won't even be able to take off.



lmao, 
those products are still a dream and won't be released for another 4-5 months from now, meh..


----------



## Fluffmeister (Aug 7, 2012)

Nv have a wide and varied customer base and stacks of cards will already be sold to existing customers.

What people think here is pretty much irrelevant.


----------



## wolf (Aug 7, 2012)

NHKS said:


> that means the GK110... and sooner or later the GTX 7xx series that will also use it..





renz496 said:


> so can we expect GTX780 around december / jan 2013 now?



read boys!



> The NVIDIA Quadro K5000 will be available as a separate discrete desktop GPU starting in October 2012.



enter GK110 GPU


----------



## sergionography (Aug 7, 2012)

and this is where all of nvidia's big talk about how great kepler is comes to an end lol, amd's mid-range chips offer better compute performance than nvidia's best

http://semiaccurate.com/2012/08/07/amd-firepro-2012-lineup/
2.4 tf for pitcairn

http://www.nvidia.com/object/quadro-k5000.html#pdpContent=2
2.1tf for gk104

and pitcairn is a 212mm2 die while gk104 is a 300mm2 die
so even if nvidia does bring a big kepler chip it will still be behind unless they do a good deal of tweaking within the architecture, but then again, amd is doing the same thing
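For what it's worth, the headline TFLOP figures above are just textbook peaks: shader count × clock × 2 FLOPs per cycle (one fused multiply-add). A quick sketch with the published shader counts; the clock figures here are approximations I'm assuming, not official numbers:

```python
def peak_sp_tflops(shaders, clock_ghz):
    """Theoretical peak single precision: each shader can retire one
    fused multiply-add (2 FLOPs) per cycle."""
    return shaders * clock_ghz * 2 / 1000.0

# Quadro K5000 (GK104): 1536 shaders at roughly 0.70 GHz (assumed clock)
# FirePro W7000 (Pitcairn): 1280 shaders at roughly 0.95 GHz (assumed clock)
print(peak_sp_tflops(1536, 0.70))  # ~2.15, quoted as 2.1 TFLOPs
print(peak_sp_tflops(1280, 0.95))  # ~2.43, quoted as 2.4 TFLOPs
```

These are ceilings, not measurements; how close a real workload gets depends entirely on the architecture and the code.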


----------



## Steevo (Aug 7, 2012)

TheMailMan78 said:


> A1- Sky Raider could.



Farva.....


----------



## Xzibit (Aug 7, 2012)

So this card @ $2,249 MSRP, USD is supposed to compete against the AMD FirePro W7000 @ $899 MSRP, USD.

Talk about a price premium...

Nvidia Quadro K5000
2.1 TFLOPs @ SP

AMD FirePro W7000
2.5 TFLOPs @ SP


----------



## GC_PaNzerFIN (Aug 7, 2012)

Can't wait for the GeForce parts. GK104 was/is good, but I want much faster single GPUs. This should do it.


----------



## Benetanegia (Aug 7, 2012)

sergionography said:


> and this is were all the nvidia's big talk about how great kepler is comes to an end lol, amd's mid range chips offer better compute performance than nvidias best
> 
> http://semiaccurate.com/2012/08/07/amd-firepro-2012-lineup/
> 2.4 tf for pitcairn
> ...





Xzibit said:


> So this card @ $2,249 MSRP, USD is supposed to compete against the AMD FirePro W7000 @ $899 MSRP, USD.
> 
> Talk about a price premium...
> 
> ...



Yeah, and a Fermi card with little more than 1 TFLOP pwns both, so maybe maximum theoretical TFLOPs are not the be-all and end-all of computing. Get a clue...


----------



## sanadanosa (Aug 7, 2012)

anyone notice that this card has a small PCB like the GTX 670?


----------



## NC37 (Aug 7, 2012)

Just gotta say it, can't help it. The plane in that first image...impossible to fly with that much firepower on it. No way it could even get off the ground. WWI fighter planes in the background would chew it to pieces.


----------



## HumanSmoke (Aug 7, 2012)

NC37 said:


> Just gotta say it, can't help it. The plane in that first image...impossible to fly with that much firepower on it. No way it could even get off the ground. WWI fighter planes in the background would chew it to pieces.


That's because video games and tech demonstrators always represent reality? Let me guess, you send a weekly email to Unigine to berate them about the impossibility of floating islands.
Just for the record, the aircraft seems modelled on the Granville Brothers Gee Bee Model Z, which first flew in 1931






The underwing missiles appear to be Raytheon AIM-9 Sidewinders (1956-present), and the accompanying biplanes (Hawker Hardys?) have no offensive armament apparent. No doubt the pilots are all sporting non-realistic prosthetic appendages and phasers not set to "stun"

And just for the record, here's an A-1 Skyraider (as mentioned by TheMailman) with less than half its stores pylons occupied. Similarly or heavier equipped Skyraiders recorded air-to-air dogfight kills against NVAF MiG-17 fighters during the Vietnam War.
And just to get us back on topic


Xzibit said:


> So this card @ $2,249 MSRP, USD is supposed to compete against the AMD FirePro W7000 @ $899 MSRP, USD.
> Talk about a price premium...
> Nvidia Quadro K5000
> 2.1 TFLOPs @ SP
> ...


Means little or nothing.
Case in point: my partner works at my district's hospital, in the medical imaging department. She tells me that AMD did not even compete for the image computation contract. Not because they don't have a graphics compute solution, but because the contract stipulates 24/7 support (on site, as well as instant replacement of parts, 24/7 phone, and guaranteed replacement stock for 3-5 years) and a coherent, all-encompassing software environment... so along with the Tesla/Quadro heavy lifting required for MRI, CT, PET and SPECT (her area of expertise), Nvidia also gets the contract for hundreds of laptop GPUs that technicians/doctors/maintenance use, fixed workstations being largely phased out. Factor in inertia (how often big business upgrades its systems), retraining, and a reluctance to move away from technology/support which already delivers, and you should be able to see that *theoretical* compute numbers mean sweet fuck all.

AMD are basically reinforcing that view. Look at AMD's WS card press release: I count SIX distinct references to "our competitor/competition". Worried much? AMD referenced *one* design win (Supermicro). Clearly, if AMD's solution (as opposed to GPU) was so much superior, surely they would be counting HP, Dell, Lenovo et al as already on board?


----------



## bogami (Aug 8, 2012)

Looks like a GTX 670, more or less. It looks like nVidia wants to sell the GK104 GPU as its best Kepler unit for as long as it can. AMD simply cannot compete with the 7000 series, regardless of having the genuinely more powerful GPU, because of driver problems (50% tessellation increase!..). Whether or not that is the real reason, I believe the K110 (80% more) is certainly designed and tested, but runs hotter than the K104, and the profit made on the current middle-class model matters more, because a GPU with fewer features is cheaper to produce with less production downtime. Perhaps they are simply waiting for the process to shrink to solve the overheating at such high frequencies. It would be good to get real answers; the official news is absolutely vague and misleading. In Europe the GTX 690 sells for €1,050, which is approximately $1,250. I will not say that the GTX 680 with the GK104 GPU is not a good performer; it gives much more than we could expect from the forecasts (x3000 DX11). But now I think the nVidia team is sitting on its hands counting money: they have not improved the card design or the driver line for the many systems where SLI performs at half its rating (590 GTX SLI...), and despite the growing use of Thunderbolt they cannot find a way to put six outputs on one card in this series. So we see ever-thicker cards that needlessly occupy adjacent slots, which could be freed by installing a water block. On a card as expensive as the K5000, such improvements should be taken for granted; AMD has shown them with its W9000 model (lacking only a water block), and the radiator and pump can certainly be afforded by those with money for such an expensive buy. Sorry this has piled up so much, but it does not justify the price of this product.


----------



## TheMailMan78 (Aug 8, 2012)

Steevo said:


> Farva.....



Don't understand.



NC37 said:


> Just gotta say it, can't help it. The plane in that first image...impossible to fly with that much firepower on it. No way it could even get off the ground. WWI fighter planes in the background would chew it to pieces.



The A-1 Skyraider did, and it shot down jets during Nam!






This one here is just a nice painting but it gives you an idea.






Anyway we better get back on topic. Just saying there are some pretty crazy things in the past we used.


----------



## HumanSmoke (Aug 8, 2012)

TheMailMan78 said:


> A-1 Sky Raider did and it shot down jets during Nam!.


Aye. MiG-17 kills by Skyraiders are confirmed for VA-25 (U.S. Naval Attack Squadron 25), flown by C. Johnson and C. Hartmann, and VA-176 (flown by W. Patton), on 20 June 1965 and 9 October 1966 respectively. Both kills were made with 20mm cannon fire.

An ungainly appearance isn't an absolute indicator of effectiveness.


----------



## Recus (Aug 8, 2012)

Quadro is already up and running, while AMD can't make FirePro commercials because of the enormous power consumption.
[yt]-zy_nKGsB68[/yt]



Xzibit said:


> So this card @ $2,249 MSRP, USD is supposed to compete against the AMD FirePro W7000 @ $899 MSRP, USD.








Firepro W9000 - $3,999.


----------



## sergionography (Aug 8, 2012)

Benetanegia said:


> Yeah and a Fermi card with little more than 1 TFLOP pwns both, so maybe maximum theoretical TFLOPs are not the be all end all of computing. Get a clue...


Waaaat? Where in the world do you get your info from? I have yet to see a compute benchmark with Fermi beating GCN; unless an app is heavily optimized for CUDA it's very unlikely, and anything OpenCL is AMD for the win.
Theoretical compute figures demonstrate the peak computational ability in a perfect world, I understand that point, but I'm aware that GCN is more efficient in general compute than anything NVIDIA has. If anything is holding their professional business back, it's probably their customer support. But considering they waited this long to release these cards even though they had the designs ready and selling in the mainstream, I'd say they made sure stability and drivers were optimized. That part is my optimistic assumption, though.


----------



## Casecutter (Aug 8, 2012)

The Skyraider: the Warthog before there was an A-10



Recus said:


> Quadro already running, while AMD can't make Firepro commercials for enormous power consumption


And Nvidia has fanciful old planes while having no product till Q4, if then… The saga of Kepler continues… dragging on! 

As for the compute that businesses actually use such cards for, I'd concur AMD's GCN performance/watt is hard to argue with. And yes, let's hope the driver team that was working on monthly desktop updates has now been allocated to workstation, and is really succeeding in augmenting that software's support.


----------



## Steevo (Aug 8, 2012)

TheMailMan78 said:


> Don't understand.
> 
> 
> .



Farva: It doesn't matter cause I'm going to win ten million dollars.
Thorny: What are you going to do with ten million dollars, and you can't say buy the Cleveland Cavaliers.
Farva: I'd buy a ten million dollar car.
Thorny: That's a good investment but I'd still pull you over.
Farva: Bull Shit. You couldn't pull me over, and even if you did I'd activate my car's wings and I'd fly away.


Are we really arguing about a card that hasn't even seen the light of day as far as we know? Fanboyism at its worst.


----------



## TheMailMan78 (Aug 8, 2012)

Steevo said:


> Farva: It doesn't matter cause I'm going to win ten million dollars.
> Thorny: What are you going to do with ten million dollars, and you can't say buy the Cleveland Cavaliers.
> Farva: I'd buy a ten million dollar car.
> Thorny: That's a good investment but I'd still pull you over.
> ...



I still have no clue what you just said. But I do like the idea of a flying car.


----------



## Steevo (Aug 8, 2012)

TheMailMan78 said:


> I still have no clue what you just said. But I do like the idea of a flying car.




I feel sorry for your soul.


Super Troopers, watch it.


----------



## TheMailMan78 (Aug 8, 2012)

Steevo said:


> I feel sorry for your soul.
> 
> 
> Super Troopers, watch it.



I have a busy life man. I just saw Sling Blade for the first time this past week. But Ill check it out. 

Anyway we should get back on topic.
Does anyone get the feeling NVIDIA and AMD charge more for "pro" parts for no good reason other than that it's an industrial product?


----------



## Benetanegia (Aug 8, 2012)

sergionography said:


> Waaaat? Were in the world do you get your info from? I'm yet to see a compute benchmark with Fermi beating gcn, unless an app is heavily optimized for cuda its very unlikely, anything opencl is amd for the win.
> Theoretical compute figures demonstrate the peak computational ability in a perfect world I understand that point, but I'm aware that gcn is more efficient in general compute than anything NVIDIA has, if anything is holding their professional business back is probably their customer support, but considering the fact they waited this long to release these cards even though they had the designs readily and selling in the mainstream tells me they made sure stability and drivers were optimized, but that part is my optimistic assumption.



First of all, my comment was clearly not meant to say that Fermi rules or anything. Just that TFLOPs mean next to nothing, as can be seen by Fermi cards beating some GCN-based cards with twice the maximum theoretical flops. AMD's HD 6000 and HD 5000 had very high TFLOPs too and were piss poor for compute.

As you can see above, the Fermi card can beat every other card (AESEncrypt) and pwns the HD 5000 and 6000 cards, all of which have 2x or even 3x the maximum theoretical flops. And in DirectX 11 compute, Kepler apparently pwns. We could try to conclude several things from this, but the only real takeaway is that GPU compute is still in its infancy: it heavily depends on which app has been optimized for which GPU or architecture, and most consumer-level apps and benchmarks are simply not properly optimized for every card and architecture. It's a hit-or-miss thing.

The GCN card that handily beats Fermi is not the $899 card, it's the $3,999 one; big difference. Against the $899 card the top Fermi card wins as many available benches as it loses, which demonstrates that you simply can't compare 2 architectures based on an arbitrary metric such as maximum theoretical flops. I could design a GPU with 200,000 very simple shaders and it would pwn everything on Earth in that metric, but it would mean nothing unless the ISA was complex enough to actually do something with those ALUs, unless I had a cache and memory system capable of feeding them, etc.

Comparing 2 different GPU architectures' compute capabilities based on TFLOPs is much worse than comparing CPUs such as Sandy Bridge and Bulldozer, because CPUs are much more efficient at reaching their maximum flops than GPUs, and thus the differences from optimizations are inherently smaller. And yet comparing 2 CPU architectures based on their maximum theoretical numbers is stupid and useless; comparing GPUs that way is simply the next level of bad.


----------



## Xzibit (Aug 8, 2012)

So in these charts, we can assume that you're comparing

GK104's FP64 at 1/24 of FP32 to Tahiti's FP64 at 1/4, SP & DP?

I would assume so, since these GPGPUs are aimed at professionals, but I'd hate to find out you're using gaming charts/file encryption that don't take the benefits of a professional card into account to exaggerate your point.


----------



## Benetanegia (Aug 8, 2012)

Xzibit said:


> So in these charts, we can assume that you're comparing
> 
> GK104's FP64 at 1/24 of FP32 to Tahiti's FP64 at 1/4, SP & DP?
> 
> I would assume so, since these GPGPUs are aimed at professionals, but I'd hate to find out you're using gaming charts/file encryption that don't take the benefits of a professional card into account to exaggerate your point.



If you'd read the OP: GK104 is used in the Quadro, and FP64 compute ability has little to no use here.

In fact, the cards mentioned/compared by both of you are visualization cards and not compute cards. Their value is weighted by their ability to render triangles and pixels, both tasks at which GK104 excels, and in the case of triangles/primitives by a really big margin. So yes, let's take the actual benefits into account, shall we? Or we can continue making a false claim based on arbitrary numbers that mean absolutely nothing in the real world and pretend the value of a card must be weighted on that false principle.

GK110, the actual compute chip, will have at least a 1/2 FP64 rate (plus a shitton of TFLOPs), though 2/3 and 1/1 have been mentioned too. But the position of the professional compute world has never been clearer than when they asked for a GK104-based card, because, you know, 64-bit is not something EVERYBODY needs or wants, professional or not. Having one chip with FP64 is required if you want to compete, yes, and if you have only one suitable high-end chip, that chip will need FP64 performance, of course. But if you have 2 winners at the top, not all of your chips need it. In the end reality wins, and professionals have been choosing their preferred solution, and THAT is the best indication of which platform is ultimately the better one.
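The FP64-rate arithmetic in this exchange can be checked directly. The 1/24 (GK104) and 1/4 (Tahiti) DP:SP ratios are the ones discussed in the thread; the shader counts and clocks below are approximations I'm assuming, not official specs:

```python
def peak_tflops(shaders, clock_ghz, fp64_rate):
    """Peak SP and DP throughput; fp64_rate is the DP:SP issue ratio."""
    sp = shaders * clock_ghz * 2 / 1000.0  # 2 FLOPs/cycle (one FMA)
    return sp, sp * fp64_rate

# Assumed specs: GK104/Quadro K5000 at ~0.70 GHz, Tahiti/FirePro W9000 at ~0.975 GHz
for name, shaders, clk, rate in [
    ("GK104 / Quadro K5000", 1536, 0.70, 1 / 24),
    ("Tahiti / FirePro W9000", 2048, 0.975, 1 / 4),
]:
    sp, dp = peak_tflops(shaders, clk, rate)
    print(f"{name}: {sp:.2f} SP TFLOPs, {dp:.2f} DP TFLOPs")
```

The order-of-magnitude gap in DP throughput is the whole reason a GK104-based Quadro is pitched at visualization rather than FP64-heavy compute.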


----------



## HumanSmoke (Aug 8, 2012)

Casecutter said:


> And Nvidia has fanciful old planes while no product till Q4, if… The saga of Kepler continues… dragging on!


Yup. Nvidia must be gutted.


> The  momentum AMD had been able to muster in the related market for professional graphics hardware has not only petered out, it's now turned backward. AMD's FirePro brand did not gain on Nvidia's Quadro in Q4'11, rather it went in reverse, coming in at 18.4%. Worse, in the first quarter of 2012, AMD's taken a bigger step backward, with FirePro now falling to just 15% of units.


[JPR workstation report]


Casecutter said:


> As to compute for what business actual use such cards for, I’d concur AMD's GCN performance/Watt is hard to hard to be incredulous


Yup, and a top fuel dragster can cover a quarter mile in less than four seconds. They aren't the first choice for picking up the kids from school or commuting to work. I suspect AMD could whup Nvidia for 2-3 generations of pro graphics on sheer performance (tho' I suspect they won't, as AMD are still angling for the APU as their go-to product line), but it still wouldn't make a dent in the overall market share: too many off-the-shelf solutions that work, too many high-powered/high-dollar partnerships. AMD is playing catch-up from a long way behind, and the actual hardware is the least of the issues.


Casecutter said:


> And yes let hope the driver team that was working monthly for the desktop updates has now been allocated to workstation and really succeeding in augmenting that software's support.


Ah, the $64,000 question. This is a company that habitually releases desktop cards with unoptimized drivers (imagine if Tahiti had access to Cat 12.7 at launch), and that touted VCE as a major bullet point for the HD 7000 then took 7+ months to actually get it working. Add in the fact that AMD have never made a secret that they aren't a software company, whereas their competition has demonstrably shown that they are, and I suspect AMD are going to need to call on a lot more than their driver team. AMD, when all is said and done, is still reliant upon third parties to weigh in for software integration.

Now, with Nvidia owning 85% of the pro graphics market (and likely more of HPC), and with a record of GPGPU prioritization, do you think it likely that they will relinquish their position? And that they don't have a solution in place to succeed the existing Fermi/Tesla?


----------



## Xzibit (Aug 9, 2012)

Benetanegia said:


> If you learn to read the OP. GK104 is used in Quadro and FP64 compute ability has little to no use here.
> 
> In fact, the cards mentioned/compared by both of you are visualization cards and not compute cards. Their value is weighted in their ability to render triangles and pixels, both tasks at which GK104 excels and in the case of triangles/primitives by a big big real big margin, so yes let's take the actual benefits into account shall we? Or we can continue making a false claim based on arbitrary numbers that mean absolute nothing in the real world and pretend the value of a card must be weighted based on that false principle.
> 
> GK110, the actual compute chip, will have at least a 1/2 FP64 rate (plus a shitton of Tflops), but 2/3 and 1/1 have been mentioned too. But the fact is that the position of the professional compute world has never been clearer than when they asked for a GK104-based card, because, you know, 64-bit is not something EVERYBODY needs or wants, professional or not. Having one chip with 64-bit is required if you want to compete, yes, and if you have only one suitable high-end chip, that chip will require FP64 performance, of course, but if you have 2 winners at the top, not all of your chips need it. In the end reality wins, and professionals have been choosing their preferred solution, and THAT is the best indication of which platform is ultimately the better one.



I'm sure you will correct me if I'm wrong.

AMD FirePro (W & S series)
*9000 - K20 (MSRP $3,999 vs $3,199)
*8000 - K10 (MSRP $1,599 vs $2,249) <- This is the comparison in question, performance/price, and if so the W8000 has 4x the throughput of the K5000
*7000 - K10 & NVS (MSRP $899 vs n/a)
*5000 - NVS (MSRP $599 vs n/a)
*x00 - The hundreds denotation will be entry level (MSRP sub-$599 vs n/a)

So when I did my comparison I was being generous. I don't doubt the K20 will be better than the *9000 *if* Nvidia does come through with what it's said to do, but since they decided to use the GK104 in Quadro, its expectations can go either way.

As for your assertion about FP64: just because you use FP64 doesn't mean you're locked into it. Companies may need the accuracy of FP64 DP for just one step of their development process, where the margin of error needs to be that tight, and then do the rest of the work in FP32 SP. It would be silly for them to buy separate hardware just for that purpose.
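That kind of mixed-precision split can be sketched in a few lines. This is a hypothetical illustration, not any company's actual pipeline: the stage functions and numbers are invented, and the only point is that the bulk of the work stays in FP32 while one accuracy-critical step runs in FP64.

```python
import numpy as np

def bulk_stage_fp32(data):
    # Throughput-bound stage: FP32 is plenty accurate here and runs
    # at full rate on chips with weak FP64 hardware.
    return data * np.float32(1.5)

def critical_stage_fp64(data):
    # Accuracy-bound stage: accumulate in FP64 so rounding error in
    # the million-element sum stays within a tight tolerance.
    return np.sum(data, dtype=np.float64)

x = np.full(1_000_000, 0.1, dtype=np.float32)
y = bulk_stage_fp32(x)           # stays FP32 end to end
total = critical_stage_fp64(y)   # the single FP64 step (~150000)
```

The same card can serve both stages; FP64 throughput only matters for the one step that actually uses it.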

Can you provide sources for your _*big big real big margin*_ and _*plus a shitton of Tflops*_ references? They sound so scientific.


----------



## Recus (Aug 9, 2012)

Top FirePro W9000 bites the dust against the previous-gen Quadro 6000.

http://hothardware.com/Reviews/AMDs-New-FirePro-W8000-W9000-Challenge-Nvidias-Quadro/?page=1


----------



## Frick (Aug 9, 2012)

Recus said:


> Top Firepro W9000 bites the dust by previous gen Quadro 6000. http://i56.tinypic.com/2e1sp3m.gif
> 
> http://hothardware.com/Reviews/AMDs-New-FirePro-W8000-W9000-Challenge-Nvidias-Quadro/?page=1



Ouch, ouch, ouch. That is quite bad. Let's hope it's drivers.


----------



## Benetanegia (Aug 9, 2012)

Xzibit said:


> I'm sure you will correct me if i'm wrong
> 
> AMD FirePro (W & S series)
> *9000 - K20 (MSRP $3,999 vs $3,199)
> ...



Bla bla bla. You keep insisting on comparing them based on the arbitrary value of maximum theoretical Tflops, or as you named it, throughput. I don't want to spend any more time on you. The link by Recus is more than enough. Now continue insisting that Tflops == performance all you want, all day long if you want.

As to my comment about a "big big real big" margin in triangles/primitives, I assumed I was speaking to informed people and it was not necessary to cite known numbers. Since you know the Tflops of each card so well, I thought it was safe to assume you knew the other specs too. But I should have realized you don't, or you probably wouldn't insist on that single metric so much.

Sigh... anyway, GK104 and GF110 do 4 triangles per clock; Cayman, Pitcairn and Tahiti do 2 per clock. The result is this:

Theoretical:

[theoretical triangle-throughput chart]

In practice (one implementation; for other examples look at Recus' link):

[benchmark chart]
I'd say more than 2x-3x the throughput can be labeled as real big.
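The gap is easy to sanity-check with back-of-envelope arithmetic. The clocks below are approximate reference values used purely for illustration:

```python
def tri_rate_gtris(tris_per_clock, core_clock_mhz):
    # Peak setup rate = triangles per clock * core clock, in Gtris/s.
    return tris_per_clock * core_clock_mhz / 1000.0

gtx680 = tri_rate_gtris(4, 1006)  # GK104: 4 tris/clock, ~1006 MHz base
hd7970 = tri_rate_gtris(2, 925)   # Tahiti: 2 tris/clock, ~925 MHz

ratio = gtx680 / hd7970           # roughly 2.2x on paper
```

That is the theoretical ceiling only; real scenes sit below it, which is why the practical charts matter.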

The mystery that HotHardware didn't seem to find an answer to in their "These results make little sense" section is not such a mystery once you know what you're talking about and where to look for a bottleneck.


----------



## Xzibit (Aug 9, 2012)

Oh snap, benchmarks that don't take FP32 or FP64 (SP, DP) into account.

http://www.tomshardware.com/reviews/geforce-gtx-690-benchmark,3193-12.html

What good is that throughput if it's not accurate or up to the specified standards?

By all means, buy a graphics card, right?


Show me some more benchmarks please.

I at least have to give credit to Recus for providing a substantive link, something you never do. Might want to bookmark that one, Benetanegia. You also might not want to lead with the whole driver-optimization thing, since it would undercut any advantage depending on whether the tests were done in CUDA vs OpenGL and such. I'm sure you know, since you spend a good amount of time talking about application and driver optimization when Nvidia is on top, right? Oh wait, my mistake...


----------



## Urlyin (Aug 9, 2012)

HumanSmoke said:


> That because video games and tech demonstrators always represent reality? Let me guess, you send a weekly email to Unigine to berate them about the the impossibility of floating islands.
> Just for the record- the aircraft seems modelled on the Granville Brothers Gee Bee Model Z which first flew in 1931
> http://www.passion-aviation.qc.ca/images/musflight08/geebee.jpg
> 
> ...



Even more off topic: AMD still makes cards for PACS systems, but under the BARCO brand...


----------



## Steevo (Aug 9, 2012)

This thread is still going? It's like an elevator that has reached the bottom floor but doesn't stop...



Will all the people who own one of these cards for actual use please line up?


----------



## Casecutter (Aug 9, 2012)

HumanSmoke said:


> Yup. Nvidia must be gutted.
> 
> Yup, and a top fuel dragster can cover a quarter mile ...
> AMD is playing catch-up from a long way behind- and the actual hardware is the least of the issues...
> ...



Never do I want to *not* see competition, or one company continually able to oppress another...

And yes, for AMD to gain any market share it doesn't need to deliver the utmost performance; it just needs to offer stable software/driver optimization at a good price/performance. The one thing it can't pass over is sustaining and vetting business workstation clients' software against its product. That's what the purchaser expects for such extortionate prices: no hiccups and white-glove support. AMD has to provide that or get out!


----------



## Benetanegia (Aug 9, 2012)

Xzibit said:


> Oh snap, Benchmarks that dont take Fp32 or Fp64 SP, DP into account
> 
> http://www.tomshardware.com/reviews/geforce-gtx-690-benchmark,3193-12.html
> 
> ...



Aaand you keep insisting on compute benchmarks for cards that are NOT for compute. Congratulations, your posts are the most useless ever made!!

It's funny how every benchmark I've shown is not valid according to you, but you post a Sandra bench (omg) and LuxMark, which is known to be non-optimized for Nvidia cards. And this is THE point that I've been making all along, so it's very funny you try to dismiss it. Some compute benches are won by Nvidia *loosely* and some are won by AMD *loosely*. This does not indicate any advantage in the chip/architecture; it only proves that optimization is very much required and it's not there yet, which again only proves that GPU compute is a new technology.

I've posted 6+ benches that cover ALL the characteristics necessary for a Quadro card and you have shown NOTHING substantial. So congratulations.



Xzibit said:


> So this card @ $2,249 MSRP, USD is suppose to compete against AMD FirePro W7000 @ $899 MSRP, USD.
> 
> Talk about a price premium...
> 
> ...



The above statement is what we are discussing. And my answer couldn't be easier to understand. It can compete because maximum theoretical Tflops is not the basis on which these types of cards are bought. They are bought because they are very good cards for visualization (and you can see this in ALL of my posts, as well as the one by Recus). If you can't understand this and you keep insisting on useless arithmetic FP benchmarks, I'll have to think that you are deficient or something. I don't want to reach that point.


----------



## HumanSmoke (Aug 9, 2012)

Casecutter said:


> Never do I want to *not* to see competition, or one company continually be able to oppress the another...


I think that any right-minded person would agree with that sentiment. Please don't see my post as "anti-AMD" or lump me in with the nonsensical "want to see AMD dead" trolls. In an ideal world we would have AMD, VIA and ARM challenging Intel; Nvidia and Intel challenging AMD; anyone challenging Microsoft. Unfortunately, what I want and the reality of the situation rarely converge.

Something, something, shooting the messenger



Casecutter said:


> And yes, for AMD to gain any market share it doesn't need to deliver the utmost performance it just needs to offer a stable software/driver optimization for a $/perf.   The one thing they can't pass over is sustaining and vetting business workstation clients software against their product.  That’s what the purchaser is expecting for such extortionate prices, they want no hick-ups and white glove support. AMD has to provide that or get out!


Again, a truism. Nvidia's drive has been to market an ecosystem (CUDA, GPGPU, SDKs, design software, pro drivers) almost entirely predicated upon the high-dollar professional market, which usually lumbers the desktop variants with an abundance of unneeded features, wasteful perf/watt, perf/$ and perf/mm^2, and an overcomplicated solution (for desktop). AMD seems to be on the learning curve that Nvidia started navigating some time back. I've no doubt that AMD, given the resources, can develop the hardware to challenge anything Nvidia can come up with; the problem is marketing, ongoing support and, most importantly, long-range vision. As you say, AMD needs to start delivering, and I hope they do... but what I see presently is a cash-strapped AMD trying to cover a lot of bases with very few resources.


----------



## SIGSEGV (Aug 10, 2012)

Benetanegia said:


> Luxmark which is known to be non-optimized for Nvidia cards.



I want to ask you something: how do you know LuxMark is non-optimized for Nvidia cards? Is it just because Nvidia loses on the LuxMark bench that you feel entitled to say that, or what? Please give me a scientific explanation. Thanks.

*AFAIK the LuxMark bench has debug info


----------



## Casecutter (Aug 10, 2012)

HumanSmoke said:


> what I see presently is a cash-strapped AMD trying to cover a lot of bases with very few resources.


Yep, and that might be why, a while back, AMD slowed driver releases from monthly to as-needed. I think Rory and company realize there's a good return in the workstation business, and let's hope that's where a lot of those resources were refocused... rather than to the unemployment lines.


----------



## Benetanegia (Aug 12, 2012)

SIGSEGV said:


> i want to ask you something, where do you know luxmark is known to be non-optimized for Nvidia cards? it just because nvidia lose on the luxmark bench then you have right to say like that or what? please give me a scientific explanations. thanks.
> 
> *afaik luxmark bench has debug infos



I was too busy responding to stupid posts from you-know-who and didn't see your post, sorry.

Well, it's a known fact among 3D artists and people in the rendering world. The most notable source was an Nvidia engineer himself, in the Nvidia forums, who simply admitted that they don't optimize for LuxMark because it's not a priority. Keep in mind it's only one of dozens of renderers, although it's in the spotlight now because AMD uses it in their internal benches, and that made some sites start using it too (probably AMD suggests its inclusion in their reviewer's guides).

So other than that, what can I do? What about some benches from Anandtech which show the matter at hand:

HD6970 review: [SmallLuxGPU chart]

GTX560 Ti review: [SmallLuxGPU chart]

HD7970 review: [SmallLuxGPU chart]

GTX680 review: [SmallLuxGPU chart]
As you can see there's always progression on AMD cards, e.g. HD6970: 8370 -> 10510 -> 11900 -> 13500.

This is normal behavior as both the application and AMD's drivers are optimized.

On Nvidia, on the other hand, performance is erratic at best, e.g. GTX570: 6104 -> 15440 -> 10500 -> 9800.

A massive improvement in one particular driver, then down, and then even lower again. This does not happen when something is optimized. Well, actually, you can see a hint of Nvidia's real potential in the GTX560 review. That is how much optimization, or the lack thereof, can affect results in a relatively new and unoptimized benchmark on a relatively new and unoptimized API.


----------



## Xzibit (Aug 12, 2012)

Benetanegia said:


> A massive improvement in one particular driver and then down and then even lower again. This does not happen when something is optimized. Well actually, you can see a hint of real Nvidia potential in the GTX560 review. That is how much optimization or lack thereof can affect results in a relatively new and unoptimized benchmark on a relatively new and unoptimized API.



You might want to check the benchmark description first as well (*HDR*, then *HDR+Vol*) before citing the GTX 560's potential.

Maybe you should post CUDA benchmarks to see if their own API shows a discrepancy in "optimization" and gauge that?

There are several out there. Here is one. A few sites, blogs and forums have done comparative tests for CUDA vs OpenCL as well, and the difference is minimal, single-digit percentages. Search away... It kind of throws a wrench in that whole "optimization" theory, though.

You also might want to finish the quote from the Nvidia engineer... It kind of paints the entire picture of the GK104.


----------



## Benetanegia (Aug 12, 2012)

Xzibit said:


> You might want to check the benchmark discription first aswell, *HDR* and then *HDR+Vol* before citing the GTX 560 potential
> 
> Maybe you should post CUDA benchmarks to see if there own API shows a discrepancy in "optimization" and gauge that ?
> 
> ...



Wow, you are clueless. Do you even know what OpenCL and CUDA really are? They are APIs, programming frameworks. OpenCL is no different in that regard from OpenGL, DirectX and the like. They are not applications; they are the tools used to create programs. You still have to create your programs and optimize every single one of them.

What you did is like posting a Battlefield 3 benchmark and pretending it's relevant to Crysis 2's optimization level and performance because both were developed in a DirectX environment. Next time you might as well compare any two applications because they were created with C++ or because they used the Windows SDK.

But it's even worse, because you could at least have posted some application benchmarks; instead you post a link to some matrix-matrix multiplication, Fourier transforms, etc., and pretend it demonstrates anything regarding LuxRender optimization. You're so clueless. It's like posting a video of some little boy doing simple arithmetic additions and pretending he must be a genius capable of doing Lorentz transformations.


----------



## Xzibit (Aug 12, 2012)

It's funny how you somehow insert some off-the-wall implications that you have in your head.

It's like I don't have to say anything and you insert whatever you want into the conversation. It's hilarious.

Do you still have your imaginary friend with you?


----------



## Benetanegia (Aug 12, 2012)

Xzibit said:


> You are funny how you somehow insert some off the wall implications that you have in your head.
> 
> Its like I dont have to say anything and you insert what ever you want into the conversation.  Its halarious.
> 
> Do you still have your imaginary friend with you ?



Now you are embarrassing yourself. You can't tell the difference between a Tesla and a Quadro, or between the pro and consumer markets, and now you don't even know what an API is, yet you continue arguing about things that are waaaay above your knowledge, posting irrelevant, stupid things.

And finally, since you can't fight the message, mainly because you have no freaking idea what we are talking about now, you start attacking the messenger. And doing it desperately and hopelessly. Lame. Laughable, but lame.

EDIT: I want some more laughs, so please try to explain how your previous post is relevant to SmallLux.
EDIT2: Stop embarrassing yourself and post something relevant. I know that you can't understand what an API is, and now I know that you don't even know what optimization is. A little tip that should help you through this conversation: OpenCL is NOT SmallLux/LuxMark and LuxMark is NOT OpenCL. No benchmark/application other than SmallLux/LuxMark can prove SmallLux optimization. That is what my examples were trying to explain to you. I had hopes that you would at least be able to understand that 2 games are not the same thing simply because they use the same API*, but no luck. You are so clueless you can't even understand that, so what's next?

*Ups, sorry, I know you need help: the API I'm talking about there is DirectX, both use DirectX.


----------



## Xzibit (Aug 12, 2012)

I love how you accuse me of all this stuff when you can't even read your own benchmarks that you post, speculating about driver-optimization improvements without knowing they are different settings.

Keep going, little kid.

I like your Harry Potter imagination. What else am I going to say?


I swear he has a mental disorder: he reads a post, something kicks in in his head, he changes it to mean whatever he wants it to say, and then he goes on a nerd rage. He needs help.
Let's all pray for him...


----------



## Frick (Aug 12, 2012)

Benetanegia said:


> compute benches are won by Nvidia *loosely* and some are won by AMD *loosely*. This does not indicate any advantage on the chip/architecture it only proves that optimization is much required and it's not there yet, which again only proves that GPU compute is a new technology.



And I assume these kind of cards are used for specific purposes so you can't just say "x is better than y, period". Like with most hardware really.


----------



## Benetanegia (Aug 12, 2012)

Frick said:


> And I assume these kind of cards are used for specific purposes so you can't just say "x is better than y, period". Like with most hardware really.



Exactly; for example, they asked for GK104-based Teslas when that was not in Nvidia's plans.

And professionals buy solutions in reality, not cards. That's why CUDA is so widely used: it works flawlessly (for the most part) and is updated all the time, so it's one or two steps ahead of OpenCL. The restriction to Nvidia cards is not a problem in the professional world, because professionals buy complete solutions (hardware + software) from integrators and the upgrade cycle is much slower.

And yes, once they get the solution, hardware + software (in this case an API), they will build the application that best suits their needs and optimize it for their hardware. A common mistake is to think that OpenCL code can be ported from one hardware architecture to another without any changes. The API is the same, so the code can essentially be ported, but the much-needed optimizations cannot. It's exactly the same as a game port from consoles to PC: of course you can simply port most of the code, and it will work, but performance will suck.
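A toy sketch of that point: the portable part is the algorithm, the per-architecture part is the tuning constant. The "tile" sizes below are invented stand-ins for per-architecture tuning knobs such as OpenCL work-group size, not vendor-published optima.

```python
# Hypothetical per-architecture tuning table: the same algorithm is
# run with a different blocking ("tile") size on each target.
TUNED_TILE = {"arch_a": 64, "arch_b": 32}  # invented values

def blocked_sum(data, tile):
    # The portable algorithm: the result is identical for any tile
    # size; only throughput on real hardware depends on the choice.
    total = 0
    for i in range(0, len(data), tile):
        total += sum(data[i:i + tile])
    return total

data = list(range(1000))
a = blocked_sum(data, TUNED_TILE["arch_a"])
b = blocked_sum(data, TUNED_TILE["arch_b"])
# Same answer either way; what doesn't port is the knowledge of
# which tile size runs fastest on each architecture.
```

Run the "arch_a"-tuned version on "arch_b" hardware and it still computes the right answer, just slowly, which is exactly the console-port analogy above.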

@Xzibit: The only one who does not know what he talks about is you. Again, explain how your link is related to my post. You don't even know; that's why you don't say anything besides personal attacks that you think will put the audience on your side. Don't you understand no one cares about you? Stop doing the show, no one's listening. And if you really want to demonstrate anything, then demonstrate it. Post something substantial, but first learn how graphics work, learn about the APIs involved, learn about optimization and how it must be done at the hardware level (i.e. GTX480 -> GTX580 or HD5870 -> HD6970), driver level (new driver) and application level (i.e. game patch). Learn about the different cards meant for different markets. Learn, in summary, everything, and then try to put up a post that doesn't embarrass yourself and the TPU community.

*In red: examples that I hope help you understand at least a little bit and help you out of that dark place you're in.



Xzibit said:


> Sorry i'm still getting a kick out of your interpretation of my post.
> 
> Its just amusing and sad at the same time.



No, clown, no one is interpreting your post. It's very clear. You don't know what OpenCL is, and that's why you post an unrelated link to a random CUDA(!) MxM implementation wanting to prove that SmallLux is optimized for Nvidia cards. You think there's a relation between completely unrelated apps and APIs(!), because you don't even know what each of those things is, which is beyond laughable.

For the nth time, driver optimization for OpenCL does NOT equate to SmallLux optimization. The AESEncrypt bench that I posted earlier IS an OpenCL bench, and the GTX580 wins hands down, with the GTX680 second. So if I were stupid enough, I could say it's because Fermi is God. But I don't. Why? Because a single benchmark means squat; the only sure thing is that the card is better optimized for that app. Just like Nvidia cards were better optimized in the second SmallLux graph.


----------



## Xzibit (Aug 12, 2012)

Benetanegia said:


> @ Xzbit: The only one who does not know what he talks about is you. Again, explain how your link is related to my post. You don't even know, that's why you don't say anything, besides personal attacks that you think will put the audience by your side. Don't you understand no one cares about you? Stop doing the show, no one's listening. And if you really want to demostrate anything, then demostrate it. Post something substantial, but first learn how graphics work, learn about the APIs involved, learn about optimization and how it mus be done in hardware level (i.e GTX480 -> GTX580 or HD5870 -> HD6970), driver level (new driver) and application level (i.e game patch). Learn about the different cards meant for different markets. Learn, in resume, everything and then try to put up a post that doesn't embarrass yourself and the TPU community.



Sorry, I'm still getting a kick out of your interpretation of my post.

It's just amusing and sad at the same time.


----------



## Benetanegia (Aug 12, 2012)

You need things simple, so you probably have problems understanding the above. Here you go, as simple as possible:

http://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680/14

> For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL AES encryption routine that AES encrypts/decrypts an 8K x 8K pixel square image file.



Just so you believe it's OpenCL accelerated.

What Anandtech says about SmallLuxGPU:



> Being an OpenCL title that NVIDIA isn’t taking any care to optimize for, the 7970GE simply blows the GTX 680 out of the water. It’s not even a contest here.



So 2 benches using OpenCL, same hardware, same drivers. One clearly won by AMD, one clearly won by Nvidia. It surely speaks volumes about which card is faster accelerating OpenCL apps. Right? Or...


----------



## Xzibit (Aug 12, 2012)

Benetanegia said:


> No, clown, no one is interpreting your post. It's very clear. You don't know what OpenCL is and that's why you post an unrelated link to a random CUDA(!) MxM inplementation wanting to prove that SmallLux is optmized for Nvidia cards. You think there's a relation between completely unrelated apps and API(!), because you don't even know what each of those things are, which is beyond laughable.
> 
> For the nth time, driver optimization for OpenCL does NOT equate to SmallLux optimization. The AESEncrypt bench that I posted earlier IS an OpenCL bench and GTX580 wins hands down, GTX680 is second. So if I was stupid enough, I could say it's because Fermi is God. But I don't. Why? Is anyones guess? Surely I don't say that because a single benchmark means squat, only thing sure is that it's better optimized for that app. Just like Nvidia cards were better optimized in the second SmallLux graph.



That's why it's amusing... You're making a correlation, inserting the preconceived notion of the conversation you wanted to have.

Why it's sad... You're continuing it of your own free will. Something definitely has to be wrong with you, since you picked it up and ran with it the way you did and continue to do so.


----------



## TheoneandonlyMrK (Aug 12, 2012)

Benetanegia said:


> So 2 benches using OpenCL, same hardware, same drivers. One clearly won by AMD, one clearly won by Nvidia. It surely speaks volumes about which card is faster accelerating OpenCL apps. Right? Or...



What I would say is that Nvidia used to spout about their cards' performance in scientific simulations, rendering, oil and gas, etc. Since Kepler came out with its poor double-precision performance, the scientific simulations have been dropped from their PR bumph, a point not lost on me. Neither is the fact that it takes two 680 GPUs to make a K10 Tesla card that beats the prior gen's single-precision performance. Doesn't sound like an epic design win to me. This is just my opinion, Beni, don't get bent up on it.


----------



## Benetanegia (Aug 12, 2012)

Xzibit said:


> Thats why its amusing... Your making a correllation. Inserting your preconceived notion of the conversation you wanted to conclude
> 
> Why its sad... Your continuing it at your free will.  Something definately has to be wrong with you since you picked it up and ran with it the way you are and are continue to do so.



Oh, poor troll. I make no correlation. If I post something and you quote it, say it's wrong, and link something, there's your correlation. One more thing for your to-learn list. You have a lot of catching up to do.

Now, if instead of these stupid posts you posted something or explained your point, since debating my point is apparently not your point (a nice way of trolling and going off topic, btw), you wouldn't look like an idiot.



theoneandonlymrk said:


> what i would say is that nvidia used to spout about their cards performance in scientific simulations, rendering, oil and gas etc, since kepler came out with its poor double precision performance the scientific simulations been dropped from their PR bumph, a point not lost on me, neither is the fact it takes them two 680 Gpus to make a K10 kesla card that beats its prior gen single precision performance, dosnt sound like an epic design win to me, this is just my oppinion beni, dont get bent up on it.



BS. All of it. In single precision the K10 destroys the previous gen badly, at the same TDP, which is why they included 2 GPUs: because they can. A DP card it was never meant to be. At least keep it realistic, man.

And GK104, the K10 and its compute capabilities (SP) have been marketed to hell and back, this thread being only one of many examples.

A random example from a 5-second Google search:
http://www.theregister.co.uk/2012/06/18/nvidia_isc_tesla_k10_performance/

Prefer TPU? Have it:

http://www.techpowerup.com/167862/N...nce-Milestones-For-Scientific-Simulation.html


----------



## Steevo (Aug 13, 2012)

http://ambermd.org/gpus/

Only Nvidia GPUs......... so they are saying: hey, on our track, which we won't let AMD on, we beat CPUs, so, ya know, good for us........ press release.


Press release is synonymous with thrown monkey excrement.

Oh, and notice all the things NOT supported on the GPU. So, in a half-assed sort of way, some part of the code might run on the GPU, but not all.




This is like when we found that changing the card ID to Nvidia improved frame rates in certain games and such, or when there were a lot of missing textures that reappeared with an .exe name change. Not that Nvidia has ever done anything questionable to make themselves look better, or any other company for that matter.


----------



## Benetanegia (Aug 13, 2012)

Steevo said:


> http://ambermd.org/gpus/
> 
> Only Nvidia GPU's.........so they are saying, hay, on our track that we won't let AND on we beat CPU's, so, ya know, good for us........ press release.
> 
> ...



He said Nvidia avoided promoting Kepler as a compute card for scientific simulations and such. It's false; Nvidia HAS promoted it, more than ever in fact.

I provided the first link that showed up searching for Kepler+compute or something like that, whatever I wrote in a 5-second search. I don't care what AMBER is, but just following the link you can see at first glance that it's CUDA, so obviously no AMD GPUs. It's irrelevant; professionals want solutions, and in this case CUDA+Tesla is what they chose.

EDIT: Tech forums are very funny, btw. Does anyone get upset when a car manufacturer uses a certain product from a certain brand? Like "OMFG they use Goodyear tyres instead of Michelin, they are sooo biased, bla bla." I've never heard such a thing in the same context as people complain about Nvidia/AMD/Intel/..., and it would be weird and stupid, but on computers you hear it every day. Company A uses the product of company B, end of story.


----------



## TheMailMan78 (Aug 13, 2012)

Benetanegia said:


> EDIT: Tech forums are very funny btw. Does anyone get upset when a car manufacturer uses a certain product from a certain brand? Like "OMFG they use Goodyear tyres instead of Michelin, they are sooo biased and bla bla." I've never heard such a thing, in the same context as people complain about Nvidia/AMD/Intel/... and it would be weird and stupid. but on computers you hear that everyday. Company A uses product of company B, end of story.



Never go to a Mustang forum.


----------



## Steevo (Aug 13, 2012)

TheMailMan78 said:


> Never go to a Mustang forum.


Only to
Trolololololololololololololol and then get banned.


----------



## aglick (Aug 13, 2012)

*new FirePro W8000/W9000 review*

http://www.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265.html

Not too shabby...


----------



## jtech (Aug 27, 2012)

*My toy is better than yours*

I am amazed at how snarky and cynical the comments on this site have become. Half the time users don't even read the whole article before posting something that inevitably translates into "my genitals are bigger than yours." Once upon a time comments would provide real-world input on a topic. Now it's teenyboppers drinking Mt. Dew, posting during breaks from Skyrim or CoD. As a longtime engineer, I'm growing tired of comments riddled with the misplaced aggression of kids that got beat up on the playground too much... but I digress.


----------



## TheMailMan78 (Aug 27, 2012)

jtech said:


> I am amazed at how snarky and cynical the comments on this site have become. Half the time users don't even read the whole article before posting something that inevitably translates into "my genitals are bigger than yours." Once upon a time comments would provide real-world input on a topic. Now it's teenyboppers drinking Mt. Dew, posting during breaks from Skyrim or CoD. As a longtime engineer, I'm growing tired of comments riddled with the misplaced aggression of kids that got beat up on the playground too much... but I digress.



Ah I guess you're new to the Internet. Welcome!


----------



## jtech (Aug 27, 2012)

TheMailMan78 said:


> Ah I guess you're new to the Internet. Welcome!



Ha! Your Mustang quote was right on point, man! At least read the effing article before commenting...


----------

