Wednesday, January 28th 2009
NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research
NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University's computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed "NVIDIA Fellow."
"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company.""I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years - from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions."
About Bill Dally
At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered "wormhole" routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences, the IEEE and the ACM, and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks Digital Systems Engineering and Principles and Practices of Interconnection Networks.
About David Kirk
David Kirk has been with NVIDIA since January 1997. His contributions include leading NVIDIA graphics technology development for today's most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist and Head of Technology for Crystal Dynamics, a video game developer. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.
Source:
NVIDIA
"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company.""I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years - from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions."
About Bill Dally
At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing chip which pioneered "wormhole" routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences. He is also a Fellow of the IEEE and the ACM and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks, Digital Systems Engineering and Principles and Practices of Interconnection Networks.
About David Kirk
David Kirk has been with NVIDIA since January 1997. His contribution includes leading NVIDIA graphics technology development for today's most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist, Head of Technology for Crystal Dynamics, a video game manufacturing company. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.
44 Comments on NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research
How do you know I'm not qualified? Science is only as good as the methods through which it is conducted. Stanford lost all the respect I had for them when they changed their scoring system to focus on results rather than accuracy.
The GPU is already overburdened with graphics processing, and then they add insult to injury by piling physics processing on top of it. That's practically an infinite loop: things get more complex with no real results. Having a separate card for physics does make some sense, but extra physics usually means more objects to render on screen, which ultimately comes back to the GPU not being fast enough. I get what you're saying about fixing the priority issues, but ultimately it just means more and/or bigger GPUs (or other high-FLOP chips) because they want to increase the workload.
Yeah, I think NVIDIA is in serious trouble. The more I think about it, the more likely it seems that Intel and AMD will put monster FPUs in their chips, completely removing the need for a GPU.
And yes, I have faith in that guy, because he's an eminence in his field. I knew of him some time ago, and he's apparently a pioneer of most modern and successful massively parallel computing algorithms and systems. He was at this long before any GPGPU initiative was started, and I guess it was his own team at Stanford that first thought of implementing it. In this case, a team of experts in parallel computing, architects of many supercomputer designs, decided the GPU was a good option for parallel computing. So yes, I believe in GPGPU and I have faith in the guy. GPU overburdened? The fastest cards right now are nothing but OVERKILL, and that's in games and applications that are far from being well tuned for the new hardware.
GPUs have always been more than rendering machines in my heart; they are gaming machines, and gaming comprises graphics, sound, physics, AI, story and gameplay. In my book all of those are necessary and share the same importance. Today the physics department is lacking badly, very badly, with no real improvements since the late '90s. CPUs can't handle it, and the GPU is heading to a place where it can fix that, so I'm very happy; I don't care if that improvement comes at the expense of huge graphics advancements. I don't need more than 50 sustained fps, I don't need more than 1920x1200 pixels, I don't need more than 4x anti-aliasing, I don't need 2 million poly characters, nor 20 Mpixel textures. I need lifelike games, and that can only come with far greater physics, the kind that only PhysX is addressing right now and has been for a long time, teaching developers how to use it in GPGPU code so that when DX11 and OpenCL arrive they know what to do. GPUs are going to become more powerful with each generation, that's a given, and that power needs to be used in an intelligent way, not on increasing resolution, AA and FPS beyond what the human eye can discern while playing.
Almost all 3D software that uses GPUs for their original purpose runs the card at 100%. This is why we measure their performance in FPS. If a card can't maintain 30 fps or more at a given load, it is being overburdened.
Oh, and just one clarification: GPUs don't really run at 100% all the time, even when they show 100% load in any monitoring program. The computing part of a GPU consists of SPs, TMUs and ROPs, and the graphics pipeline needs all of them. So if all the ROPs are busy, the card is at 100% load even if 50% of the SPs or TMUs are free, and the same goes for the other two parts. If they make GPUs in a way that those SPs and TMUs can be easily accessed without having to go through the graphics pipeline, we have a clear winner in graphics+GPGPU applications. TBH I'm not very sure, but GT200 IS somewhat free in that sense, one step closer in that department than any other GPU, and GT300 with its MIMD cores will surely be almost completely free. Ati will surely have something too, though maybe not, because it's owned by AMD (maybe that's the reason Stanford and F@H moved their focus from Ati to Nvidia? The timeline coincides...).
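Purely as an illustration of that "reach the SPs without the graphics pipeline" idea, here is a minimal CUDA sketch, not from the discussion itself; the kernel name, sizes and values are made up. A compute kernel like this runs on the SPs directly and never touches the TMU/ROP stages:

#include <cuda_runtime.h>

// Hypothetical compute-only kernel: the work goes straight to the SPs,
// with no TMUs or ROPs involved, i.e. no trip through the graphics pipeline.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i]; // one multiply-add per thread
}

int main()
{
    const int n = 1 << 20; // 1M elements, an arbitrary example size
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    // ... fill x and y with real data before launching ...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize(); // wait for the SPs to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}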
The wheel is rolling and no one will stop it, not even Intel ATM, IMO. They'll have to coexist.
Basically what you are saying is to mandate vsync or some other limiter (which are almost universally buggy in games and come with a pretty significant performance penalty) in order to run the card at less than 100%. Doing so frees up some clock cycles which could be used for something else. But why sacrifice GPU performance when most CPUs already run well under 100% while gaming? Why not continue using Havok-based CPU physics? Why bother stealing GPU clocks for physics?
As for the other issue, no, I'm not saying that by any means. I'm saying that GPUs are probably NOT at 100% load nowadays, in the sense of using all of their ALUs, especially the ones in the SPs, and especially when high-resolution AA is used. If there's a way of reaching them without going through the graphics pipeline (and the more I remember that article, the more I think GT200 already doesn't have to), performance wouldn't be hurt at all, except by the added details that need to be rendered, but that's the point of better graphics in my book: increasing detail.
CPU-based physics (Havok) hasn't improved in almost 10 years and never will, at least not until Intel can do it better than anyone else. Still, GPUs (guided by the CPU) are far better suited for physics calculations and have 20x the raw power. GPU physics >>>> CPU physics, always. You have clearly stated you don't need or want better physics, but many do, and many of those who don't either don't understand what faster physics means or have never seen a good example of massive physics in action.
EDIT: For more examples of why we should "steal" GPU clocks for things other than graphics, take COD4 or Bioshock (UT3, HL2, L4D). They still have good graphics, and what's the difference between playing them on a GF 7900/X1900 card or a GTX 295? NONE, really, beyond resolution and AA levels, as all of them can run the details at MAX and play smoothly already. Lower the details a bit, and while the difference still isn't huge, you can play them even on a 7600 GT. We are talking about cards with a power difference of 8x to 12x, and that doesn't really make the games better. Something is wrong there...
Second off, Ford, if you don't want to fold, don't. Nobody cares if you have a personal vendetta against it or nVidia, and if they do, you can discuss it in a different post. I personally don't spend every waking minute gaming. Sometimes I surf the internet (such as now). Why not have my graphics card folding when I'm doing this? FPS means nothing while in Firefox. Turn it off when you game; that's what I do. I'm still around #15 among the top producers of TPU doing that. If people want to do it, let them. The mafioso isn't about to storm into your house and break your kneecaps because I'm folding.
Oh, and "The IBM T221-DG5 was discontinued in June 2005." source You can keep gaming on a resolution that's been discontinued. Maybe in the future it will come back when the graphics cards can support it, but until then I guess my 1280x1024 CRT will have to do me good. Not sure how I will survive though, I mean it is gaming after all, and games are serious business.
Edit: To clarify before I get some fellow folders breathing down my neck: top 15 in PPD, not overall rankings.
Back to it though:
I do hope that him being brought in by the green team is because he has more to add. If not, then it's simply a marketing endeavor. It put their name out there and sparked some debate, now didn't it?
Ford, that won't hurt graphics, stay calm; AFAIK Kirk wasn't really too focused on development lately anyway.
Scientific-grade physics calculations really only have a home in simulation games, which have been waning in popularity over the years. A higher frame rate means next to no hiccups. I can't name a single recent game that has zero hiccups, but if I go back and play oldies like Mafia, they run smooth as butter. I think some people are more sensitive to those hiccups than others. I, for one, can't stand them. I'd rather a game look like crap and play without hiccups than look brilliant and get them all the time.
I think you underestimate how much power it takes to get dozens of textures on screen with all the real-time rendering that's taking place. Real-time ray tracing is the direction NVIDIA needs to be going, not general processing. Why doesn't NVIDIA buddy up with Intel and stick a GT200 core on an Intel chip? Would that not be more useful? That monitor cost something like $2,000 USD, so not many were willing to buy it. I believe higher DPI is the direction the industry will go when the cost of producing high-DPI monitors comes down; however, it also requires an exponential increase in graphics capabilities. One can't thrive without the other.
There is no demand for stream processing in mainstream computers. IBM would probably love the technology, but because NVIDIA is too tight-lipped about everything, they'll just keep on building 100,000+ processor supercomputers. The benefit of the 100,000+ processor approach is that they aren't only high in terms of FLOPS; they're also very high in arithmetic operations per second as well.
Development of The Sims was put off for over a decade because there wasn't enough processing power. Spore was put off by at least two decades for the same reason. There are lots of ideas out there for games that haven't been created because there still isn't enough power in computers. The next revolution I'm looking for is text-to-speech algorithms. Just like GPUs, that will probably require its own processor.
NVIDIA has three main "card" lines: GeForce (GPU-Direct3D-games), Quadro (GPU-OpenGL-CAD) and, recently, Tesla (GPGPU-CUDA-supercomputing). NVIDIA hired Bill Dally, whose expertise is in the Tesla department. NVIDIA has chosen which path it wants to pursue research in, and it isn't gaming or CAD. This is why I am disappointed. I love my games, and NVIDIA is shifting focus away from gaming. I have no use for a GPGPU.
"Yes, Regis, that is my final answer."
Physics, it's not about anything you said (jumping and all, lol); it's about fully destructible environments: being able to make a hole in a door and shoot through it, break a single brick out of a wall and do the same, etc. It's about smoke being displaced by enemies and weapons/explosions, so an enemy inside smoke is no longer "invisible" and you can dissipate it by throwing a grenade, or smoke being dissipated and moved by actual wind so people smart enough can take advantage of that in BF2-like games, etc. None of that can be done on CPUs; you can do it on a GPU along with the graphics.
What's more, 60 fps is all you need, and current high-end cards play all games at 100+ fps, except Crysis, which is more bottlenecked by the CPU than the GPU (this is fact, I have seen it at my house: 8800GT + Q6600@2.4GHz >>>>> A64 X2@2.8GHz + 9800GTX+; you do need the GPU too, don't get me wrong). And even Crysis runs at good framerates if you don't enable AA. So what will the next generation bring? And the next? I know I don't want the power of next-gen cards to go to waste when the only thing they provide is 3000x2000 8xAA @ 100+ fps. I want more detail in the game, and physics is detail, the kind that I want, and you know I'm happy with the decisions Nvidia is making because they seem to be trying to give me what I want.
Imagine if your monitor were like a membrane that can be manipulated by inputs. A bump map, as it were, where 0 = infinite, 1 is the farthest away, 128 = neutral, and 255 is the closest. The actual programming alters specific points on the membrane, causing them to move forward or away. As such, it could not only produce true-to-life cylinders and curves, it also wouldn't have any problem with destructible environments because it doesn't define anything that actually needs to fragment.
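Just to pin down the value convention described above (this is only an illustrative sketch; the function, parameters and ranges are invented for the example, nothing like this exists), an encoding of that 0/1/128/255 scheme could look like:

// Hypothetical encoding for the "membrane" depth map described above:
// 0 = infinitely far, 1 = the farthest finite depth, 128 = the neutral
// screen plane, 255 = closest to the viewer. All ranges are assumptions.
__host__ __device__ unsigned char encode_membrane_depth(
    float z,            // signed distance from the screen plane (negative = behind)
    float behind_range, // farthest representable distance behind the plane
    float front_range)  // nearest representable distance in front of the plane
{
    if (z <= -behind_range) return 0;             // beyond the far limit counts as "infinite"
    if (z < 0.0f) {
        float t = 1.0f + z / behind_range;        // (-behind_range, 0) -> (0, 1)
        return (unsigned char)(1 + t * 127.0f);   // maps to 1..128
    }
    float t = z / front_range;                    // [0, front_range] -> [0, 1]
    if (t > 1.0f) t = 1.0f;
    return (unsigned char)(128 + t * 127.0f);     // maps to 128..255
}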
The only practical way to do this is how it was demonstrated in Red Faction: make predetermined chunks of breakable material, essentially a scripted sequence. Particles are a completely different problem unto themselves. The most frequently used solution is sprites, and even sprites can overload older computers due to the layering of images and the like. The problem with particles is, again, sheer numbers (and therefore huge memory demands). Physics does play a relatively significant part here, especially when it comes to interacting with the particles; however, the physics would have to be greatly simplified just to be able to update all of them.
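To make that "greatly simplified physics" point concrete, here is a rough CUDA sketch of the kind of cut-down per-particle update a GPU could run; the kernel, data layout and constants are illustrative assumptions, not taken from any actual engine:

#include <cuda_runtime.h>

// Deliberately simplified per-particle physics: gravity plus crude drag,
// with no particle-to-particle interaction, so each thread updates one
// particle independently.
struct Particle { float3 pos; float3 vel; };

__global__ void update_particles(Particle *p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float g = -9.8f;    // gravity on the y axis
    const float drag = 0.99f; // crude air resistance

    Particle q = p[i];
    q.vel.y += g * dt;
    q.vel.x *= drag; q.vel.y *= drag; q.vel.z *= drag;
    q.pos.x += q.vel.x * dt;
    q.pos.y += q.vel.y * dt;
    q.pos.z += q.vel.z * dt;
    p[i] = q; // one read and one write per particle per frame
}
// Even this stripped-down version moves a lot of data: a million particles
// is roughly 24 MB of state that has to be read and written every frame,
// which is where the huge memory demands come from.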
Thinking seriously about it, I can't think of anything capable of producing realistic 3D particle flow in real time. Even a supercomputer can't pull it off without a great deal of time (as in movies). For the time being, sprites are the way to go.
If you have seen the Kthulu PhysX demo, there's something called deformable meshes, and you don't need to split the polys the way you said. I wasn't talking about indentation yet, though, but about actual bricks, columns and all. Wood could be like cloth, but with different properties, for example.
You are living in the past, man; current technology and hardware can make everything I'm saying possible, not to mention what is still to come, which is what Bill Dally has been hired for.
And sprites?? Yeah, you definitely live in the past. Fluids. <- Pay attention to what happens to the barrel in the water, and when the oxygen cylinder falls into the water after he shoots it. Fluids. Fluids.
I don't want to continue this off-topic. GPGPU and PhysX can be done on today's high-end cards easily, and the guy has been hired to make it even easier. It's not a matter of "being able"; it's a matter of when, and of "will the others let us do it?".
Oh, and they don't just move; every single particle interacts with the others, though it's difficult to see that in a YouTube video, in any video.