Wednesday, January 28th 2009

NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University's computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed "NVIDIA Fellow."

"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company."
"I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years - from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions."

About Bill Dally
At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing chip which pioneered "wormhole" routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences. He is also a Fellow of the IEEE and the ACM and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks, Digital Systems Engineering and Principles and Practices of Interconnection Networks.

About David Kirk
David Kirk has been with NVIDIA since January 1997. His contributions include leading NVIDIA graphics technology development for today's most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist and Head of Technology for Crystal Dynamics, a video game developer. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.
Source: NVIDIA

44 Comments on NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

#26
FordGT90Concept
"I go fast!1!11!1!"
You clearly have a lot of faith in him. Time will tell if it is well placed.


How do you know I'm not qualified? Science is only as good as the methods through which it is conducted. Stanford lost all the respect I had for them when they changed their scoring system to focus on results rather than accuracy.


The GPU is already overburdened with graphics processing, and then they add insult to injury by piling physics processing on top of it. That's practically an infinite loop: things get more complex with no real results. Having a separate card for physics does make some sense, but extra physics usually means more objects to render on screen, which ultimately comes back to the GPU not being fast enough. I get what you're saying about fixing the priority issues, but ultimately it just means more and/or bigger GPUs (or other high-FLOP chips) because they want to increase the workload.


Yeah, I think NVIDIA is in serious trouble. The more I think about it, the more likely it is for Intel and AMD to put monster FPUs in their chips, completely removing the need for a GPU.
Posted on Reply
#27
DarkMatter
FordGT90Concept: You clearly have a lot of faith in him. Time will tell if it is well placed.


How do you know I'm not qualified? Science is only as good as the methods through which it is conducted. Stanford lost all the respect I had for them when they changed their scoring system to focus on results rather than accuracy.
Just because they changed the way points are given doesn't mean they have changed the focus to a less accurate method. It's completely unrelated. They changed the scoring system to attract more people, and that's always a good thing. They continue doing the exact same thing.

And yes, I have faith in that guy, because he's an eminence in his field. I knew of him from some time ago, and he's apparently a pioneer of most modern and successful massively parallel computing algorithms and systems. He was at it long before any GPGPU initiative was started, and I guess it was his own team at Stanford who first thought of implementing it. In this case, a team expert in parallel computing and architects of many supercomputer architectures decided the GPU was a good option for parallel computing. So yes, I believe in GPGPU and I have faith in the guy.
The GPU is already overburdened with graphics processing, and then they add insult to injury by piling physics processing on top of it. That's practically an infinite loop: things get more complex with no real results. Having a separate card for physics does make some sense, but extra physics usually means more objects to render on screen, which ultimately comes back to the GPU not being fast enough. I get what you're saying about fixing the priority issues, but ultimately it just means more and/or bigger GPUs (or other high-FLOP chips) because they want to increase the workload.


Yeah, I think NVIDIA is in serious trouble. The more I think about it, the more likely it is for Intel and AMD to put monster FPUs in their chips, completely removing the need for a GPU.
GPU overburdened? The fastest cards right now are nothing but OVERKILL, and that's in games and applications that are far from well optimized for the new hardware.

GPUs have always been more than rendering machines in my heart; they are gaming machines, and gaming comprises graphics, sound, physics, AI, story and gameplay. In my book all those are necessary and share the same importance. Today the physics department is lacking badly, very badly, with no real improvements since the late '90s. CPUs can't handle them, and the GPU is heading to a place where it can fix that, so I'm very happy; I don't care if that improvement comes at the expense of huge graphics advancements. I don't need more than 50 sustained fps, I don't need more than 1920x1200 pixels, I don't need more than 4x anti-aliasing, I don't need 2-million-poly characters, nor 20-Mpixel textures. I need lifelike games, and that can only come with far greater physics, the kind that only PhysX is addressing right now (and has been for a long time), teaching developers how to use it in GPGPU code so that when DX11 and OpenCL arrive they know what to do. GPUs are going to become more powerful with each generation, that's a given, and that power needs to be used in an intelligent way, not on increasing resolution, AA and FPS beyond what the human eye can discern while playing.
Posted on Reply
#28
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: GPU overburdened? The fastest cards right now are nothing but OVERKILL, and that's in games and applications that are far from well optimized for the new hardware.
Try playing a modern game on the IBM T220 at 3840×2400. Hell, one GTX 295 isn't enough; you'd have to have two, and you'd still get pathetic framerates. Four might work...

Almost all 3D software that uses GPUs for their original purpose runs the card at 100%. This is why we measure their performance in FPS. If a card can't maintain 30 fps or more at a given load, it is being overburdened.
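(Just to put a rough number on that frame-budget idea: a minimal Python sketch, with made-up timings, showing how any extra per-frame work comes out of the same budget.)

```python
# Frame-budget sketch: at a target frame rate, every millisecond of extra
# per-frame work (physics, CUDA, ...) comes out of the same budget.
# All timings here are made up for illustration.
def frame_budget_ms(target_fps: float) -> float:
    return 1000.0 / target_fps

budget = frame_budget_ms(30)   # ~33.3 ms per frame at 30 fps
render_ms = 28.0               # hypothetical rendering time per frame
physics_ms = 8.0               # hypothetical GPU physics time per frame
used = render_ms + physics_ms
print(f"budget {budget:.1f} ms, used {used:.1f} ms ->",
      "overburdened" if used > budget else "fine")
```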
Posted on Reply
#29
DarkMatter
FordGT90Concept: Try playing a modern game on the IBM T220 at 3840×2400. Hell, one GTX 295 isn't enough; you'd have to have two, and you'd still get pathetic framerates. Four might work...

Almost all 3D software that uses GPUs for their original purpose runs the card at 100%. This is why we measure their performance in FPS. If a card can't maintain 30 fps or more at a given load, it is being overburdened.
Ehh hello?? Who needs 3840x2400?

Oh, and just one clarification: GPUs don't really run at 100% all the time, even if it's 100% load according to any monitoring program. You know that the computing part of a GPU consists of SPs, TMUs and ROPs, and the graphics pipeline needs all of them. So if all the ROPs are busy, the card is at 100% load even if 50% of the SPs or TMUs are free; same with the other two parts. If they make GPUs in a way that those SPs and TMUs can be easily accessed without having to go through the graphics pipeline, we have a clear winner in graphics+GPGPU applications. TBH I'm not very sure, but GT200 IS somewhat free in that sense, one step closer in that department than any other GPU, and GT300 with its MIMD cores will surely be almost free. Ati will surely have something too, though maybe not, because it's owned by AMD (maybe the reason that Stanford and F@H moved their focus from Ati to Nvidia? The timeline coincides...).
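(A minimal sketch of that argument, with invented unit counts and per-frame demand, just to show how the bottleneck unit pins "load" at 100% while the other units sit partly idle.)

```python
# Toy model: the frame rate is set by whichever unit type is the bottleneck,
# so the other unit types are partly idle even though the card reports 100% load.
# Unit counts and demanded work are invented for illustration only.
units = {"SP": 240, "TMU": 80, "ROP": 32}   # how many of each unit the GPU has
demand = {"SP": 180, "TMU": 80, "ROP": 32}  # how many are actually needed per cycle

bottleneck = max(demand[u] / units[u] for u in units)  # the unit type pinned at 100%
for u in units:
    busy = (demand[u] / units[u]) / bottleneck
    print(f"{u}: {busy:.0%} busy")
# -> SP: 75% busy, TMU: 100% busy, ROP: 100% busy; the idle SP capacity is
#    what a separate GPGPU path (physics, CUDA) could in principle use.
```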

The wheel is rolling and no one will stop it, not even Intel can ATM, IMO. They'll have to coexist.
Posted on Reply
#30
FordGT90Concept
"I go fast!1!11!1!"
Who doesn't? When the pixels are that small, AA makes no visible difference.


Basically what you are saying is to mandate vsync or some other limiter (which are almost universally buggy in games and come with a pretty significant performance penalty) in order to run the card at less than 100%. Doing so frees up some clock cycles which could be used for something else. But why sacrifice GPU performance when most CPUs run well under 100% while gaming as is? Why not continue using Havok-based CPU physics? Why bother with stealing GPU clocks for physics?
Posted on Reply
#31
DarkMatter
FordGT90Concept: Who doesn't? When the pixels are that small, AA makes no visible difference.


Basically what you are saying is to mandate vsync or some other limiter (which are almost universally buggy in games and come with a pretty significant performance penalty) in order to run the card at less than 100%. Doing so frees up some clock cycles which could be used for something else. But why sacrifice GPU performance when most CPUs run well under 100% while gaming as is? Why not continue using Havok-based CPU physics? Why bother with stealing GPU clocks for physics?
No one really needs much more than 1920x1200.

As for the other issue, no, I'm not saying that by any means. I'm saying that GPUs are probably NOT at 100% load nowadays, in the sense of using all of their ALUs, especially the ones in the SPs, and especially when high-resolution AA is used. If there's a way of reaching them without going through the graphics pipeline (and the more I remember this article, the more I think GT200 already doesn't), performance wouldn't be hurt at all, only by the added details that need to be rendered, but that's the point of better graphics in my book: increasing details.

CPU-based physics (Havok) haven't improved in almost 10 years and never will, at least until Intel can do them better than anyone else. Still, GPUs (guided by the CPU) are far better suited for physics calculations and have 20x the raw power. GPU physics >>>> CPU physics, always. You have clearly stated you don't need or want better physics, but many do, and many who don't are people who don't understand what faster physics means or have never seen a good example of massive physics in action.

EDIT: For more examples of why we should "steal" GPU clocks for things other than graphics, take COD4 or Bioshock (UT3, HL2, L4D). They still have good graphics, and what's the difference between using a GF 7900/X1900 card or a GTX 295 to play them? NONE really, besides resolution and AA levels, as all of them can have the details at MAX and play smoothly already. Lower the details a bit, and while the difference still isn't huge, you can play them even on a 7600 GT. We are talking about cards with a power difference of 8x to 12x, and that doesn't really make the games better. Something is wrong there...
Posted on Reply
#32
El Fiendo
First off, this is a news post, so stop crapping it up. This has turned from the legitimacy of folding into an argument about the power of GPUs (and the term GPGPU).

Second off, Ford, if you don't want to fold, don't. Nobody cares if you have a personal vendetta against it or NVIDIA. And if they do, you can discuss it in a different post. I personally don't spend every waking minute gaming. Sometimes I surf the internet (such as now). Why not have my graphics card folding when I'm doing this? FPS means nothing while in Firefox. Turn it off when you game; that's what I do. I'm still around #15 for the top producers of TPU doing that. If people want to do it, let them. The mafioso isn't about to storm into your house and break your kneecaps because I'm folding.

Oh, and "The IBM T221-DG5 was discontinued in June 2005." source You can keep gaming on a resolution that's been discontinued. Maybe in the future it will come back when the graphics cards can support it, but until then I guess my 1280x1024 CRT will have to do me good. Not sure how I will survive though, I mean it is gaming after all, and games are serious business.

Edit: To clarify, before I get some fellow folders breathing down my neck: top 15 in PPD, not overall rankings.
Posted on Reply
#33
DarkMatter
El Fiendo: First off, this is a news post, so stop crapping it up. This has turned from the legitimacy of folding into an argument about the power of GPUs (and the term GPGPU).
Yeah, that's true, sorry we went off-topic, even though I don't think we are crapping it up. I also think I've been pretty much on-topic, in the sense that everything I have mentioned is probably what Nvidia was after when they decided to hire him.
Posted on Reply
#34
El Fiendo
True, for the most part they were on topic, just sometimes a little skewed. Sorry, I didn't mean to play police (because I don't have the powers, lol); I just didn't want to see it escalate, the thread get locked, etc.

Back to it though:
I do hope that him being brought in by the green team is because he has more to add. If not, then it's simply a marketing endeavor. It put their name out there and sparked some debate, now didn't it?
Posted on Reply
#35
DarkMatter
El Fiendo: True, for the most part they were on topic, just sometimes a little skewed. Sorry, I didn't mean to play police (because I don't have the powers, lol); I just didn't want to see it escalate, the thread get locked, etc.

Back to it though:
I do hope that him being brought in by the green team is because he has more to add. If not, then it's simply a marketing endeavor. It put their name out there and sparked some debate, now didn't it?
I think there's still enough to do in stream computing and GPGPU that he will have plenty of work ahead of him. Nvidia definitely wants GPGPU to become mainstream, so if he wants to do something they will let/help him.

Ford, that won't hurt graphics, stay calm; AFAIK Kirk wasn't really too focused on development lately anyway.
Posted on Reply
#36
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: I'm saying that GPUs are probably NOT at 100% load nowadays...
There are only two situations where they aren't: 2D and vsync (depending on how it is limited). This is why the frame rates get hideous when you try playing a game with anything CUDA running but, at the same time, why messing around on your desktop isn't a problem.
DarkMatter: CPU-based physics (Havok) haven't improved in almost 10 years and never will, at least until Intel can do them better than anyone else. Still, GPUs (guided by the CPU) are far better suited for physics calculations and have 20x the raw power. GPU physics >>>> CPU physics, always. You have clearly stated you don't need or want better physics, but many do, and many who don't are people who don't understand what faster physics means or have never seen a good example of massive physics in action.
Because physics, in terms of gaming, is all about getting a "passing grade." That is, when you do something like jump, does it react in a way that is believable? Or when you fire a bullet, does the weapon recoil as you expect, and does the bullet behave as expected/wanted in terms of trajectory? I can't name one game that actually had bad pseudo-physics. Really, what do gamers gain by having scientific-grade physics calculations? Cartoon physics are half the fun in a lot of games. For instance, in Nightfire, the fluidity of the physics allowed gunplay to be more like an elegant salsa dance rather than the gritty, slow movement that Quantum of Solace has. Most people who still play Nightfire are repulsed by Quantum of Solace's attempt to be realistic.

Scientific-grade physics calculations really only have a home in simulation games, which have been waning in popularity over the years.
DarkMatter: For more examples of why we should "steal" GPU clocks for things other than graphics, take COD4 or Bioshock (UT3, HL2, L4D). They still have good graphics, and what's the difference between using a GF 7900/X1900 card or a GTX 295 to play them? NONE really, besides resolution and AA levels, as all of them can have the details at MAX and play smoothly already. Lower the details a bit, and while the difference still isn't huge, you can play them even on a 7600 GT. We are talking about cards with a power difference of 8x to 12x, and that doesn't really make the games better. Something is wrong there...
A higher frame rate, which means next to no hiccups. I can't name a single game in recent times that has zero hiccups, but if I go back and play oldies like Mafia, they run smooth as butter. I think some people are more sensitive to those hiccups than others. I, for one, can't stand them. I'd rather it look like crap and play without hiccups than look brilliant and get them all the time.

I think you underestimate how much power it takes to get dozens of textures on screen with all the real-time rendering that's taking place. Real-time ray tracing is the direction NVIDIA needs to be going, not general processing. Why doesn't NVIDIA buddy up with Intel and stick a GT200 core on an Intel chip? Would that not be more useful?
El Fiendo: Oh, and "The IBM T221-DG5 was discontinued in June 2005." source
It cost like $2,000 USD, so not many were willing to buy it. I believe higher DPI is the direction the industry will go when the cost of producing high-DPI monitors comes down; however, it also requires an exponential increase in graphics capabilities. One can't thrive without the other.
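(Pixel count alone already makes the point; a two-line sketch:)

```python
# Raw pixel counts: the T220/T221 panel pushes about 4x the pixels of a
# typical 1920x1200 display, before any AA or shading cost is considered.
print((3840 * 2400) / (1920 * 1200))   # -> 4.0
```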


There is no demand for stream processing in mainstream computers. IBM would probably love the technology, but because NVIDIA is too tight-lipped on everything, they'll just keep on building 100,000+ processor supercomputers. The benefit of the 100,000+ processor approach is that they aren't only high in terms of FLOPs, they're also very high in arithmetic operations per second as well.


The Sims was put off in development for over a decade because there wasn't enough processing power. Spore was put off by at least two decades for the same reason. There are lots of ideas out there for games that haven't been created because there still isn't enough power in computers. The next revolution I'm looking for is text-to-voice algorithms. Just like GPUs, that will probably require its own processor.
Posted on Reply
#37
FordGT90Concept
"I go fast!1!11!1!"
Let me break it down real simple...

NVIDIA has three main "card" lines: GeForce (GPU-Direct3D-games), Quadro (GPU-OpenGL-CAD), and recently, Tesla (GPGPU-CUDA-supercomputing). NVIDIA hired Bill Dally, whose expertise is in the Tesla department. NVIDIA selected the path they wish to pursue research in, and it isn't gaming or CAD. This is why I am disappointed. I love my games, and NVIDIA is shifting focus away from gaming. I have no use for a GPGPU.


"Yes, Regis, that is my final answer."
Posted on Reply
#38
El Fiendo
Just because they made the guy Vice President doesn't mean they're going to drop their other lines like hot potatoes. He's not going to spell the end of gaming. I doubt you're an expert on how this technology works, so did you ever think that maybe his work with parallel computing might just mean greater gains in video games? And besides, it's not like they have only two guys doing research and one of them is solely on GPGPU.
Posted on Reply
#39
DarkMatter
Ford, you don't listen, you don't read. It doesn't matter if the chip is at 100% load; it always has tons of free ALUs. Be it in the SPs, in the TMUs or in the ROPs, but most of the time in the SPs. It says 100% because it can't run more threads, since each thread depends on every unit type in the pipeline, that is, at least one ROP, one TMU and one SP. If all ROPs are in use it says 100%, if all TMUs are in use it says 100%, if all SPs are in use (very unlikely) it says 100%, but the card is NOT really at 100% load. Make a way to access those free units and you have GPGPU + GPU with no problem. MIMD in GT300 will erase any overhead related to context changes, so GPU and CUDA will be able to run smoothly at the same time.

Physics, it's not about anything you said (jumping and all, lol); no, it's about fully destructible environments, so that you can make a hole in a door and shoot through it, break one brick in a wall and do the same, etc. That the smoke is displaced by enemies and weapons/explosions, so an enemy inside smoke is no longer "invisible", or you can dissipate it by throwing a grenade, or the smoke is dissipated and moved by actual wind so people smart enough can take advantage of that in BF2-like games, etc. None of that can be done on CPUs; you can on a GPU, along with the graphics.

More, 60 fps is all you need, and current high end plays all games at 100+ fps, except Crysis, which is bottlenecked more by the CPU than the GPU (this is fact, I have seen it in my house: 8800GT+Q6600@2.4GHz >>>>> A64X2@2.8GHz+9800GTX+, though you do need the GPU too, don't get me wrong). And even Crysis runs at good framerates if you don't enable AA. So what will come for the next gen? And the next? I know that I don't want the power of next-gen cards to go to waste because the only thing they provide is 3000x2000 8xAA @ 100+ fps. I want more details in the game, and physics is details, the ones that I want, and you know I'm happy with the decisions Nvidia is making because they seem to be trying to give me what I want.
Posted on Reply
#40
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: Physics, it's not about anything you said (jumping and all, lol); no, it's about fully destructible environments, so that you can make a hole in a door and shoot through it, break one brick in a wall and do the same, etc.
The problem there isn't physics, it's graphics overload. Every time you strike a surface, it causes an indentation, no? That surface was comprised of a single triangle (see attached image). It will fragment into no less than nine additional triangles. If you add another indentation in another one of those triangles, it will add another 9 triangles. So on and so forth, until there is nothing left to damage. This isn't a physics problem; it is a problem involving memory and a whole lot of calculations fragmenting the surface. Further complicating it is dealing with collisions with said surface. The way to address this issue is not with GPGPU; it's a new kind of processor altogether that doesn't operate on binary...
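(A quick back-of-the-envelope sketch of how that fragmentation snowballs, using the "nine additional triangles per hit" figure above; the byte sizes are rough assumptions.)

```python
# Each impact fragments one triangle into ~9 extra triangles, so geometry
# (and the memory for it) grows with every hit. Sizes are rough assumptions.
EXTRA_TRIANGLES_PER_HIT = 9
BYTES_PER_TRIANGLE = 3 * 3 * 4   # 3 vertices * 3 floats * 4 bytes, no indexing or attributes

def triangles_after(hits: int, start: int = 1) -> int:
    return start + EXTRA_TRIANGLES_PER_HIT * hits

for hits in (1, 100, 10_000, 1_000_000):
    tris = triangles_after(hits)
    print(f"{hits:>9} hits -> {tris:>9} triangles, ~{tris * BYTES_PER_TRIANGLE / 1e6:.1f} MB")
```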

Imagine if your monitor was like a membrane that can be manipulated by inputs. A bump map, as it were, where 0 = infinite, 128 = neutral, 1 is farthest away, and 255 is closest. The actual programming alters specific points on the membrane, causing them to move forward or away. As such, it could not only produce true-to-life cylinders and curves, it also wouldn't have any problem with destructible environments, because it doesn't define anything that actually needs to fragment.
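(A minimal sketch of that membrane idea: an 8-bit height field where 128 is the neutral plane; the grid size and dent shape are arbitrary.)

```python
# Toy "membrane": an 8-bit height field where 128 is the neutral plane and
# lower values are farther away. A dent just lowers values inside a radius;
# nothing ever has to be re-triangulated.
SIZE = 16
membrane = [[128] * SIZE for _ in range(SIZE)]

def dent(cx: int, cy: int, radius: int, depth: int) -> None:
    for y in range(SIZE):
        for x in range(SIZE):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                membrane[y][x] = max(1, membrane[y][x] - depth)

dent(8, 8, 3, 40)            # one impact near the middle of the surface
print(membrane[8][6:11])     # -> [88, 88, 88, 88, 88], the floor of the dent
```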

The only practical way to do this is how it was demonstrated in Red Faction: make predetermined chunks of breakable material--a scripted sequence.
DarkMatter: That the smoke is displaced by enemies and weapons/explosions, so an enemy inside smoke is no longer "invisible", or you can dissipate it by throwing a grenade, or the smoke is dissipated and moved by actual wind so people smart enough can take advantage of that in BF2-like games, etc. None of that can be done on CPUs; you can on a GPU, along with the graphics.
Particles are a completely different problem unto themselves. The most frequently used solution is sprites. Even sprites can overload older computers due to layering of images and the like. The problem with particles is, again, sheer numbers (and therefore, huge memory demands). Physics does play a relatively significant part here, especially when it comes down to interacting with the particles; however, the physics would have to be greatly simplified just to be able to update all the particles.

Thinking seriously on it, I can't think of anything capable of producing realistic 3D particle flow in real time. Even a supercomputer can't pull it off without a great deal of time (like in movies). For the time being, sprites are the way to go.
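(For scale, a bare-bones particle update sketch; no collisions or inter-particle forces, and the counts are arbitrary. The only point is that cost grows linearly with the particle count before any interaction is even added.)

```python
import random

# Bare-bones particle step: position/velocity integration only. Cost and
# memory grow linearly with the particle count; particle-particle
# interaction (real fluid behaviour) grows much faster than that.
GRAVITY = -9.8
N = 10_000
particles = [{"pos": [0.0, 0.0, 0.0],
              "vel": [random.uniform(-1, 1), random.uniform(2, 6), random.uniform(-1, 1)]}
             for _ in range(N)]

def step(dt: float) -> None:
    for p in particles:
        p["vel"][1] += GRAVITY * dt          # gravity only
        for axis in range(3):
            p["pos"][axis] += p["vel"][axis] * dt

for _ in range(60):                          # one simulated second at 60 steps/s
    step(1.0 / 60.0)
print(N, "particles advanced 60 steps")
```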
Posted on Reply
#41
DarkMatter
FordGT90Concept: The problem there isn't physics, it's graphics overload. Every time you strike a surface, it causes an indentation, no? That surface was comprised of a single triangle (see attached image). It will fragment into no less than nine additional triangles. If you add another indentation in another one of those triangles, it will add another 9 triangles. So on and so forth, until there is nothing left to damage. This isn't a physics problem; it is a problem involving memory and a whole lot of calculations fragmenting the surface. Further complicating it is dealing with collisions with said surface. The way to address this issue is not with GPGPU; it's a new kind of processor altogether that doesn't operate on binary...

Imagine if your monitor was like a membrane that can be manipulated by inputs. A bump map, as it were, where 0 = infinite, 128 = neutral, 1 is farthest away, and 255 is closest. The actual programming alters specific points on the membrane, causing them to move forward or away. As such, it could not only produce true-to-life cylinders and curves, it also wouldn't have any problem with destructible environments, because it doesn't define anything that actually needs to fragment.

The only practical way to do this is how it was demonstrated in Red Faction: make predetermined chunks of breakable material--a scripted sequence.

Particles are a completely different problem unto themselves. The most frequently used solution is sprites. Even sprites can overload older computers due to layering of images and the like. The problem with particles is, again, sheer numbers (and therefore, huge memory demands). Physics does play a relatively significant part here, especially when it comes down to interacting with the particles; however, the physics would have to be greatly simplified just to be able to update all the particles.

Thinking seriously on it, I can't think of anything capable of producing realistic 3D particle flow in real time. Even a supercomputer can't pull it off without a great deal of time (like in movies). For the time being, sprites are the way to go.
You haven't seen the PhysX screensaver, have you? My 8800 GT can run it with no problem (40 fps, 25 fps min), and it can move/calculate collisions of 10,000 objects/particles/nodes at the same time, including walls made of actual bricks, bunches of 500 sticks, fluids and cloth. An 8800 GT; GT300 will be 4x-8x more powerful, probably twice as fast as a GTX 285. The 285 can run today's games wonderfully, and I don't want or need better graphics than Crysis has; use the extra performance, comparable to another GTX 285, for physics and its related graphics overhead.

If you have seen the Kthulu PhysX demo, there's something called deformable meshes, and you don't need to split the polys the way you said. I wasn't talking about indentation yet, though, but actual bricks, columns and all. Wood could be like cloth, but with different properties, for example.

You are in the past, man. Current technology and hardware can make everything I'm saying possible, not to mention what is still to come, the stuff Bill Dally has been hired for.

And sprites?? Yeah, definitely you live in the past. Fluids. <- Pay attention to what happens to the barrel in the water, and when the oxygen cylinder falls into the water after he shoots it. Fluids. Fluids.

I don't want to continue this off-topic. GPGPU and PhysX can be done on today's high-end cards easily, and the guy has been hired to make it even easier. It's not a matter of "being able"; it's a matter of when, and "will the others let us do it?".
Posted on Reply
#42
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: You haven't seen the PhysX screensaver, have you?
Elementary. There's zero damage except for the paper, which is extremely simple. I would be impressed if the cans got dented, bricks shattered, and sticks splintered. Moving them about isn't very difficult.
DarkMatter: You are in the past, man. Current technology and hardware can make everything I'm saying possible, not to mention what is still to come, the stuff Bill Dally has been hired for.
That is because we're talking about two different things. I was talking about complex damage/surface fragmentation like that seen in Red Faction, but performed on demand. You're talking about undamageable objects bouncing off each other. One's simple, the other isn't.
DarkMatter: And sprites?? Yeah, definitely you live in the past. Fluids. Fluids. Fluids.
A sprite is the visual manifestation of a particle. Those videos are pretty impressive; however, the first has a fairly low particle count (difficult to tell if it was rendered in real time or not), the second wasn't rendered in real time, and the third is just a demonstration that it works. There is a lot of potential there, but it's useless until 60%+ of gamers have the hardware necessary to do it.
Posted on Reply
#43
DarkMatter
FordGT90Concept: Elementary. There's zero damage except for the paper, which is extremely simple. I would be impressed if the cans got dented, bricks shattered, and sticks splintered. Moving them about isn't very difficult.

That is because we're talking about two different things. I was talking about complex damage/surface fragmentation like that seen in Red Faction, but performed on demand. You're talking about undamageable objects bouncing off each other. One's simple, the other isn't.

A sprite is the visual manifestation of a particle. Those videos are pretty impressive; however, the first has a fairly low particle count, the second wasn't rendered in real time, and the third is just a demonstration that it works. There is a lot of potential there, but it's useless until 60%+ of gamers have the hardware necessary to do it.
There is all you ask for (dents, splinters...) in other demos; you should check all the PhysX demos before continuing on about something you clearly have no idea about. I won't do the work for you, find them yourself. All of those run in real time and work just fine on my 8800 GT, 30+ fps all the time. The fluids one (not the game) runs at 90+ fps. Fluids are not sprites; they are metaparticles and actual deformable geometry. This is my last post about this.

Oh, and they don't only move; every single particle interacts with the others, but it's difficult to see in a YouTube video, in any video.
Posted on Reply
#44
btarunr
Editor & Senior Moderator
FordGT90Concept: Elementary. There's zero damage except for the paper, which is extremely simple. I would be impressed if the cans got dented, bricks shattered, and sticks splintered. Moving them about isn't very difficult.
You have all those effects in Warhammer. I sometimes play the game just for the fun of breaking walls, floors and ceilings.
Posted on Reply