Wednesday, January 28th 2009

NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University's computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed "NVIDIA Fellow."

"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company."
"I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years - from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions."

About Bill Dally
At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered "wormhole" routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low-overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences. He is also a Fellow of the IEEE and the ACM and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks Digital Systems Engineering and Principles and Practices of Interconnection Networks.

About David Kirk
David Kirk has been with NVIDIA since January 1997. His contributions include leading NVIDIA's graphics technology development for today's most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist and Head of Technology for Crystal Dynamics, a video game company. From 1989 to 1991, Dr. Kirk was an engineer in the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.
Source: NVIDIA

44 Comments on NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

#1
wolf
Better Than Native
After reading all of that, I think: awesome.

This guy seems like a great mind to tap for this kind of product.

'Twill be good to see how his input affects Nvidia's products and/or marketing.
#2
FordGT90Concept
"I go fast!1!11!1!"
I wonder how much he has to do with NVIDIA being in cahoots with the Folding@home project. If he is the primary driving force behind that, I'm done with NVIDIA. I buy graphics cards for games, not pet projects--especially corporate-sponsored projects.
#3
Darkrealms
FordGT90Concept: I wonder how much he has to do with NVIDIA being in cahoots with the Folding@home project.
I would bet a lot, but then, if you think about it, his goals were met by partnering with Nvidia to make folding faster. My folding has been crazy fast with my GTX 260.

This is hopefully good news for Nvidia and better products for us : )
#4
FordGT90Concept
"I go fast!1!11!1!"
What this guy is liable to do is remove NVIDIA from the gaming market altogether by striving to increase folding performance. It's already happening, too, seeing how many people build computers with 2+ NVIDIA cards in them just for folding. I really don't like where NVIDIA is going with this, hence my comment about NVIDIA potentially losing a customer.

I'm just glad Intel is getting ready to enter the market, with NVIDIA perhaps leaving it.
#5
DaedalusHelios
FordGT90Concept: What this guy is liable to do is remove NVIDIA from the gaming market altogether by striving to increase folding performance. It's already happening, too, seeing how many people build computers with 2+ NVIDIA cards in them just for folding. I really don't like where NVIDIA is going with this, hence my comment about NVIDIA potentially losing a customer.

I'm just glad Intel is getting ready to enter the market, with NVIDIA perhaps leaving it.
Yeah, I really don't want to see the cure for cancer or new treatments for cancer patients if it means compromising my FPS (frames per second). :laugh:

In all seriousness: I think philanthropic pursuits are fine. In fact, they should be encouraged, considering cancer takes some of our loved ones away from us every passing day. Unless gaming is somehow more important. :wtf:

I think Nvidia is showing it can be a company with heart and a great graphics card company at the same time. Nothing wrong with that. Try not to be so negative.
#6
FordGT90Concept
"I go fast!1!11!1!"
What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middleman: we've got these cards that are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score. What, exactly, is NVIDIA doing that's philanthropic, other than facilitating the movement of more product?
#7
DaedalusHelios
FordGT90Concept: What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middleman: we've got these cards that are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score. What, exactly, is NVIDIA doing that's philanthropic, other than facilitating the movement of more product?
I see it as no different from a solar panel factory lowering energy dependence on coal. Acting like philanthropy cannot turn a profit, or is evil if it does, is ridiculous. It's the lifeblood of capitalism, but here they're choosing to go into something that benefits us instead of pure self-indulgence, as a 100% gaming product would be.

If anything, it gives gamers a chance to give a little back to the world. And if you think about it, what's more noble a goal than trying to make the world a better place than it was before, by ending suffering or giving more hope to those in need of a cure? Giving people hope and an outlet to make a difference in a positive way is never the wrong thing to do. :toast:
#8
FordGT90Concept
"I go fast!1!11!1!"
I've been down this road before and it's practically arguing religion ("but it cures cancer!!!!"). There's no sense in continuing.

Cancer is nature's way of saying you've outlived your welcome.
#9
DaedalusHelios
FordGT90Concept: I've been down this road before and it's practically arguing religion ("but it cures cancer!!!!"). There's no sense in continuing.

Cancer is nature's way of saying you've outlived your usefulness.
Well, some believe life is too important to just let it slip away. I like living, personally. :cool:
#10
DarkMatter
FordGT90Concept: What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middleman: we've got these cards that are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score. What, exactly, is NVIDIA doing that's philanthropic, other than facilitating the movement of more product?
I think you don't understand what F@H is. No company can build a fast enough supercomputer; by pushing GPGPU and F@H, and by teaching GPGPU in universities, Nvidia is doing much more than a farm of supercomputers could do.
Quote from F@H FAQ:
Why not just use a supercomputer?

Modern supercomputers are essentially clusters of hundreds of processors linked by fast networking. The speed of these processors is comparable to (and often slower than) those found in PCs! Thus, if an algorithm (like ours) does not need the fast networking, it will run just as fast on a supercluster as a supercomputer. However, our application needs not the hundreds of processors found in modern supercomputers, but hundreds of thousands of processors. Hence, the calculations performed on Folding@home would not be possible by any other means! Moreover, even if we were given exclusive access to all of the supercomputers in the world, we would still have fewer computing cycles than we do with the Folding@home cluster! This is possible since PC processors are now very fast and there are hundreds of millions of PCs sitting idle in the world.
EDIT: Just for an easy comparison: the fastest supercomputer, Roadrunner, has 12,960 IBM PowerXCell 8i CPUs and 6,480 dual-core AMD Opteron processors, with a peak of 1.7 petaflops. Looking at the statistics on these forums, I see there are 38,933 members. If only half the members contributed to F@H at the same time, there would be far more power there. Now extrapolate to the world...
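A rough back-of-envelope version of that comparison, sketched in Python (the per-card throughput and the 50% participation rate are illustrative assumptions; the member count and Roadrunner's quoted peak are taken from the post above):

# Back-of-envelope comparison: Roadrunner's quoted peak vs. a hypothetical
# pool of forum members folding on GPUs. Per-card throughput is an assumption.
ROADRUNNER_PEAK_TFLOPS = 1_700      # ~1.7 petaflops peak, as quoted above
GFLOPS_PER_GPU = 500                # assumed single-precision rate of a GTX 260-class card
MEMBERS = 38_933                    # forum member count from the post
PARTICIPATION = 0.5                 # "if only half the members contributed"

aggregate_tflops = MEMBERS * PARTICIPATION * GFLOPS_PER_GPU / 1_000
print(f"Hypothetical pool: {aggregate_tflops:,.0f} TFLOPS "
      f"vs. Roadrunner peak: {ROADRUNNER_PEAK_TFLOPS:,} TFLOPS")
# With these assumptions the pool works out to roughly 9,700 TFLOPS of raw
# throughput -- useful only for embarrassingly parallel work, which is the
# FAQ's whole point.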
FordGT90Concept: Cancer is nature's way of saying you've outlived your welcome.
WTF??!!
#11
ascstinger
Eh, the only thing I can weigh in on in the F@H deal is that if Nvidia cards compromise gaming performance for points, and ATI produces a faster card for the same money, I'd go for the ATI card. If they can keep putting out powerful cards that just happen to be good at folding, that's great and I applaud them. If nothing else, why not develop a relatively affordable GPU specifically for F@H that doesn't run up the power bill to a ridiculous level like running a GTX 260 24/7, and then concentrate on gaming with a different card? Then you'd have the option of grabbing just the GTX for gaming and possible occasional folding, both cards for gaming plus low-power folding, or just the folding card for someone who doesn't game at all and for whom the GTX would be a waste.

There are probably a million reasons why that wouldn't work in the market today, but it's a thought for those of us who hesitate because of the power bill it could run up, or because of being restricted to Nvidia cards only.
#12
DaedalusHelios
A powerful GPU is also a powerful Folding@home card. A weak Folding@home card is also a weak GPU... the properties that make a good GPU also make it good for folding, if that makes sense. Thinking they are separate things and that folding might compromise graphics performance is a non-issue, so don't think it will cause a problem.

Provided, that is, that software is written to utilize the GPU for folding in the first place. Which, in Nvidia's case, it is.
#13
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: I think you don't understand what F@H is. No company can build a fast enough supercomputer; by pushing GPGPU and F@H, and by teaching GPGPU in universities, Nvidia is doing much more than a farm of supercomputers could do.
GPGPU is fundamentally wrong. Intel's approach is correct in that there's no reason GPUs can't handle x86 instructions. So don't teach proprietary GPGPU code in school for NVIDIA's profit; teach students how to make GPUs effective at meeting D3D and x86 requirements.
DarkMatter: Just for an easy comparison: the fastest supercomputer, Roadrunner, has 12,960 IBM PowerXCell 8i CPUs and 6,480 dual-core AMD Opteron processors, with a peak of 1.7 petaflops. Looking at the statistics on these forums, I see there are 38,933 members. If only half the members contributed to F@H at the same time, there would be far more power there. Now extrapolate to the world...
Those computers have extremely high-speed interconnects, which allows them to reach those phenomenal numbers; moreover, they aren't overclocked and they are monitored 24/7 for problems, making them highly reliable. Lots of people here have their computers overclocked, which breeds incorrect results. As if that weren't enough, GPUs are far more likely to produce bad results than CPUs.

There are obviously inherent problems with Internet-based supercomputing, and there are also a whole lot of X-factors that ruin its potential for science (especially machine stability). Folding especially is very vulnerable to error because every completed set of work is built upon by another and another. For instance, how do we know that the exit tunnel is not the result of an uncaught computational error early on?
DaedalusHelios: A powerful GPU is also a powerful Folding@home card. A weak Folding@home card is also a weak GPU... the properties that make a good GPU also make it good for folding, if that makes sense. Thinking they are separate things and that folding might compromise graphics performance is a non-issue, so don't think it will cause a problem.
As was just stated, a 4850 is just as good as a 9800 GTX in terms of gaming, but because of the 9800 GTX's architecture, it is much faster at folding. This is mostly because NVIDIA uses far more transistors, which means higher power consumption, while AMD takes a smarter-is-better approach with far fewer transistors.

And yes, thread prioritization on GPUs leaves much to be desired. I recall trying to play Mass Effect while the GPU client was folding, and it was unplayable. That is a major issue for everyone who buys cards to game.
#14
DarkMatter
FordGT90Concept: GPGPU is fundamentally wrong. Intel's approach is correct in that there's no reason GPUs can't handle x86 instructions. So don't teach proprietary GPGPU code in school for NVIDIA's profit; teach students how to make GPUs effective at meeting D3D and x86 requirements.
95% of making effective GPGPU code is knowing parallel computing; the rest is the language itself, so they are indeed doing something well. Now that Nvidia is on the OpenCL board they are teaching that too, so don't worry; as I said, that's only the 5%. General computing is no different in that respect: 95% of knowing how to program nowadays is knowing how to program with objects. If you know how to program in C++, for example, you know how to program in the rest.

The same applies to x86. The difficulty lies in making the code highly parallel. x86 is NOT designed for parallelism, and it is as difficult to make a highly parallel program in x86 as it is in GPGPU languages.

This, BTW, was said by Stanford guys (maybe even this same guy) BEFORE Nvidia had any relationship with them, back when GPGPU was nothing more than Brook running on X1900 Ati cards, so...
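To illustrate that point about the decomposition being the real work, here is a small sketch (Python's multiprocessing standing in for any parallel runtime -- CUDA, Brook, or otherwise; the chunking scheme is just for illustration):

from multiprocessing import Pool

def partial_dot(chunk):
    # Each worker handles one independent slice of the data.
    xs, ys = chunk
    return sum(x * y for x, y in zip(xs, ys))

def parallel_dot(xs, ys, workers=4):
    # The hard part is here: splitting the problem into independent chunks
    # and reducing the partial results. Which API launches the chunks is
    # comparatively minor.
    step = (len(xs) + workers - 1) // workers
    chunks = [(xs[i:i + step], ys[i:i + step]) for i in range(0, len(xs), step)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_dot, chunks))

if __name__ == "__main__":
    a = list(range(100_000))
    print(parallel_dot(a, a))   # same answer as the serial sum(x * x for x in a)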
FordGT90Concept: Those computers have extremely high-speed interconnects, which allows them to reach those phenomenal numbers; moreover, they aren't overclocked and they are monitored 24/7 for problems, making them highly reliable. Lots of people here have their computers overclocked, which breeds incorrect results. As if that weren't enough, GPUs are far more likely to produce bad results than CPUs.

There are obviously inherent problems with Internet-based supercomputing, and there are also a whole lot of X-factors that ruin its potential for science (especially machine stability). Folding especially is very vulnerable to error because every completed set of work is built upon by another and another. For instance, how do we know that the exit tunnel is not the result of an uncaught computational error early on?
False. GPGPU is as prone to errors as supercomputers are, and they double-check that the data is correct in the algorithms. Even if that takes more computing time and reduces efficiency, it means squat, because the sheer computing power of F@H is something like 1,000 times that of a supercomputer.

A GPU does not make more errors than a CPU anyway. And errors resulting from OC yield wildly unexpected results that are easy to detect.

Anyway, F@H is SCIENCE. Do you honestly believe they only send each work unit to a single person?? They have thousands of them, and they know which results are good and which are not. :laugh:
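As an illustration of the kind of redundancy check being described here, a minimal sketch in Python -- this is not Folding@home's actual validation scheme, and the quorum size and tolerance are made up:

def validate_work_unit(results, quorum=3, tolerance=1e-5):
    # Toy redundancy check: accept a result only if at least `quorum`
    # independently computed copies agree within `tolerance`.
    if len(results) < quorum:
        return None                      # not enough independent copies yet
    buckets = []                         # group results that agree with each other
    for value in results:
        for bucket in buckets:
            if abs(bucket[0] - value) <= tolerance:
                bucket.append(value)
                break
        else:
            buckets.append([value])
    best = max(buckets, key=len)
    if len(best) >= quorum:
        return sum(best) / len(best)     # consensus value
    return None                          # disagreement -> reissue the work unit

# Three clients agree; one overclocked machine returns garbage:
print(validate_work_unit([2.000061, 2.000061, 2.000062, 4.000122]))  # ~2.0000613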
#15
DarkMatter
FordGT90Concept: As was just stated, a 4850 is just as good as a 9800 GTX in terms of gaming, but because of the 9800 GTX's architecture, it is much faster at folding. This is mostly because NVIDIA uses far more transistors, which means higher power consumption, while AMD takes a smarter-is-better approach with far fewer transistors.

And yes, thread prioritization on GPUs leaves much to be desired. I recall trying to play Mass Effect while the GPU client was folding, and it was unplayable. That is a major issue for everyone who buys cards to game.
G92 (9800 GTX) has far fewer transistors than RV770 (HD 4850), FYI. And the 55 nm G92b variant is significantly smaller too: 230 mm^2 vs. 260 mm^2.

Of course folding at the same time reduces performance, but the fact that GPGPU exists doesn't make the card slower. :laugh:

Next stupid claim??
#16
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: The same applies to x86. The difficulty lies in making the code highly parallel. x86 is NOT designed for parallelism, and it is as difficult to make a highly parallel program in x86 as it is in GPGPU languages.
Intel is addressing that.
DarkMatter: False. GPGPU is as prone to errors as supercomputers are, and they double-check that the data is correct in the algorithms. Even if that takes more computing time and reduces efficiency, it means squat, because the sheer computing power of F@H is something like 1,000 times that of a supercomputer.

A GPU does not make more errors than a CPU anyway. And errors resulting from OC yield wildly unexpected results that are easy to detect.

Anyway, F@H is SCIENCE. Do you honestly believe they only send each work unit to a single person?? They have thousands of them, and they know which results are good and which are not. :laugh:
F@H doesn't double-check results.

What happens when a CPU errors? BSOD
What happens when a GPU errors? Artifact

Which is fatal, and which isn't? CPUs by design are meant to be precision instruments; one little failure and everything goes to waste. GPUs, though, can keep working through multiple minor failures.

I've seen no indication from them that any given piece of work is completed more than once for the sake of validation.


No, errors aren't always easy to catch.
Float 2: 00000000000000000000000001000000
Float 4: 00000000000000001000000001000000

If the 17th digit got stuck, every subsequent calculation would be thrown off. For instance:
Should be 2.000061: 00000000000000010000000001000000
Got: 4.0001221: 00000000000000011000000001000000

Considering F@H relies on a lot of multiplication, that alone could create your "exit tunnel."
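The effect being described is easy to demonstrate. The bit patterns above read as IEEE 754 singles dumped in little-endian byte order; the sketch below (Python, using the standard struct module) flips the low exponent bit of a single-precision float -- the kind of single stuck bit that turns 2.000061 into 4.000122 in the example above:

import struct

def corrupt_bit(x, bit):
    # Flip one bit in the IEEE 754 single-precision encoding of x.
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return y

good = 2.000061
bad = corrupt_bit(good, 23)      # bit 23 is the lowest exponent bit
print(good, "->", bad)           # prints roughly: 2.000061 -> 4.0001220703125
# A single silently stuck bit like this doubles the value, and every
# multiplication downstream carries the error forward.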
DarkMatter: G92 (9800 GTX) has far fewer transistors than RV770 (HD 4850), FYI. And the 55 nm G92b variant is significantly smaller too: 230 mm^2 vs. 260 mm^2.
9800 GTX = 754 million transistors
4850 = 666 million transistors

Process doesn't matter except in physical dimensions. The transistor count only changes with architectural changes.
DarkMatter: Of course folding at the same time reduces performance, but the fact that GPGPU exists doesn't make the card slower. :laugh:
It's poorly executed and as a result, CUDA is not for gamers in the slightest.
#18
DarkMatter
FordGT90Concept: Intel is addressing that.



F@H doesn't double-check results.

What happens when a CPU errors? BSOD
What happens when a GPU errors? Artifact

Which is fatal, and which isn't? CPUs by design are meant to be precision instruments; one little failure and everything goes to waste. GPUs, though, can keep working through multiple minor failures.

I've seen no indication from them that any given piece of work is completed more than once for the sake of validation.


No, errors aren't always easy to catch.
Float 2: 00000000000000000000000001000000
Float 4: 00000000000000001000000001000000

If the 17th digit got stuck, every subsequent calculation would be thrown off. For instance:
Should be 2.000061: 00000000000000010000000001000000
Got: 4.0001221: 00000000000000011000000001000000

Considering F@H relies on a lot of multiplication, that alone could create your "exit tunnel."
It's SCIENCE, so of course they have multiple instances of the same problem. They don't have to say so, because they are first and foremost scientists working for scientists.

EDIT: Anyway, I don't know about you, but every math program I wrote at school double-checked the results through redundancy; I was taught to do it that way. I expect the scientists working to cure cancer received an education as good as mine, AT LEAST as good as mine.
EDIT: Those examples are, in fact, easy-to-spot errors. Especially in F@H: if you are expecting the molecule to be around the 2 range (you know roughly what to expect, but it's science, you want to know EXACTLY where it will be) and you get 4, well, you don't need a high grade to see the difference.
9800 GTX = 754 million transistors
4850 = 666 million transistors
WRONG. RV670 has 666 million transistors; RV770 has 956 million. source
source

Don't contradict documented facts without double-checking your info, PLEASE.
Process doesn't matter except in physical dimensions. The transistor count only changes with architectural changes.
So now you are going to teach me that?? :laugh::laugh:
It's poorly executed and as a result, CUDA is not for gamers in the slightest.
Of course it's not for games (except for PhysX). But it doesn't interfere with game performance at all. Intel's Larrabee, x86 or not, won't help with games either. In fact, there's no worse example than Larrabee for what you are trying to say. There would not be a GPU worse at gaming than Larrabee.
#19
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: It's SCIENCE, so of course they have multiple instances of the same problem. They don't have to say so, because they are first and foremost scientists working for scientists.
Pande is a chemical biologist. How much he cares about computational accuracy remains to be seen.
DarkMatter: WRONG. RV670 has 666 million transistors; RV770 has 956 million. source
source
So, Tom's Hardware is wrong. That doesn't change the fact that F@H prefers NVIDIA's architecture.
DarkMatter: Of course it's not for games (except for PhysX). But it doesn't interfere with game performance at all. Intel's Larrabee, x86 or not, won't help with games either. In fact, there's no worse example than Larrabee for what you are trying to say. There would not be a GPU worse at gaming than Larrabee.
That makes a whole lot of no sense, so I'll respond to what I think you're saying.

-NVIDIA GeForce is designed specifically for Direct3D (or was).
-CUDA was intended to offload any high-FLOP work from the CPU. It doesn't matter what the work actually consists of.
-CUDA interferes enormously with game performance because it's horrible at prioritizing threads.
-Larrabee is a graphics card--but not really. It is simply designed to be a high-FLOP, general-purpose card that can be used for graphics among other things. Larrabee is an x86 approach to high-FLOP needs (programmable cores).

Let's just say CUDA is riddled with a lot of problems that Larrabee is very likely to address. CUDA is a short-term answer to a long-term problem.
#20
DarkMatter
FordGT90Concept: Pande is a chemical biologist. How much he cares about computational accuracy remains to be seen.
I can see your disbelief about science, but I don't condone it. Scientists know how to do their work; assuming they don't is plainly stupid.
So, Tom's Hardware is wrong. And? That doesn't change the fact that F@H prefers NVIDIA's architecture.
Yeah, it prefers Nvidia's architecture because Nvidia's GPUs were designed with GPGPU in mind. I still see Nvidia on top in most games. So?
FordGT90Concept: That makes a whole lot of no sense, so I'll respond to what I think you're saying.

-NVIDIA GeForce is designed specifically for Direct3D (or was).
-CUDA was intended to offload any high-FLOP work from the CPU. It doesn't matter what the work actually consists of.
-CUDA interferes enormously with game performance because it's horrible at prioritizing threads.
-Larrabee is a graphics card--but not really. It is simply designed to be a high-FLOP, general-purpose card that can be used for graphics among other things. Larrabee is an x86 approach to high-FLOP needs (programmable cores).

Let's just say CUDA is riddled with a lot of problems that Larrabee is very likely to address. CUDA is a short-term answer to a long-term problem.
- Nope, they are designed for GPGPU too. Oh, and strictly speaking, I don't really know if there was ever a time when Nvidia GPUs were focused on D3D. They've been more focused on OpenGL, except maybe for the last couple of generations.
- Yes, and I don't see where you're going with that.
- Unless you want to use CUDA for PhysX, CUDA doesn't interfere with gaming AT ALL. And in any case, Nvidia has hired this guy to fix those kinds of problems. It's going to move to MIMD cores too, so that issue is going to be completely fixed in the next generation of GPUs.
- Yes, exactly.

Many people think that GPGPU is the BEST answer for that, and not all of them work for Nvidia. In fact, many work for Ati.
#21
Haytch
I don't think we should shove aside the important factors here:
For starters, anyone's efforts to do humanity a favour, especially of this magnitude, should be respected, regardless of beliefs, unless you wish the Terran race extinct, of course. But that's because good and evil exist regardless of whether religion does or not.

. . . . If CUDA doesn't increase FPS, nor does it decrease it, then that's even.
. . . . If CUDA does ANYTHING, then that's a plus.

DarkMatter, thank you for explaining to those out there that can't comprehend, but unfortunately I think it's fallen on blind hearts . . . Oh, wait a minute, all of our hearts are blind . . . Maybe I meant cold-hearted.

Anyway, I'm going to go take out my graphics cards and play CellFactor at 60+ FPS with just the Asus Ageia P1.

Edit: Oh yeah, almost forgot. I want to know how much Bill and David are on per annum. I bet the ex-Nvidia staff would like to know too.
I don't think either Bill or David has much more to offer Nvidia, and I don't think they will bother either. Good luck to the green team.
#22
FordGT90Concept
"I go fast!1!11!1!"
Haytch: If CUDA doesn't increase FPS, nor does it decrease it, then that's even.
If something is using CUDA while a game is running, it hurts the game's FPS badly.
#23
DarkMatter
FordGT90Concept: If something is using CUDA while a game is running, it hurts the game's FPS badly.
But NOTHING forces you to use CUDA at the same time; that's the point. When you are gaming, disable F@H, of course!! But when you are not using the GPU for anything, you can fold, and with GPU2 and an Nvidia card you can fold MORE. It's simple.

And if you are talking about PhysX, keep in mind that the game is doing more, so you get more for more, not the same for more as you are suggesting. If there comes a time when GPGPU is used for, say, AI, the same will be true: you will get more than what the CPU alone can do while maintaining higher frame rates too, because without the GPU the game would be unable to provide enough frames with that level of detail. That's the case with PhysX, and that will be the case with any GPGPU code used in games.
#24
FordGT90Concept
"I go fast!1!11!1!"
DarkMatter: I can see your disbelief about science, but I don't condone it. Scientists know how to do their work; assuming they don't is plainly stupid.
Just because you can use a computer doesn't mean you understand how it works. Likewise, just because Pande wants results for science doesn't mean he knows the best way to go about them from a computing standpoint.
DarkMatter: Many people think that GPGPU is the BEST answer for that, and not all of them work for Nvidia. In fact, many work for Ati.
All I know is that the line between GPU and not-GPU is going away. There's more focus on the FLOPs--it doesn't matter where they come from in the computer (the CPU, the GPU, the PPU, etc.).

But then again, FLOPs aren't that important for mainstream users (just for their budgeting). It is kind of awkward to see so much focus on less than 10% of the market. Everyone (AMD, Intel, Sony, IBM, etc.) is pushing for changes to the FPU when the ALU needs work too.
DarkMatter: But NOTHING forces you to use CUDA at the same time; that's the point. When you are gaming, disable F@H, of course!! But when you are not using the GPU for anything, you can fold, and with GPU2 and an Nvidia card you can fold MORE. It's simple.
F@H should be smart enough to back off when the GPU is in use (the equivalent of low priority on x86 CPUs). Until they fix that, it's useless to gamers.
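Something like that back-off behavior could, in principle, be approximated from outside the client. A hypothetical sketch in Python: it assumes nvidia-smi is available to report GPU utilization, and the pause/resume hooks stand in for whatever wraps the folding client -- none of this is a real F@H API.

import subprocess, time

def gpu_utilization_percent():
    # Ask nvidia-smi for the current GPU utilization (assumes a driver
    # recent enough to support --query-gpu).
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.splitlines()[0])

def throttle_folding(pause, resume, busy_threshold=50, poll_seconds=10):
    # Pause folding whenever something else (a game) is loading the GPU,
    # resume it once the GPU goes idle again. `pause` and `resume` are
    # hypothetical callables supplied by whatever controls the folding client.
    paused = False
    while True:
        busy = gpu_utilization_percent() > busy_threshold
        if busy and not paused:
            pause(); paused = True
        elif not busy and paused:
            resume(); paused = False
        time.sleep(poll_seconds)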


Regardless, I still don't support F@H. Their priority is results, not accurate results.


PhysX is useless.


The problem with GPGPU is that the GPU is naturally a purpose-built device: binary -> display. Any attempt to multitask it leads to severe consequences because its primary purpose gets encroached upon. The only way to overcome that is multiple GPUs, but then they really aren't GPUs at all because they aren't working on graphics. This loops back to what I said earlier in this post: the GPU is going away.
#25
DarkMatter
ALL the problems that GPGPU has NOW can be fixed. Nvidia has DEFINITELY hired this guy, the father (or one of the fathers) of stream computing, for that purpose. Don't dwell on the issues of GPGPU in a news thread that is introducing the guy who will fix them, or who has at least been hired to fix them.

Regarding F@H, IMO you are not qualified to critique their methodology or how accurate it is. Your lack of respect for (and, I dare say, knowledge of) the scientific method is evident.

PhysX is not useless at all. You might not like it, you might not need it, you might not want it, but it's the first iteration of what will be the next revolution in gaming. I like it, I want it, I NEED it, and so do millions of people like me.