Thursday, August 28th 2008
Radeon and GeForce Share Work, PhysX Applications Win
The functionality of CUDA and its implementation of GPU-accelerated PhysX processing have benefited many a GeForce user. Users of ATI accelerators, lacking this incentive, either use an Ageia PhysX card or avoid hardware physics altogether. Hardspell has verified that in an environment where Radeon accelerators do the graphics processing, a standalone GeForce accelerator can be used to process PhysX. Hardspell paired a Radeon HD 3850 with a GeForce 9600 GT in the same system, with the display connected to the Radeon. Although no form of multi-GPU graphics connection existed, the GeForce card partnered the Radeon well, processing physics while the Radeon did graphics. Results from oZone3D FluidMark, a benchmark that includes routines to evaluate a machine's physics-processing capability, showed a greater-than-350% increase in scores, confirming that the GeForce accelerator was doing its job. This was further proven with game testing in Unreal Tournament III. Provided are screenshots from the game along with those of the FluidMark windows. The first window shows a score of 759 o3marks, while in the second window, where the GeForce processed PhysX, the score jumped to 2909 o3marks.
Source:
Hardspell
144 Comments on Radeon and GeForce Share Work, PhysX Applications Win
If a dedicated PPU were a must, both Intel and AMD would have taken that route long ago. nVidia doesn't have a CPU/GPU platform to run on.
The initial hybrid CPU/GPU chips will have a relatively weak GPU, but the integrated GPU can be dedicated to processing in-game physics while the discrete graphics card handles everything else.
As the months pass it will only get better for the GPU solution. Nehalem is only 30% faster than Core 2 clock for clock, so it won't improve performance by much. How much higher do you think they will go in clocks? On the other hand, we know we will have the 9600 GT/GSO, even the 9800 GT or comparable cards, in the $50-$100 segment really soon. 1.3x a current quad for $200+, or 2x-3x a current low-end card? :rolleyes:
EDIT: Intel didn't buy Havok to optimize it for their CPUs. It's the same Havok team doing the work, and they won't be able to optimize much further than they already have. Anyway, Intel bought Havok to implement it on Larrabee or derivative embedded solutions. So that's again physics on a GPU, but Intel's will not come until 2010. Ageia is here now.
It was dead before nVidia bought up the dying Ageia company, and you are forgetting AMD's Fusion is just around the corner, costing little more to integrate into their CPU cores.
The industry has ignored the Ageia PhysX engine for years now. Havok was bought, BTW, so Intel can implement it on their integrated GPU, a far cheaper solution than a dedicated card, hint hint.
The new Havok hybrid CPU/GPU physics engine will still be the industry standard, full stop.
2- The industry might have ignored Ageia IN THE PAST. More than 50 titles coming in the next months doesn't sound like they are ignoring it. I wouldn't be surprised if Ageia has more titles than Havok right now.
3- We don't know if Havok will continue in the lead. It probably isn't even leading right now, let alone in the future. As I said, there will be NO "GPU" Havok (it will always be based on x86, so sorry, Ati) until Larrabee, and that's 2010. Also, the physics API used in the PS3 is PhysX; AFAIK the Xbox 360 doesn't have one defined. That makes it a lot easier for developers to use Ageia for cross-platform compatibility.
Only time will tell, but as far as better physics is concerned, PhysX is the only way to go until 2010: Larrabee, DX11. And then it's going to be very easy for Nvidia/Ageia to move to DX11 physics. Nvidia has physics now; Intel and AMD only have promises for the future. And those promises are not about making something better; they just promise they will have the same thing Ageia has right now.
And the performance of Havok on an Intel integrated GPU will never approach the performance of PhysX on a dedicated NV card. Hell, it will never approach the performance of PhysX running on integrated NV graphics.
Incidentally, I'm not sure why people buy that physics stuff anyway. I think Newton put it in the public domain way back when; basically any programmer should be able to code in some physics. As for physics on the GPU, the old GPUs were new and it was complex to put stuff like that on them, but now that the world of game programmers knows a lot about the GPU, the GPUs have expanded their capabilities, and much more documentation and many more examples are available, I'm guessing it can't be that hard anymore to put your physics calculations on the GPU.
One thing is to put some known formulas into computer code; another is to make a good (usable and complete) engine. The next level is to make it perform in real time. And one more step up is where Ageia and Havok are.
Recently I decided it was time to dabble a bit in OpenGL programming, even though I don't like programming, just to use the coding "skills" I gathered at uni and understand everything related to graphics a lot better. After just one afternoon looking at its internals (and another one brushing up on C/C++), I'm pretty sure I know how to do a renderer. It's "easy". As easy as knowing how to make a painting or a sculpture. Will I ever do a painting, a sculpture, or a usable renderer? I'm going to give it a try, but I don't have high hopes. You get the idea?
And I know people who started programming from scratch, and guess what? They added 'physics' by using the simple, well-known formulas from Pythagoras, Newton, etc. that every kid learns in school, and it worked out fine. If you code an engine, and it's engine developers who buy Havok licenses, then the added difficulty of 'physics' is minor, really: you already define objects with 3D coordinates, so putting some basic formulas, like the simple gravity one, into their behaviour isn't that high-brow. I still don't see why you'd have to buy 3rd-party stuff for it; in fact, neither did the people that made Crysis, they just added their own routines.
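For illustration, the "simple gravity formula" approach described above really can be sketched in a few lines. This is a hypothetical minimal example (the function names, the time step, and the scenario are all illustrative, not taken from any engine mentioned in the thread), using semi-implicit Euler integration, the usual per-tick update in simple game loops:

```python
GRAVITY = -9.81  # m/s^2, acting along the y axis

def step(pos, vel, dt):
    """Advance one physics tick: update velocity first, then position
    (semi-implicit Euler, which is more stable than naive Euler)."""
    x, y = pos
    vx, vy = vel
    vy += GRAVITY * dt   # apply gravitational acceleration
    x += vx * dt
    y += vy * dt
    return (x, y), (vx, vy)

# Drop an object from 10 m with no initial velocity,
# simulated at 60 ticks per second for one second.
pos, vel = (0.0, 10.0), (0.0, 0.0)
for _ in range(60):
    pos, vel = step(pos, vel, 1.0 / 60.0)
print(pos[1])  # close to the analytic 10 - 0.5*g*t^2, i.e. around 5 m
```

That is the easy part the poster is pointing at; the counter-argument in the replies below is that a shippable engine also needs stable stacking, constraints, and real-time performance on thousands of bodies, which this sketch says nothing about.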
As for adding it to hardware, do you think the Intel engineers can't figure out basic trigonometry and the laws of physics? And what do you think is in those SSE[x] instructions anyway?
Plus, to make ANY kind of game engine you must have collision detection, bounce, and so forth; that's not optional, not even in a 2D game.
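The "collision detection and bounce" the poster calls non-optional can indeed be done simply at the basic level. Below is a hedged sketch, assuming axis-aligned bounding boxes and a flat floor; the function names and the 0.8 restitution value are illustrative inventions, not from PhysX, Havok, or any engine discussed here:

```python
def aabb_overlap(a, b):
    """a and b are (x, y, w, h) rectangles; True if they intersect.
    This is the classic axis-aligned bounding-box (AABB) test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def bounce_off_floor(y, vy, floor_y=0.0, restitution=0.8):
    """Reflect a downward velocity when the object penetrates the floor,
    losing some energy (restitution < 1 means an imperfect bounce)."""
    if y <= floor_y and vy < 0:
        return floor_y, -vy * restitution  # clamp to floor, flip velocity
    return y, vy

print(aabb_overlap((0, 0, 2, 2), (1, 1, 2, 2)))  # overlapping boxes -> True
print(aabb_overlap((0, 0, 1, 1), (5, 5, 1, 1)))  # far apart -> False
print(bounce_off_floor(-0.1, -3.0))              # bounces up at reduced speed
```

This level of collision handling is what every 2D game already contains; the thread's real disagreement is over whether the jump from here to a full rigid-body solver is "minor" or the hard part.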
"Anyone" can program it, of course. But a completely different thing is to make a competent product. Why do you think people use 3rd party engines/renderers? Do you honestly believe a graphics engine is more complicated? IMO NO, that's why I said what I said in my previous post.
Also, you have to put a lot of resources (money, man-hours, etc.) into it. 3rd-party middleware only needs to be made once, and the costs of making and updating it are shared by many developers, not just one (when buying it, of course). Hell, by your logic, with the things I learnt at school/uni I could build my own house; almost everybody could. But we let architects and engineers build our houses, don't we? There's a reason for that, and there is a reason for middleware (be it physics or game engines) to exist.
EDIT: And yes, Intel (Havok) can also do a good engine; they already have a good one. BUT as long as it runs in software it will never compete with Ageia. It's simple to understand, IMO.
Anyway, the better question would have been: if anyone (let's say Intel or Nvidia) can do their own competitive engine, why did they buy the physics companies? Intel, with ALL THE MONEY and all the resources, preferred to buy Havok instead. Think about that.
And no, you aren't convincing me. Sure, it costs a little extra time (and therefore money) to get good physics into an engine, but you could just as easily argue 'why code an engine, why not simply buy the Unreal one or something'. My point is that there's no need to buy physics per se; it's an option and it saves some time, but that's all. It's not super hard to get your own physics going, with the advantage that you are in control, you know what you are doing, you can add your own twist, and you can sell your engine to 3rd parties WITH physics, without having to tell your customers 'Oh BTW, you have to pay some other company for the physics part of our engine', or having to buy a very expensive license that allows resale as part of your engine.
And yes, I think in fact a graphics engine from scratch IS more complicated than the physics part of an engine.
BTW, then post some betas here; I'm sure most people here are willing to be your guinea pigs for testing an engine made by a member. I'm already on the list, so send me that beta first! ;)
EDIT: Industry heavyweights such as John Carmack, Tim Sweeney, Tom Hall, Gabe Newell, George Broussard, and Sid Meier (and a very long list of others that don't come to mind right now) make their own game engines, but all of them license third-party physics. If you manage to make one, you can be proud of being smarter than them, so kudos.
You are right, I am wrong. 95% of game developers are wrong. They are nothing more than a bunch of retarded bunnies who know nothing about the business they've been working in for more than 20 years now.
Well, at least you opened my eyes. Please, can you give me some clues as to the best way to make my own clothes, furniture, and tools? Because at this point I'm almost sure you make all those things yourself, since they are so much easier to do than the things we are talking about... I'm sure you don't mind spending half an hour every day on that. Yeah, now I understand: why would I spend money on those things when I can make them myself? I've been so stupid until now...
And that 97% of the people agree with me?
If you didn't now you know :P