Thursday, August 28th 2008
Radeon and GeForce Share Work, PhysX Applications Win
CUDA and its implementation of GPU-accelerated PhysX processing have benefited many a GeForce user. Users of ATI Radeon accelerators, which lack this feature, must either use a dedicated Ageia PhysX card or go without hardware physics acceleration altogether. Hardspell has verified that in a system where Radeon accelerator(s) handle graphics processing, a standalone GeForce accelerator can be used to process PhysX. Hardspell paired a Radeon HD 3850 with a GeForce 9600 GT in the same system, with the display connected to the Radeon. Although no form of multi-GPU graphics connection existed, the GeForce card partnered the Radeon well, processing physics while the Radeon did graphics. Results of oZone3D FluidMark, a benchmark that includes routines to evaluate a machine's physics-processing capability, showed scores rising to nearly four times their original value, confirming that the GeForce accelerator was doing its job. This was further borne out by game testing with Unreal Tournament III. Provided are screenshots from the game along with those of the FluidMark windows: the first window shows a score of 759 o3Marks, while in the second, with the GeForce processing PhysX, the score jumped to 2,909 o3Marks.
Source:
Hardspell
144 Comments on Radeon and GeForce Share Work, PhysX Applications Win
Someone will discover how it works eventually.
But either way, if a game uses the one developed by AGEIA, PhysX, like the Unreal Engine 3 does, then it doesn't matter if your card has great Havok support, since it's PhysX that the game requires. And right now I bet lots of developers are opting for PhysX since it suddenly has a lot of people who can use it, unless of course they're smart like the Crysis makers and just make their own physics engine and bypass all the hassle :) Although doing that on the GPU might be harder to develop than you think *shrug*
I have a board with 3 full-length PCIe slots, a 4870X2 and a 4870 in CFX, and I'm thinking about getting a 9800 GT for PhysX. The reason for the 9800 GT is that it's the fastest single-slot card I can think of, and it will fit perfectly in between my 2 ATI beasts.
Is this a good move for me, or should I do something else with my time/money?
I can get a 9800 GT for like $40, so it's not a price thing... just availability.
And then there's the $40 argument. If he can get a 9800 GT for that money, I would never tell him to get something slower. He might even have to pay more for a GS!!
If he isn't getting an 8800 GS for less than 40 bucks, the 9800 GT is cool.
Heat.... fuck heat... who cares anyway? It's for benching, and I have some 130 CFM fans I use to cool the vid cards anyway.
- Price: IMHO, a 9800 GT for $40 is a must-have. Period. :D
- Power consumption: the difference between the two cards is 10W. The X2 alone consumes 300W. Add 150W for the HD 4870 and 120W for the rest of the system, as well as 75W for the baseline card (the GS), and we are talking about 645W under full load. 645W or 655W, who cares?
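The back-of-the-envelope wattage tally above can be sanity-checked with a quick sum. Note these are the poster's ballpark figures, not measured draws, and the per-card wattages for the GS and GT are likewise the comment's own estimates:

```python
# Rough full-load power budget from the comment's estimates (all in watts).
# These are ballpark figures quoted in the thread, not measured values.
loads = {
    "HD 4870 X2": 300,
    "HD 4870": 150,
    "rest of system": 120,
    "8800 GS (baseline PhysX card)": 75,
}

total_with_gs = sum(loads.values())        # 300 + 150 + 120 + 75 = 645
total_with_gt = total_with_gs - 75 + 85    # 9800 GT draws ~10 W more than the GS

print(total_with_gs)  # 645
print(total_with_gt)  # 655
```

The 10W delta between the two candidate cards is under 2% of the total system load, which is the commenter's point: at this power level it makes no practical difference to PSU sizing.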
- Performance: probably within the next year the GT will not get you better performance than the GS, maybe not even in 2 years. Who cares? You won't need to change the card. IMO you can't apply the same criteria as with graphics, where a slightly underpowered card makes more sense because you will need to upgrade it soon anyway. And if we take the price into account, are you really going to risk future performance, or the possibility of needing to upgrade the card a lot sooner, just to get a cheaper or slightly less power-hungry card?
Not trying to dog on you, btarunr.