# PhysX only using one cpu core?



## MrMilli (Sep 27, 2009)

http://techreport.com/articles.x/17618/13

http://forums.overclockers.co.uk/showpost.php?p=14687943&postcount=4

I'll quote:


> You can see the performance hit caused by enabling PhysX at this resolution. On the GTX 295, it's just not worth it. Another interesting note for you... As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here's what I saw, in miniature:
> 
> 
> 
> ...



In Batman demo:


> PhysX is only using one core on the CPU in that game, so that may be contributing to the slowdown.


----------



## erocker (Sep 27, 2009)

I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.


----------



## Benetanegia (Sep 27, 2009)

I don't understand what exactly they expect. If one core gives 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has something like 50% more raw floating-point power per core than a Core 2, and with only 4 cores we'd be in the 10-12 fps range. Want to start talking about dualies, which is what most people still have?
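The back-of-the-envelope numbers above can be sketched as a one-liner (assuming perfectly linear scaling, which real physics code never reaches; the 2/3 per-core factor for Core 2 is an assumption based on the "50% more" figure in the post):

```python
def projected_fps(single_core_fps, cores, per_core_factor=1.0):
    """Ideal linear-scaling estimate: the fps you'd get if the physics work
    spread perfectly across `cores` cores, each running at `per_core_factor`
    times the speed of the baseline core. Real scaling is always worse."""
    return single_core_fps * cores * per_core_factor

# The post's hypothetical: 3 fps measured on one core, spread over 8 i7 threads.
ideal_i7 = projected_fps(3, 8)                 # 24.0 fps -- still not playable
# A Core 2 core at roughly 2/3 of the i7's per-core floating-point throughput:
ideal_core2_quad = projected_fps(3, 4, 2 / 3)  # ~8 fps under this assumption
```

Even granting ideal scaling, none of these estimates gets anywhere near a playable frame rate, which is the point of the argument.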

What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have incrementally better physics that will still be very far off from what the weakest of GeForces can do?

No, friends, what makes more sense is to make 2 paths: one that will run on almost everything out there, and one that uses GPU-accelerated physics on GeForces, which by market share are 2/3 of the graphics cards out there. Hence it uses only 1 CPU core, because:

1- The GPU is much more powerful anyway; the power that even a quad would add is irrelevant.
2- It can run on everything out there, including old dualies whose second core could be busy with secondary processes, etc.

Enabling GPU-accelerated physics and expecting to see CPU load, when that's highly unnecessary, is dumb IMHO. I like TechReport as much as the next guy, it's the second site I go to for reviews after TPU, but here they're just being a little bit dumb.


----------



## Steevo (Sep 27, 2009)

Benetanegia said:


> I don't understand what exactly they expect. If one core gives 3 fps, 8 cores would give 24 fps, still very far from a good experience. And that's on a fast Core i7. An i7 920 would give what, 20 fps? And what about Core 2, which is what most people have? The i7 has something like 50% more raw floating-point power per core than a Core 2, and with only 4 cores we'd be in the 10-12 fps range. Want to start talking about dualies, which is what most people still have?
> 
> What do they expect? That developers create 8 different paths with different levels of detail and threading, so that every machine without a GeForce can have incrementally better physics that will still be very far off from what the weakest of GeForces can do?
> 
> ...



Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two cores.



The process manager isn't even close to maxed out on one CPU HT core, let alone any more of the eight HT cores available. It looks to be using spikes of CPU power, and only on one HT core. The point of the article is to show that Nvidia is causing the game to run poorly, and the game developer is causing issues by allowing such crap to be produced.


So one HT core giving 3 FPS, times 8 HT cores, is 24 FPS, the same framerate many consoles are limited to. But let's just throw that out and focus instead on how to insult other members.


----------



## [I.R.A]_FBi (Sep 27, 2009)

jah know nv at it again ...


----------



## Benetanegia (Sep 27, 2009)

Steevo said:


> Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in an age of 12-core processors and common machines having at least two cores.
> 
> 
> 
> ...



Nvidia is *not* causing poor performance; enabling a mode that is only supposed to run on the GPU is.

If you disable GPU-accelerated PhysX the game will play like a charm. Again, you can't enable *GPU accelerated physics mode* and expect to see CPU load. You can't expect to get the same improved PhysX running on the CPU either, simply because the CPU can't keep up with the power needed. Not even the i7 965 could keep up, let alone a Core 2 6400, for example. They are reporting 14% CPU usage, and that's more than 1 "core" on an 8-thread i7: 100/8 = 12.5.
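That 14% figure is consistent with one busy core: a single fully loaded logical core out of n shows up as 100/n percent of total CPU load. A tiny sketch:

```python
def one_core_share_pct(logical_cores):
    """Total-CPU-load percentage contributed by one fully busy
    logical core on a machine with `logical_cores` logical cores."""
    return 100.0 / logical_cores

share = one_core_share_pct(8)  # 12.5 -- so ~14% observed load is just over
                               # one of an i7's eight hardware threads
```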

BTW, *any CPU* will show those spikes with single-threaded applications. That's because the OS scheduler keeps moving the one busy thread from core to core, but since the work is single-threaded only one core's worth of it runs at any moment, adding up to the equivalent of one core in use. Spreading the load around that way helps temperatures too.

And like erocker said, it was AMD who refused to use PhysX; Nvidia offered it to AMD for free. They refused for obvious reasons, none of them being the best interest of the consumer. They didn't even allow third parties to do the work, even though Nvidia supported them.

http://www.bit-tech.net/news/2008/07/09/nvidia-helping-to-bring-physx-to-ati-cards/1



> However, an intrepid team of software developers over at NGOHQ.com have been busy porting Nvidia's CUDA based PhysX API to work on AMD Radeon graphics cards, *and have now received official support from Nvidia* - who is no doubt delighted to see it's API working on a competitor's hardware (as well as seriously threatening Intel's Havok physics system.)
> 
> As cheesed off as this might make *AMD, which is unsurprisingly not supporting NGOHQ's work*, it could certainly be for the betterment of PC gaming as a whole. If both AMD and Nvidia cards support PhysX, it'll remove the difficult choice for developers of which physics API to use in games. We've been growing more and more concerned here at bit-tech at the increasingly fragmented state of the physics and graphics markets, and anything that has the chance to simplify the situation for consumers and developers can only be a good thing.


----------



## FordGT90Concept (Sep 27, 2009)

Steevo said:


> Suck some NV elsewhere. Multi-threaded applications are no longer the odd man out in a age of 12 core processors and common machines having at least two.
> 
> 
> 
> ...


I agree.  Physics calculations are highly parallel in nature, so they should multithread excellently.  NVIDIA decided to focus on PhysX for GPU only, so without the GPU, PhysX is a massive burden, being poorly optimized for CPU execution.  Most games don't even use physics enough to warrant a GPU anyway.  I question everything about PhysX (the premise, the execution, the strategy, etc.).

DirectX 11 will murder PhysX because it will work on both CPU and GPU.  Instead of using, for instance, Havok for CPU and PhysX for GPU, just use DX11 and be done with it.


----------



## MrMilli (Sep 27, 2009)

erocker said:


> I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.



I don't know why everybody keeps saying this.
If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA already are.

My opinion on this matter is: what can be threaded easily should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
No matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. Havok uses multiple cores, BTW.


----------



## FordGT90Concept (Sep 27, 2009)

Nothing is multithreaded easily, but there are a lot of things that benefit greatly from it; game physics is one of them.
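For what it's worth, the easy part of a physics step really is easy to split: integrating each rigid body touches only that body's own state (broad-phase collision detection and constraint solving are the coupled, hard parts). A minimal illustrative sketch, with made-up names, assuming 1D point bodies under gravity; in CPython the GIL means this shows the decomposition, not a real speedup:

```python
from concurrent.futures import ThreadPoolExecutor

GRAVITY = -9.81  # m/s^2

def integrate(body, dt):
    """Symplectic Euler step for one body, given as (height, velocity)."""
    pos, vel = body
    vel += GRAVITY * dt
    pos += vel * dt
    return (pos, vel)

def step_world(bodies, dt, workers=4):
    """Each body's update is independent of every other body's, so the
    per-body work can be farmed out to a pool with no locking."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda b: integrate(b, dt), bodies))

world = [(10.0, 0.0)] * 1000          # 1000 bodies dropped from 10 m
world = step_world(world, dt=1 / 60)  # advance the world by one 60 Hz tick
```

The same per-body decomposition is what a GPU implementation exploits, just with thousands of hardware threads instead of a small pool.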


----------



## Benetanegia (Sep 27, 2009)

Honestly, people should learn the difference between PhysX and GPU-accelerated PhysX and stop bashing something they don't know about.

EDIT: When I say people I'm not directing this at anyone in particular. There are a lot of people on TPU and outside it who don't understand that difference.

PhysX is a physics API that can run on the CPU like Havok or any other, but it has one particularity: it can also run on the GPU, allowing for much more detail thanks to the vastly superior floating-point power of GPUs.

http://www.brightsideofnews.com/new...-bullet3b-are-havok-and-physx-in-trouble.aspx



> In case you wondered, according to Game Developer Magazine *world's most popular physics API is nVidia PhysX, with 26.8% market share* [if there was any doubt that nVidia PhysX isn't popular, which was defense line from many AMD employees], followed by Intel's Havok and its 22.7% - but Open sourced Bullet Physics Library is third with 10.3%.
> We have no doubt that going OpenCL might be the golden ticket for Bullet. After all, Maxon chose Bullet Physics Library for its Cinema 4D Release 11.5 [Cinebench R11 anyone?].



A lot of games use PhysX nowadays; to name a few: Unreal Tournament (without the add-on too), Gears of War, Mass Effect, NFS: Shift. All of them use CPU PhysX, and I have yet to see a complaint about those games.

What makes less sense to me is that the same people who support OpenCL physics, because it will make developers' lives easier by letting them write only one physics path, are asking developers to make a physics engine with very different paths so that it can run on different CPUs, and to design games with very different levels of physics, the latter being the thing that would add the most work. An experienced coder can implement a physics engine in under a week (optimizing it is another story), but changing the level of detail of the physics across all the levels (maps, whatever) takes months for a lot of artists.

EDIT: As much as this might surprise you, it would be much easier for a developer to maintain one level of physics detail and code the engine for two different languages (CUDA and Stream, for example) than to create two different levels of detail.



MrMilli said:


> I don't know why everybody keeps saying this.
> If you mean compute shaders, then it won't matter until someone actually uses them for a physics engine. Compute shaders by themselves are nothing more than what OpenCL & CUDA already are.
> 
> My opinion on this matter is: what can be threaded easily should be threaded. No excuses. I mean, the CUDA version is obviously already threaded, isn't it ...
> No matter how you look at it, nVidia is holding the CPU version back in favour of the CUDA version. *Havok uses multiple cores, BTW.*



I doubt it. Unless they make different paths with different requirements, a developer can't make the engine use many cores by default, because it wouldn't run on slower CPUs. I don't remember any game using Havok having a physics slider either, but I could be wrong.


----------



## TheLaughingMan (Sep 27, 2009)

erocker said:


> I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.



Not sure how it all went down, but I do know ATI did not refuse.  They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers.  This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree, so they told ATI to piss off.  ATI then put a lot of effort behind helping Havok as a purely software physics solution.

How is Havok doing?  I haven't heard anything from them in a while.


----------



## KainXS (Sep 27, 2009)

Havok is still used in many games today, many more than PhysX is.

The problem with PhysX is that even though you can run it on a CPU, in newer games like Batman, try turning it up and see what happens: either A) you will get horrible fps unless you have a very good ATI card, or B) it will crash.

And if you have an ATI card on 7 or higher you're kinda screwed, because in newer drivers Nvidia has killed support if you have an ATI card, period. If you have an ATI card and an Nvidia card, you're not getting PhysX acceleration even if your Nvidia card supports it, unless it's the primary adapter, which renders the ATI card useless.

And what does Nvidia say about this?


> Nvidia supports GPU accelerated Physx on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes Physx a great experience for customers. For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated Physx with NVIDIA GPUs while GPU rendering is happening on non- NVIDIA GPUs.



And the ones who get hurt the most by this are not the customers; I think it's the developers.

But you have an 8800GTS, so I am confused as to why it's not working right.


----------



## MilkyWay (Sep 27, 2009)

TheLaughingMan said:


> Not sure how it all went down, but I do know ATI did not refuse.  They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers.  This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree, so they told ATI to piss off.  ATI then put a lot of effort behind helping Havok as a purely software physics solution.
> 
> How is Havok doing?  I haven't heard anything from them in a while.



Havok is owned by Intel; it's not like a separate thing or anything, it's just software owned by Intel. It's pretty basic, most games use it; it's nothing like proper physics.
Wait, yes it is a separate thing, it's an Irish company, but they are owned by Intel I think?


----------



## Benetanegia (Sep 27, 2009)

TheLaughingMan said:


> Not sure how it all went down, but I do know ATI did not refuse.  They tried to get Nvidia to agree to one system for physics calculations and leave it open to the developers.  This would allow physics to simply be a part of game programming and would run on everything capable of it, but Nvidia didn't see a reason to agree, so they told ATI to piss off.  ATI then put a lot of effort behind helping Havok as a purely software physics solution.
> 
> How is Havok doing?  I haven't heard anything from them in a while.



What AMD was asking for was to abandon PhysX altogether and postpone GPU-accelerated physics until there was an open API that would run on everything. They did say that Nvidia could offer PhysX to an open standardization body (i.e. Khronos) and then they would "use PhysX", although indirectly. Obviously Nvidia didn't want to wait 2+ years to offer something they could offer back then, so they continued with PhysX through CUDA, while working closely with Khronos on OpenCL and with MS on DX11. What I still don't understand is how they can support Havok without the need for any standardization process.



KainXS said:


> Havok is still used in many games today, many more than physx is.



Nope, read post #10. Before reading that a week or so ago, I thought Havok was more widely used too. But it's not. PhysX is much, much cheaper for developers than Havok, BTW. It's even free if you don't need access to the source code...


----------



## TheLaughingMan (Sep 27, 2009)

KainXS said:


> 8800GTS so I am confused as to why its not working right.



It may be disabled in the Nvidia Control Panel as it is by default.



MilkyWay said:


> Havok is owned by Intel its not like a separate thing or anything, its just software owned by Intel. Its pretty basic most games use it, its nothing like proper Physics.
> Wait yes it is its an Irish company, but they are owned by intel i think?



Not sure who owns them, but if they come through on the physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics.  It will just be designed to run on whatever GPU you are using in your system.  That project was a joint venture between Havok, Intel, and AMD/ATI.  It was called Havok FX.


----------



## Benetanegia (Sep 27, 2009)

TheLaughingMan said:


> It may be disabled in the Nvidia Control Panel as it is by default.
> 
> 
> 
> Not sure who owns them, but if they come through on the physics calculation plugin thingy they were talking about last time I checked on Havok, it will be proper physics.  It will just be designed to run on whatever GPU you are using in your system.  That project was a joint venture between Havok, Intel, and AMD/ATI.  It was called Havok FX.



Havok FX was a different thing. It was effects physics on Ati and Nvidia GPUs, before Intel bought Havok (guess why) and well before Nvidia bought Ageia. Havok FX was the response from Ati and Nvidia to Ageia's PPU. Then Intel bought Havok and everything became, how to say... cloudy.

Havok=Intel and AMD are working on GPU-accelerated physics, but it's not called Havok FX, unless they reused the name for something that is almost completely different. GPU physics = PhysX = whatever that project is called != effects physics. Effects physics are the ones with no interaction: you could break something into hundreds of pieces (which would trigger a change from a solid to a bunch of particles) and those would fall to the floor realistically, and maybe even the wind or something could deflect them, but once on the floor the player wouldn't be able to move them; nothing would.

Havok=Intel is hardly the answer anyway. For making a fair engine that runs well on everything, I would trust Nvidia over Intel any day; remember who's been paying and blackmailing PC vendors not to use competitors' products. Besides, Larrabee will be so different that no doubt they will optimize Havok to run much, much better on that architecture than on Ati and Nvidia GPUs. Nvidia's and Ati's architectures are like twins compared to what Larrabee will be.


----------



## TheLaughingMan (Sep 27, 2009)

You know, I am getting sick of these random pissing contests between those three.  Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).

Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.


----------



## Benetanegia (Sep 27, 2009)

TheLaughingMan said:


> You know, I am getting sick of these random pissing contests between those three.  Granted, it is funny at times, especially when Intel and Nvidia ignore AMD while AMD is steadily pulling itself up by the bootstraps (to coin a phrase).
> 
> Personal favorite pissing contest moment: Intel and Nvidia openly arguing about integrated graphics "solutions", then AMD quietly releasing the 780G, which at the time made all other IGPs look pathetic.



Yeah, I'm sick of it too, but reality is reality, and the reality is that there's no disinterested physics developer with a solution capable of competing with those two. Before Intel and Nvidia bought them, both Havok and Ageia bought up almost every serious competitor, including the one that was best IMO, Meqon physics (bought by Ageia). Those guys were offering things similar to Euphoria back in 1999. Duke Nukem Forever was going to use it for a level of physics and interactivity never seen before, purportedly much, much better than Half-Life 2.

I personally want much better physics in games, the kind that only hardware-accelerated physics can offer, and I want it the sooner the better. It's not the first time I've said this. That's why I always supported PhysX: because they were the only ones offering a revolution, and they were offering it now (erm, back then). Honestly, even today, I don't care what OpenCL or DX11 will offer in that regard, because whatever they offer, even if it's 10 times better and 10 times easier for developers, it won't happen until late 2010 at the soonest. That doesn't mean I don't support them, but I do so from a distance, because as things are now, supporting them means burying PhysX, and I want something until the OpenCL and DX11 accelerated solutions are ready. They are nothing more than a paper launch today.


----------



## EastCoasthandle (Sep 27, 2009)

Interesting OP. I wonder if that is happening in other CPU PhysX games like Batman (retail) or Shift?  From that author's POV he calls it flat-out sabotage.  And it's interesting that if you reduce the number of threads you can reduce the frame rate.


----------



## shevanel (Sep 27, 2009)

Check out the non-GPU-accelerated physics in Red Faction: G, by Havok I believe.

I was so unimpressed with the physics in BM:AA.

GRID has all of the things BM:AA has (smoke, flags, breakables), and BM:AA didn't even have breakable objects... (why would anyone want to run the suggested 9800GTX for physics? So useless.)


----------



## newtekie1 (Sep 27, 2009)

PhysX is emulated on CUDA, which is then being emulated in CPU code.  That isn't exactly efficient; what did you all expect?

PhysX is inherently NOT multi-threaded.  It was designed to run on a single PPU.  Why would you expect it to suddenly become multi-threaded when run on a CPU?


----------



## shevanel (Sep 27, 2009)

Counter-Strike: Source is old, I know... but I remember the very first time I ever played it, coming from 1.6, and I was amazed at how the barrels could be knocked over, debris could be kicked, rag-doll deaths...

It's been a long time since I've played a game that had such a great physics "feel".

Even Half-Life 2... the first time I played and shot one of those guards, it felt so realistic. We never needed old rebranded cards to do the job next to our main GPU. I know it's apples and oranges comparing Source to today's games, but I'm just saying...


----------



## newtekie1 (Sep 27, 2009)

Saying what?  That the physics in old games don't even come close to those in current games?


----------



## shevanel (Sep 27, 2009)

Which games are you referring to that have such great physics?

Or have I just gotten spoiled?


----------



## erocker (Sep 27, 2009)

You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. As motorists we are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "OK, this is the way it is going to be done." Set up standards for Windows, work in collaboration with hardware manufacturers, have unified physics and the like, and let the video card companies duel it out on performance.


----------



## Steevo (Sep 27, 2009)

erocker said:


> You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. As motorists we are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "OK, this is the way it is going to be done." Set up standards for Windows, work in collaboration with hardware manufacturers, have unified physics and the like, and let the video card companies duel it out on performance.



They did, it's called DX, and Nvidia ignored the evolution of it years ago, while ATI and MS are holding hands like school kids.


Now DX11 will allow the direct implementation of physics calculations on any brand of GPU. However, I believe this is just the start of the next big idea: doing all complex rendering on the GPU (movie, audio, pictures, etc.), with everything but the basic functions passed on to the faster processor, and expansive capabilities built into the software.


----------



## FordGT90Concept (Sep 27, 2009)

This might have already been said: Havok is a wholly owned subsidiary of Intel and may be part of Intel's motivation to create Larrabee.


----------



## shevanel (Sep 27, 2009)

Steevo said:


> They did, it's called DX, and Nvidia ignored the evolution of it years ago, while ATI and MS are holding hands like school kids.
> 
> 
> Now DX11 will allow the direct implementation of physics calculations on any brand of GPU. However, I believe this is just the start of the next big idea: doing all complex rendering on the GPU (movie, audio, pictures, etc.), with everything but the basic functions passed on to the faster processor, and expansive capabilities built into the software.





FordGT90Concept said:


> This might have already been said: Havok is a wholly owned subsidiary of Intel and may be part of Intel's motivation to create Larrabee.



Couldn't have said it better.


----------



## Sonido (Sep 27, 2009)

erocker said:


> I thought that it was ATi that refused to incorporate any type of PhysX for their GPUs? Either way, with DirectX 11 it shouldn't matter. Unless of course Nvidia finds another way to keep PhysX proprietary to themselves while using DX11.



Well, it's complicated. They won't do it themselves, but they aren't against someone else doing it. Meaning, AMD won't, but if you can, great!


----------



## shevanel (Sep 27, 2009)

With this new era of gaming and GPUs in general, and DX11 and faster CPUs/GPUs, I think we are in for a treat in the world of gaming. I was at Best Buy tonight looking for a game to play... I left empty-handed; nothing looks interesting enough to pay $20-50 for right now.


----------



## Benetanegia (Sep 27, 2009)

Steevo said:


> They did, it's called DX, and Nvidia ignored the evolution of it years ago, while ATI and *MS are holding hands like school kids.*



I couldn't agree more with what's in bold, but the rest is not true. Not wanting to implement all of the features that were unilaterally developed by MS and Ati doesn't mean they ignored the evolution of DX. Nvidia cards were 99% DX10.1 compliant, but since the introduction of DX10, MS has required 100% before you can call your card compliant. Apart from being 99% DX10.1, Nvidia cards were 100% or 110% compliant with what game developers wanted, which is what matters in the end. Basically, Nvidia decided to depart a bit from DX, because DX had departed from what game developers really wanted.

DX10.1 was what MS and Ati decided DX10 had to be before Nvidia or game developers had the opportunity to say anything; it was a direct evolution of the XBox 360's API and thus a joint venture between MS and Ati on their own. Hence it had to be changed afterwards to fit what Nvidia and game developers wanted (of the three companies, Nvidia is the one closest to game developers; that's something not even an Ati fanboy can deny). MS has always developed DX unilaterally, deciding what to implement and what not, and more importantly how, with very little feedback from game developers. It's because of this that most big game developers in the past preferred OpenGL (iD, Epic, Valve...). That sentiment has not really changed in recent years, but game developers have had no option but to resign themselves and play by MS's rules, because the OpenGL project was almost dead until Khronos tried to revive it (without much success, BTW).


----------



## Steevo (Sep 27, 2009)

Benetanegia said:


> I couldn't agree more with what's in bold, but the rest is not true. Not wanting to implement all of the features that were unilaterally developed by MS and Ati doesn't mean they ignored the evolution of DX. Nvidia cards were 99% DX10.1 compliant, but since the introduction of DX10, MS has required 100% before you can call your card compliant. Apart from being 99% DX10.1, Nvidia cards were 100% or 110% compliant with what game developers wanted, which is what matters in the end. Basically, Nvidia decided to depart a bit from DX, because DX had departed from what game developers really wanted.
> 
> DX10.1 was what MS and Ati decided DX10 had to be before Nvidia or game developers had the opportunity to say anything; it was a direct evolution of the XBox 360's API and thus a joint venture between MS and Ati on their own. Hence it had to be changed afterwards to fit what Nvidia and game developers wanted (of the three companies, Nvidia is the one closest to game developers; that's something not even an Ati fanboy can deny). MS has always developed DX unilaterally, deciding what to implement and what not, and more importantly how, with very little feedback from game developers. It's because of this that most big game developers in the past preferred OpenGL (iD, Epic, Valve...). That sentiment has not really changed in recent years, but game developers have had no option but to resign themselves and play by MS's rules, because the OpenGL project was almost dead until Khronos tried to revive it (without much success, BTW).



Eat the green pill, it's only 1% poison.


Nvidia ignored DX years ago and is doing it again now: by refusing to do anything with what DX10.1 brought to the table, they are either ignoring DX11 for their own selfish gains, or they have nothing to bring and only want the competition to fail. I lean toward the latter.

As for game developers: how many games are made for the 360? Lots. They are easy to port and have advanced functionality compared to the lame-duck PS3 offering. DX is what game developers want, and thus we have DX11, and I'm sure a lot of other enhancements that will be wanted for the next installment. If developers really wanted more, or something different, they would use OpenGL a lot more than they do now. Coding for the DX11 API is much easier, as it is the GPU's job to decode the high-level language and run it. So how is something with wide support, a set-in-stone coding standard, thousands of features, a huge user base, multi-platform readiness, and waiting hardware bad? Not to mention it is free... you just have to download it; the beta is available and open to the public, so there is time to get ready.


Unless you have nothing and only want to throw shit, it isn't.


----------



## shevanel (Sep 27, 2009)

There have been several threads about this topic, and it's always off-topic for the OP... once this topic comes up, the thread dies quickly.


----------



## Benetanegia (Sep 27, 2009)

Steevo said:


> Eat the green pill, it's only 1% poison.
> 
> 
> Nvidia ignored DX years ago and is doing it again now: by refusing to do anything with what DX10.1 brought to the table, they are either ignoring DX11 for their own selfish gains, or they have nothing to bring and only want the competition to fail. I lean toward the latter.
> ...



Neither of them, Nvidia nor Ati, followed MS's ways strictly until DX10, so you are completely wrong. It's not only a matter of implementing the features, it's a matter of *how* those features are implemented. Before DX10, Nvidia and Ati *always* made their own approaches to some of the features, and those features were eventually implemented in DX months if not years after they had been implemented in hardware by one of the GPU companies. That's how DX8.1 and DX9.0 a, b and c were born, just to name a few. For many developers that was a far better approach, and you can read blogs from people like Carmack, Tim Sweeney and the like if you don't believe me and want to learn a thing or two in the meantime. Why did they prefer that approach? Because it offered much better performance and much more control over how things worked on every architecture (jack of all trades, master of none, as they say). Of course the DX10 approach is better for lazy developers that don't care at all about optimization, but that's all. Most important developers make a different rendering path for the different architectures anyway, so they don't care as much if they have to do things differently for one or another. In fact, because of the closed nature of DX10, many developers have to deal more with HLSL and assembly in order to better fit their tech to the various architectures. DX10 is to graphics as Java is to general programming: Java is supposed to run the same code on every platform, but the reality is that developers create different code for different cell phones, for example. Otherwise they wouldn't be able to be competitive on all of them.

Because of how strict DX10 is, hardware can only be built one way, which doesn't mean it's going to be the better way. Ati or Nvidia could come up with a better way of doing graphics (along with game developers), but they will never be able to do that because of DX's closed nature. That *in no way* helps first-tier developers, because they will not be able to access any new features, or new ways of doing those features, until MS decides it's time to implement them in DX12. Take tessellation for example: the way MS wants it done is different from what Ati wanted it to be, and still very, very different from what a perfect design would be, but even if Ati/Nvidia/Intel come up with a better design in the future, they will not be able to do it that way.

When it comes to DX10.1 features, Nvidia decided to use their own approach to many things, like reading from the back buffer and cube map arrays, but they couldn't say their cards were DX10.1 because they were not doing them the way MS had specified. If it had been DX9, not only would Nvidia have been compliant with DX9, but development of a new DX9.0d would have started to include that way of doing things. In the end, MS's DirectX API is nothing more than an interface and bridge between software developers and hardware developers; it's those two who have, or should have, the last word, not MS. If you want a comparison with cars, it's as if M$ specified that all engines had to be in a V shape: V6, V8, V12... That is good and all, and most sports cars use that, but then here comes Porsche saying that for their cars the Boxer (horizontally opposed cylinders) is much better. They would be way outside the standard, but the reality is that Porsche 911s with their 6-cylinder Boxer engines are much better than most V8s out there, and certainly better cars than any V6. Boxer engines might not fit every car, but they certainly make Porsches the best supercars for the money.

Now regarding the Xbox 360: again, the fact that a game can be ported to PC straight from the Xbox 360 is in no way a good thing. It surely is better for lazy developers that don't care about properly porting a game to a platform that is 4 times more powerful, a platform that will get nothing in return because of how poorly optimized the port is. It certainly isn't good for us, who have been getting subpar console ports these last years.

And last, about OpenGL. Developers don't use OpenGL because it didn't evolve over the years; the project was almost abandoned, and before that things were not going well inside the forum, because there was too much fragmentation about how things had to be done. That's the absolute opposite of what MS did with DX10, where they said things are going to be done this way, period. Well, neither of the approaches is good. Another reason OGL was abandoned in favor of DX, and maybe the most important one, is that most developers had to use DirectInput and DirectSound for compatibility reasons anyway, even if they were using OpenGL for the graphics, so it just became easier to use only one of the APIs.

I'm not a fanboy, so you can take this as it is, a statement of how things have truly happened, or you can swallow another of your own red pills, which it's obvious you already did.


----------



## Steevo (Sep 27, 2009)

Benetanegia said:


> I'm not a fanboy



http://forums.techpowerup.com/search.php?searchid=9140235


Cool, just so we are on the same page.


----------



## dr emulator (madmax) (Sep 27, 2009)

I'm only interested in knowing if I can play Pong using it.
Now where's my...


----------



## Benetanegia (Sep 27, 2009)

Steevo said:


> http://forums.techpowerup.com/search.php?searchid=9140235
> 
> 
> Cool, just so we are on the same page.



Resorting to jokes and name calling means you have nothing to say. No arguments in your favor. Well, I won't say I didn't expect that.

On topic, I have done the same on L4D and Fallout 3, two games using Havok, and only one core is being used, so no difference. I have tried Mirror's Edge and it always uses 2 cores, 50% load, no matter if PhysX was enabled, disabled in game, or disabled in the control panel but enabled in game. The performance difference between the modes was notable though.










Mirror's Edge doesn't have a windowed mode option, so I can only post the task manager. If someone knows how to enable windowed mode in Mirror's Edge, please tell me and I will redo it. The level is the one right after you talk to your sister; it's perfect because soon after you start, some soldiers shoot out a lot of glass. I stopped soon after that shootout:

EDIT: here http://www.youtube.com/watch?v=V_B3_upOvmc - the one that starts at 0:50.

With PhysX disabled in control panel and GPU accelerated PhysX enabled in game, didn't run FRAPS but fps were below 10 for sure.






This one is with PhysX completely enabled.





^^ NOTE: Even if it says 29%, the actual average usage was around 40-45%, a little bit less than when PhysX acceleration was disabled (50-55%), but the difference is negligible. I posted the SS so that the graphs can be seen, as those tell the story better. When PhysX is enabled there's more variance (and spikes) in CPU usage between different cores. That 29% also confirms the variance; when PhysX was disabled it never dropped below 50%.

So that pretty much says it all. It depends on the developer how many cores are used.
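The arithmetic behind those Task Manager graphs is simple: one saturated thread on an N-core machine shows up as roughly 100/N percent overall usage. A minimal sketch of that averaging (all sample values below are hypothetical, not measurements from any of these games):

```python
def overall_usage(per_core):
    """Overall CPU usage (percent) implied by per-core readings,
    the way Task Manager averages them across all cores."""
    return sum(per_core) / len(per_core)

# One physics/game thread pinned to a single core of a quad:
# the game reads as "25% CPU" even though that core is maxed out.
single_threaded_quad = [100.0, 0.0, 0.0, 0.0]
print(overall_usage(single_threaded_quad))  # → 25.0
```

So a reading of 25-29% on a quad is exactly what a single-threaded game plus a little background work looks like.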


----------



## Steevo (Sep 27, 2009)

I called you a name? You said you were not a fanboy, and I just searched your posts, most being anti-ATI and pro-Nvidia. If that isn't being a fanboy, then what is?


As to your screenshots, a game on pause doesn't use physics, and if you use any one of the many available apps you can watch your in-game GPU, CPU, memory, Vmem, temps and much else.


The article in question merely shows that a system with a faster processor can run a game with physics enabled on the CPU, but because Nvidia has its hands in developers' pockets, the consumer suffers unless you want to purchase their, and only their, proprietary hardware and run their proprietary software.


http://www.ngohq.com/graphic-cards/16223-nvidia-disables-physx-when-ati-card-is-present.html



Nvidia disables PhysX when ATI card is present 

--------------------------------------------------------------------------------

Well, for all those who have used Nvidia cards for PhysX and ATI cards to render graphics in Windows 7... all that is about to change.

Since the release of the 186 graphics drivers, Nvidia has decided to disable PhysX anytime a non-Nvidia GPU is even present in the same PC. Nvidia has again shot themselves in the foot here and shown they are not customer oriented. Since they are pushing PhysX, this latest decision will not win over any ATI fanboys.

Here is a copy of the email I received from Nvidia support confirming what they have done.

"Hello JC,

Ill explain why this function was disabled.

Physx is an open software standard any company *can freely develop hardware or software that supports it**. Nvidia supports GPU accelerated Physx on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes Physx a great experience for customers. For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated Physx with NVIDIA GPUs while GPU rendering is happening on non- NVIDIA GPUs. I'm sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care"

------------------------------------
*So as long as you have their card, you can run any of their open source hardware or software, on their card. Not any other card, just theirs, being open source and all, for all hardware, that is theirs, and only theirs, since it is still open to all, and everyone who has their software, and hardware, of course.


Enabling PhysX in game creates MORE demand on system components, not less.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=8862&Itemid=1


----------



## Benetanegia (Sep 27, 2009)

Steevo said:


> I called you a name? You said you were not a fanboy, and I just searched your posts, most being anti-ATI, and pro Nvidia. If that isn't then what is?
> 
> 
> As to your screen shots, a game on pause doesn't use physics, and if you use any one of the many available apps you can watch your in game GPU, CPU, memory, Vmem, temps and much else.
> ...



Nvidia offered to let Ati run PhysX on their cards. They said no, end of story. And you can't run two graphics drivers in Vista; that's why they decided not to support Ati+Nvidia for PhysX.

EDIT: The games were not paused. 

About PhysX requiring more CPU when the GPU is being used for PhysX instead of Ageia's PPU, there's no doubt about that, but it uses *much less CPU* than if the CPU had to do the physics. You proved nothing, mate. BTW, the reason the Ageia PPU used less CPU is that the PPU had a CPU incorporated; it was a CPU + parallel processors, so it could run ALL the physics code on the PPU, including branching. GPUs can't do branching (yet) and hence some CPU is still required, but much less than if the CPU were doing the physics.

About fanboys and not fanboys, you need a reality check. Here in these forums many people directly bash Nvidia, and no one says anything; those people are never fanboys. But if someone who looks at things from all angles even dares to explain why Nvidia shouldn't be bashed for that, providing all the reasons behind his statement... he is a fanboy. Honestly, I doubt any of you have even read my arguments; you just keep saying the same things and resorting to jokes and whatnot. You give no reasons, and links that have nothing to do with the topic at hand. You don't comment on the results that I just posted because, frankly, they do no good to your POV.

You are a fanboy and I am an anti-fanboy; that's why I will always jump in when a fanboy bashes one company. It just happens that in these forums that's always Nvidia. Check these forums better and you will see how Ati is never bashed like Nvidia is. In real life I always side with the one taking an unjustified beating too; that even earned me a fine once, because I tried to help a guy who was being hit by some guards at a disco, and the guards were friends with the police patrol that was in charge that day. So I got hit by the guards, later by the police, and then I got a fine. Nice, eh? But that's how I am. So as I said, if I see someone bashing a company or technology for something it doesn't deserve, I will always jump in. I just can't stand the fanboy's mindless lack of reasoning, and this thread is a good example of it. I mean, almost no game is multi-threaded (and I proved it somewhat above), but one game is not multi-threaded AND uses PhysX, and suddenly it's "OMFG, Nvidia cheating, Nvidia paying developers so that they don't make their games multi-threaded". It's absurd, and the fact that you seem unable to see that, after all the reasons I gave for games not using more than one core, just shows how willingly you believe in that nonsense.


----------



## EastCoasthandle (Sep 28, 2009)

erocker said:


> You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on like we drive are cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "Ok, this is the way it is going to be done." Setup standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.



Can't argue with that.   This is how consumers win in the long run.


----------



## MrMilli (Sep 28, 2009)

Benetanegia said:


> On topic, I have done the same on L4D and Fallout 3, two games using Havok, and only one core is being used, so no difference. I have tried Mirror's Edge and it always uses 2 cores, 50% load, no matter if PhysX was enabled, disabled in game, or disabled in the control panel but enabled in game. The performance difference between the modes was notable though.



Did you enable 'Multicore rendering' in L4D?
I'm asking this because when they introduced MR in TF2, the explosions ran much better than before. But it's possible that Valve just split off the physics processing to a separate core (and didn't really thread the physics).
What bugs me about PhysX is that the PPU and GPU versions are already heavily threaded.


----------



## Benetanegia (Sep 28, 2009)

Wolfenstein, another Havok game.






Crysis Warhead.






NFS:Shift






One uses Havok, another uses its own proprietary physics engine, and the last one uses CPU PhysX; none of them use more than one core. Of all the games I have installed right now, only Mirror's Edge, a PhysX game, used 2 cores. So what's the reasoning behind that?

As I explained in my first post in this thread, game developers have to cater to the biggest audience possible, so making their games require more than one core would shrink their user base dramatically, or they would have to make two or more versions for different numbers of cores and probably different performance targets, as a slow quad could well be worse than a fast dualie. They simply cannot use 4-8 cores because only a very small portion of people have quads or better. It's their call whether to use two cores or one depending on their target audience. I think I have provided enough samples of new games to show that most games are single threaded no matter which API they use.

Physics is an added feature on top of that version of the game that is meant to run everywhere. That way it's easy for them to implement it. The GPU-accelerated PhysX version is also meant to run on almost any Nvidia GPU; it's very basic compared to what newer cards would be capable of. I have an 8800GT in a quad (this PC) and a 9800GTX+ on an Athlon X2, and both systems run all PhysX-enabled games perfectly, doing PhysX and rendering (1600x1200 4xAA) smoothly. That's how developers work: very few of them want to risk losing user base by asking for requirements that are too high. Hence they always develop for the least common denominator: 1 core.
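The scaling argument running through these posts can be sketched in a few lines: a naive per-object integration step split over worker threads gains at best a factor equal to the thread count, which is why 3 fps on one core can't become playable even on eight. This is illustrative code only, not any engine's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def physics_step(positions, velocities, dt, workers=1):
    """One naive Euler step over all objects, split across `workers`
    threads. Doubling workers at best doubles the objects per frame,
    so a CPU path sized for one core stays the common baseline."""
    n = len(positions)

    def integrate(bounds):
        lo, hi = bounds
        for i in range(lo, hi):
            positions[i] += velocities[i] * dt

    # Partition the object list into one contiguous chunk per worker.
    chunks = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(integrate, chunks))
    return positions

# Same result whichever way the work is split:
print(physics_step([0.0] * 4, [1.0] * 4, 0.5, workers=2))  # → [0.5, 0.5, 0.5, 0.5]
```

The throughput, not the correctness, is what changes with `workers`, and it only changes linearly, which is the whole point being made above.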


----------



## erocker (Sep 28, 2009)

This is with multicore enabled in L4D


----------



## Benetanegia (Sep 28, 2009)

MrMilli said:


> Did you enable 'Multicore rendering' in L4D?
> I'm asking this because when they introduced MR in TF2, the explosions ran much better than before. But it's possible that Valve just split off the physics processing to a separate core (and didn't really thread the physics).
> What bugs me about PhysX is that the PPU and GPU version are already heavily threaded.



L4D: no, I don't use MR, because for some reason it stutters heavily on my PC. I get better overall fps with one core anyway. Besides, that's rendering, so I think it's better to have it that way: if the game used more than one core, then it would be because of the physics. If I had both enabled, how would we know whether the CPU usage comes from the rendering or the physics?

The problem is not in making the engine multi-threaded. The problem is that most PCs don't have that many cores, so you would have to make a lot of versions. If you make your engine use 8 threads, but in your game you only put in enough objects to load 1 of them so that the game can run everywhere, that's pointless. If you put in enough scenery to load all 8 cores, anything slower will be incapable of running the game. So with PhysX they decided that, instead of doing a basic 1-core version (for everyone) and then one for 4 or 8 cores, they would make one for 1 core and one for the GPU, which is comparable to a 20-core CPU *and* still uses only 1 CPU core, so that someone with a slow CPU but a decent GPU (9600GT, 8600GT?) can take advantage of it.

Having 4x the power will not enable much better physics than what a single core can do anyway. Instead of the usual < 20 physics-enabled objects (count them if you don't believe me), it would enable the use of 80, but that won't make any difference; a single pane of glass in Mirror's Edge shatters into 4x that amount of pieces, and they still look like they are not enough.



erocker said:


> This is with multicore enabled in L4D
> 
> http://i403.photobucket.com/albums/pp112/erocker414/l4dcpu.jpg



So it uses 2 cores, and I suppose you get better framerates? That's what I have heard, although it doesn't work for me.


----------



## Steevo (Sep 28, 2009)

Here is a random screen shot!!!!

It will prove a point. I'm not sure what, but it will.


----------



## Benetanegia (Sep 28, 2009)

Steevo said:


> Here is a random screen shot!!!!
> 
> It will prove a point. I'm not sure what, but it will.



Is that sarcasm? Again... :shadedshu

If it is, maybe you can explain to me what makes the screenshot in the OP any less random and useless? (Not deprecating the OP, just making a point.)

And if it's not sarcasm, I suppose it could demonstrate either of these two possibilities:

1- NFS:Shit (edit: <-- that was accidental I swear, but... I think I'll leave it that way) uses PhysX and, when running on a Radeon card, uses two cores, even though it only uses one with an Nvidia card. Absolutely destroying what is being suggested in this thread.

2- It uses 2 cores under Win7, but only one on XP. I'll take the trouble of installing the game on my Win 7 partition tomorrow and see if this one is a possibility.


----------



## MrMilli (Sep 28, 2009)

erocker said:


> This is with multicore enabled in L4D
> 
> http://i403.photobucket.com/albums/pp112/erocker414/l4dcpu.jpg



Well, that's what I thought. MR spawns four threads for your quad core.
Actually, Valve is making a big push towards MR. The Valve particle simulation benchmark is one example of their efforts. But since Valve uses a modified Havok engine, it's going to be hard to compare it to other Havok-based games.


----------



## Benetanegia (Sep 28, 2009)

MrMilli said:


> Well, that's what I thought. MR spawns four threads for your quad core.
> Actually, Valve is making a big push towards MR. The Valve particle simulation benchmark is one example of their efforts. But since Valve uses a modified Havok engine, it's going to be hard to compare it to other Havok-based games.



But IMHO rendering != physics; nothing in the meaning of rendering includes physics. That added CPU usage is because of the improved rendering and not from physics calculations, IMO. There's very little physics in L4D anyway; it wouldn't max out a core.


----------



## MrMilli (Sep 28, 2009)

Benetanegia said:


> But IMHO Rendering != physics, nothing in the meaning of rendering includes physics, that added CPU usage is because of the improved rendering and not from physics calculations IMO. There's very little physics in L4D anyway, they wouldn't max out a core.



Rendering happens on the GPU, not the CPU. 
Of course, having four threads doesn't mean two of them are for physics, but that's the most plausible explanation. There is AI, sound processing, game mechanics, ... too.
Actually, L4D contains a lot of physics (and AI & Director)! All characters have complete interaction with their surroundings (movable objects, line of sight, collision detection, ...). Especially when a 100+ horde rushes at you.

This shows the difference between single and dual core pretty well:





The difference between the 4000+ and the 5000+ is huge.


----------



## Benetanegia (Sep 28, 2009)

MrMilli said:


> Rendering is happening on the GPU, not the CPU.
> Of Course having four threads doesn't mean two of them are for physics but that's the most feasible obviously. There is AI, sound processing, game mechanics, ... too.
> Actually L4D contains a lot of physics (and AI & Director)! All characters have complete interaction with the surrounding (movable objects, line of sight, collision detection, ...). Especially when a +100 horde rushes at you.
> 
> ...



Man, a lot of the rendering process happens on the CPU. The graphics card needs to be fed by the CPU, among other things, and a faster CPU almost always gives better fps.

I don't think the zombies collide with each other, which would add a lot of processing, so there's not a lot of physics going on there, IMO. I've just played to be sure about this, and they do not collide. Collisions in Source are based on the typical hitbox scheme anyway; it's fairly simple. I don't know if there is any slowdown when the zombie hordes come, because it never goes below 60 fps. But the fact that it always remains above 60 fps with a single core says it all anyway.


----------



## EastCoasthandle (Sep 28, 2009)

It is my understanding that the use of PhysX (as an API) shows up more in I/O activity than in CPU activity.  Has anyone looked to see?


----------



## Benetanegia (Sep 28, 2009)

EastCoasthandle said:


> It is my understanding that the use of PhysX (as an API) shows up more in I/O activity than in CPU activity.  Has anyone looked to see?



It depends. If the calculations are done on the CPU it will be more than I/O, obviously, but when the GPU is being used for the calculations, then probably yes, although the branching work is still there. Anyway, I have always understood that reported CPU activity is based on the load on the instruction decoder and not the ALUs.

And OK, I have just tried Batman, and I guess it uses as many cores as it needs on my PC, because I've seen 60%+ loads when the PhysX extensions were enabled in the game running on the CPU, with PhysX acceleration off in the CP. It was not playable. Here:







It's curious, because maximum CPU load happened when close to the smoke, but even then the fps is better than when you move anywhere else. Apparently the coat eats up more GPU resources.

Around 50% load when PhysX acceleration was on, and smooth gameplay on my 8800GT (30 low - 60 high). Everything maxed out, including physics. The recommended GTX260 + 9800GTX+ dedicated to PhysX is certainly a marketing joke.


----------



## shevanel (Sep 28, 2009)

> Certainly the recommended GTX260 + 9800GTX+ dedicated to PhysX is a marketing joke.



Finally someone else that agrees.


----------



## MrMilli (Sep 28, 2009)

Benetanegia said:


> Man a lot of rendering process happens in the CPU. The graphics cards need to be fed by the CPU, between other things, and a faster CPU almost always gives better fps.
> 
> I don't think that zombies collide with each other, which would add a lot of processing, so it's not a lot of physics going on there IMO. I've just played to be sure about this and they do not collide. Collisions in Source are based in the typical hit-box squeme anyway, it's fairly simple. I don't know if there is any slowdown when the zombie hordes, because it never goes below 60fps. But the fact that it always remains above 60fps with a single core tells it all anyway.
> 
> It's curious, because maximum CPU load happened when close to the smoke, but even then the fps's are better than when you move in any other place. Apparently the coat eats up more resources from the GPU.



You really don't need to take me to school. I know how rendering in DirectX is done. 
While a lot of the rendering used to be done on the CPU in the pre-DX8 era, with each new DX iteration less and less work is done by the CPU for the actual rendering (and there's less CPU overhead).

While I have no doubt that a single core can run 60+ fps most of the time in L4D, I highly doubt it can maintain that during a final battle (or you must be running a Core 2 close to 4GHz).

About the smoke: these days that's done through particle effects, which run on the physics engine. Coat simulation is done by the physics engine too (unlike older games, where such a thing was pre-calculated).

This is a nice read for you guys:
http://arstechnica.com/gaming/news/2006/11/valve-multicore.ars
Basically, they need multi-threading for physics and AI.


----------



## Benetanegia (Sep 28, 2009)

MrMilli said:


> You really don't need to take me to school. I know how rendering in DirectX is done.
> While a lot of the rendering used to be done on the cpu in the pre-DX8 era, with each new DX iteration less and less work is done by the cpu for the actual rendering (and less cpu overhead).
> 
> While I have no doubt that a single core can run +60 fps for most of the time in L4D, i highly doubt that it can maintain that during a final battle (or you must be running a Core2 close to 4Ghz).
> ...



Who's taking whom to school now? 

Seriously, we both know what we're talking about; we just have a different view of the weight of each task within the pipeline. After reading your link I'm even more convinced of my POV regarding L4D and the topic at hand, physics. Rendering and AI (mainly pathfinding) heavily outweigh physics processing in that game, so IMO any performance increase from adding multiple threads comes from the expanded rendering and AI capabilities and much less from physics processing.

Another thing is that physics calculations (and sound, AI) are not part of the rendering pipeline, at least in most games. That's why I didn't take multi-threaded rendering to mean the use of more than one core. I thought the game was already multi-threaded (having physics and AI in other threads), but that when MR was enabled the rendering tasks were split into more threads too. What they are calling multi-threaded rendering, I would call multi-threaded execution or something like that.


----------



## MrMilli (Sep 28, 2009)

cheers


----------



## FordGT90Concept (Sep 28, 2009)

What gets me is that CPUs are advancing just as fast as GPUs.  Why move tasks off the CPU just to leave the CPU bored and wasting power?  Optimizing a game means taking a big workload and finding ways to make it consume much less.  If it takes PhysX 1 billion cycles to do something Havok can emulate in 1 million cycles, why not use the option that is 1000x more efficient?  No one would be the wiser.

Oh, and zombie AI is pretty simple.  Most of the work in a game like L4D comes from none other than animations.  When you have a horde of zombies running at you, that's a crapload of triangles to update each frame.  Models/character animations are a joint CPU and GPU task: the GPU takes care of most of the visuals while the CPU handles synchronizing the visuals with the backend (AI, position, etc.).  The GPU obviously bears the majority of the load because the CPU has more complex things to be concerned with (like collision detection).


----------



## Benetanegia (Sep 28, 2009)

FordGT90Concept said:


> What gets me is that the CPUs are advancing just as fast as GPUs.  Why move tasks off the CPU just to leave the CPU bored and wasting power?



First of all, because CPUs have not advanced as fast as GPUs. Also, GPUs have much greater floating point power to boot and are parallel by default; they are much better suited for physics. Anyway, GPU-accelerated PhysX doesn't use much less CPU; the idea is that while using all the CPU power available, you also use all the GPU power available to do 20x more physics calculations.



> Optimizing a game means taking a big workload and finding ways to make it consume much less.  If it takes PhysX 1 billion cycles to do something Havok can emulate with 1 million cycles, why not use the option that is 1000x more efficient?  Few would be the wiser.



PhysX doesn't require any more power than Havok.



> Oh, and zombie AI is pretty simple.  Most of the work in a game like L4D comes from none other than animations.  When you have a horde of zombies running at you, that's a crapload of triangles to update each frame.  Models/character animations are a joint CPU and GPU task: the GPU takes care of most of the visuals while the CPU handles synchronizing the visuals with the backend (AI, position, etc.).  The GPU obviously bears the majority of the load because the CPU has more complex things to be concerned with (like collision detection).



Path finding >> collision detection, at least if the game has been properly coded. Position tracking of objects/characters is always active, for obvious reasons. Path finding must always be active as long as the AI is within the player's zone (to be specified by the dev). Collision detection only needs to kick in when two objects are close, and most of the time only if the object/character is on screen.

The rest you said is correct which accounts for my original claim:

In games like L4D, physics <<<<<<<<< everything else.


----------



## dr emulator (madmax) (Sep 29, 2009)

Steevo said:


> Here is a random screen shot!!!!
> 
> It will prove a point. I'm not sure what, but it will.



Yeah, thanks Steevo, it reminded me that GPU-Z is out 
available here


----------



## pantherx12 (Sep 29, 2009)

Play the game without PhysX, and notice physics still happen!

Having it off means the calculations run on the CPU. Simples!

I've always been happy with the physics in games running just off the CPU.


----------



## FordGT90Concept (Sep 29, 2009)

Benetanegia said:


> First of all, because CPUs have not advanced as fast as GPUs. Also GPUs have much greater floating point power to boot and are parallel by default. They are much better suited for physics. Anyway, PhysX when GPU accelerated doesn't use much less CPU, the idea is that using all CPU power available, you too use all the GPU power available to make 20x more physics calculations.


The GPU is already burdened with millions of triangles while the CPU is at < 50% load.  Why add a physics workload to a device that is already strained?




Benetanegia said:


> PhysX doesn't require any more power than Havok.


Then there's no reason to use PhysX at all.




Benetanegia said:


> Path finding >> collision detection. At least if the game has been properly coded. Position of objects/characters is always active, for obvious reasons. Path finding must be active always, as long as the AI enters the players zone (to be specified by the dev.). Collision detection only needs to kick in when two objects are close and most of the times only if the object/character is onscreen.
> 
> The rest you said is correct which accounts for my original claim:
> 
> In games like L4D, physics <<<<<<<<< everything else.


Collision detection is what keeps you, all movable entities, and perhaps AI characters from falling out of the play area.  The number of calculations climbs when one entity (player or object) impacts another.  Collision detection itself is a very simple greater-than/less-than check that happens frequently; it's what happens after a collision has been detected (entity velocity transferred to another entity) that can exponentially increase the workload.
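That "greater-than/less-than check" is essentially an axis-aligned bounding-box overlap test. A minimal sketch (the tuple layout here is my own, hypothetical convention, not any engine's actual representation):

```python
def aabb_overlap(a, b):
    """Broad-phase collision check between two axis-aligned boxes,
    each given as (min_x, min_y, max_x, max_y): a handful of
    comparisons per pair, cheap until something actually collides."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

print(aabb_overlap((0, 0, 2, 2), (1, 1, 3, 3)))  # → True (boxes intersect)
print(aabb_overlap((0, 0, 1, 1), (2, 2, 3, 3)))  # → False (boxes apart)
```

The expensive part, the collision *response*, only runs for the pairs this cheap check lets through.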


Yes, L4D is a bad platform for benchmarking physics.  Red Faction: Guerrilla would have been a better choice.


----------



## Benetanegia (Sep 29, 2009)

FordGT90Concept said:


> The GPU is already burdened with millions of triangles while the CPU is at < 50% load.  Why add a physics workload to a device that is already strained?



The GPU is not strained at all. The most powerful cards render games at 100 fps when 60 is the maximum you really need. But that point of view is pointless anyway: GPUs are not using all of their SPs when rendering, so it makes sense to use them for something. MIMD will make that even easier. Also, a Core i7 running at 4GHz doesn't reach 100 GFLOPS, while current-gen graphics cards have almost 3000 GFLOPS. You could use 2500 for rendering and 500 for physics and you would never notice it.
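As a back-of-the-envelope check on those figures (the unit counts, clocks and FLOPs-per-cycle below are illustrative round numbers for 2009-era hardware, not measurements):

```python
def peak_gflops(units, clock_ghz, flops_per_cycle):
    """Theoretical peak = execution units x clock x FLOPs issued per cycle."""
    return units * clock_ghz * flops_per_cycle

cpu = peak_gflops(units=4, clock_ghz=4.0, flops_per_cycle=4)      # quad core doing 4-wide SSE
gpu = peak_gflops(units=1600, clock_ghz=0.85, flops_per_cycle=2)  # ~1600 SPs issuing a MADD

print(cpu, gpu)  # the CPU lands well under 100 GFLOPS, the GPU near 3000
rendering_budget, physics_budget = gpu - 500, 500  # the 2500/500 split proposed above
```

Even with generous assumptions for the CPU, the gap is more than an order of magnitude, which is the point being made.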



> Then there's no reason to use PhysX at all.



Whaaat? That makes no sense. Havok and PhysX use almost the same CPU resources to offer the same functionality when doing physics on the CPU; how does that mean there's no need for PhysX? Is there no need for Havok either? As I said, it makes no sense.


----------



## RejZoR (Sep 29, 2009)

What good is 3000 GFLOPS if all they ever do is a bunch of stupid debris?! Ever seen Havok in action?

LINK:
http://www.havok.com/index.php?page=showcase

They don't mention doing it on a GPU, so I assume it's done on the CPU. Yet I can see ALL the effects done with Havok, plus many I've never seen with PhysX. Especially lame is the lack of simple stuff like flags in PhysX games when using the CPU, whereas Havok simulates something like thousands of flags at once. The interior destruction also looks impressive. Or the cloth simulation.

But noooo, NVIDIA wants their crappy exclusive physics no one wants except them. I know many GeForce users who said the effects are cool the first time you see them, but they become lame and boring very fast.
The only way to really evolve physics is to make an open standard. Otherwise developers have no intention of wasting time implementing it just so only half of the users will be able to use it.
And those who do are just bribed by NVIDIA to do it. Nothing else.
People at NVIDIA obviously don't seem to understand the need for physics to evolve.

We've come a long way with graphics in 10 years, from cartoonish to near photorealistic.
But what has happened with physics in 10 years? We've come from realistic ragdolls (most games), debris (Red Faction, Max Payne 2, Half-Life 2), destructible environments (Red Faction - 2001! - and partially Max Payne 2 and Half-Life 2) and evolved gameplay and puzzles that rely on physics (Half-Life 2 and partially Max Payne 2) to ragdolls and fuckin debris only !?!?!?!? Where is everything else!?
What the hell!? Are we going backwards? Physics haven't evolved. They have devolved.
If you don't believe me, just look at the games and when it started happening.
Max Payne 2 was insane as far as overall physics are concerned. Ragdolls add an insane cinematic feeling, plus small objects like cans, tires, gas cans, bottles, and wall destruction (scripted, but when walls went down they did so with realistic physics). Everything flies around in firefights, and when you slow down it looks even more dramatic. You shoot a guy with a shotgun and he flies over some table, sweeping glasses and bottles from it in all directions. And best of all, all the physics were done on the CPU back in freakin' 2003! But today, in late 2009, NVIDIA is feeding us GPU-only bullshit that can only showcase a bunch of useless and boring effects like flags, papers flying around and some smoke. Are you kidding me?! You needed an uber-powerful GPU and 10 years for that!? I think that's embarrassing at best, but only people at NVIDIA and some fanboys think PhysX is a great thing. I think it's not, and it's just damaging games, physics effects and gameplay.

We could already have physics that would give the player the feeling of an actually believable world, where you step into it and everything just feels real. But because of "awesome" PhysX, we still have a 99% static world and the 1% of useless debris we saw a decade ago.
Way to go NVIDIA. Way to go (are you feeling the sarcasm here?  ).

So spread the word and educate gamers about the damage PhysX and NVIDIA are doing to the gaming community. It's the only way to stop this crap.


----------



## Benetanegia (Sep 29, 2009)

RejZoR said:


> What good is 3000gflops if all they ever do is bunch of stupid debris?! Ever seen Havok in action?
> 
> LINK:
> http://www.havok.com/index.php?page=showcase
> ...



Considering that you linked to a page where Havok *Reactor* is being shown, which is the offline physics engine that comes integrated in 3ds Max, Maya and the like, I won't bother reading your whole post. What's in bold already tells me the tone of the post, and seriously... never mind.

Reactor takes seconds and sometimes minutes to calculate a single frame. That's what is being shown there, and it has nothing to do with Havok in games, except that it belongs to the same company.

Overall, nice try. 

And TBH, I'm really tired of people saying we don't need PhysX because we have Havok. That's the most stupid thing I have ever heard. We don't need Ati because there is Nvidia, or vice versa? We don't need AMD because of Intel, or vice versa? Have you ever heard of competition?

I'll say this one more time in big bold letters, maybe this way you guys can understand that simple fact. I don't know if it has been unnoticed until now, but now you will be able to read it easily, if that has been the case: 

*PhysX is an API, just like Havok, that can run on the CPU and offer the same functionality that Havok does on the CPU.

It can also run on the GPU, offering the possibility of much better physics/graphics effects, something that Havok doesn't (yet).

If you don't want the added detail, you can just disable it and have the same lacking physics that all the Havok games have (i.e. Fallout 3, L4D...). Nvidia and developers don't force you to use that option; it's entirely your choice and it is free.

If developers wanted to use more than one core, they would. But they don't. As I have demonstrated above, most developers only use one core. That's because they have to cater to the largest possible audience, and that audience still has single-core or dual-core CPUs, or a triple core that is weaker than a single-core Athlon 64 (Xbox 360).*

*Red Faction: Guerrilla uses its own proprietary physics engine for destruction and special details; Havok is only used for common physics, the ones you see in Fallout 3, etc. Another game that has circulated and been used for comparison against PhysX is Star Wars: The Force Unleashed; this game also uses a proprietary engine based on Euphoria, which is another offline-oriented physics engine. Note that I said based on, which means it has been heavily modified. That is always an option, but it is almost irrelevant where you start if you are going to tweak that much. You can even start from scratch like Crytek did, but using a third-party engine is more productive. Well, there are only two big players and a third smaller one: PhysX, Havok and Bullet. Wanting any one of them to disappear is like wanting a CPU manufacturer to disappear. It's STUPID.*


----------



## SNiiPE_DoGG (Sep 29, 2009)

yo can someone tell me something?

what the HELL is the point of running physics calculations on GPU power when 99% of new games today are *GPU* limited? Meanwhile our expensive multi-core, multi-thread CPUs are sitting around using 1.75 of their 4-8 possible cores (threads)

yeah that's what I thought... GPU physics is a bad idea in the first place, we need our GPUs to do GRAPHICS PROCESSING, not a whole bunch of other crap.


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> yo can someone tell me something?
> 
> what the HELL is the point of running a physics calculation using GPU power when 99% of new games today are *GPU* limited? meanwhiles our expensive multi-core, multi-thread CPU's are sitting around using 1.75 of their 4-8 possible cores(threads)
> 
> yeah thats what I thought... GPU physics is a bad idea in the first place, we need our GPU's to do GRAPHICS PROCESSING not a whole bunch of other crap.



Have you read the thread? NO. 

Ok. I'll give you a summary that explains it:

GPUs have a lot of spare shader processing power. Most times when a program says the GPU is at 100% load, it's NOT. Most of the time not even 75% of the shaders are being used*. The only program so far that uses all of them is FurMark. Modern GPUs, the HD 5xxx and GT300, can take advantage of that spare processing power, which accounts for 500++ GFLOPS, to do something. What the hell is wrong with that? Your expensive CPU using all of its 8 cores can only do 80-90 GFLOPS. A <<$100 GPU can do orders of magnitude more. That CPU power is much better used to improve the idiotic AI that most games still have, for example. Anyway, the idea is that by moving all the parallel work to the GPU, you wouldn't need an expensive CPU. In fact, in the near future very few people will need an expensive CPU, whether they are gamers, designers or artists.

* It says 100% load because either all the ROPs or, more commonly, all the texture units are in use, and since the DX pipeline requires a free unit at every stage, it reports 100% load.


----------



## SNiiPE_DoGG (Sep 29, 2009)

yes I read the thread... lol

anyway, tell me then: why does PhysX drop the performance of the video card when used?


and saying we will stop using expensive CPUs... lol do you kiss a framed picture of Jen-Hsun Huang when you wake up in the morning?


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> yes I read the thread... lol
> 
> anyway, tell me then why does Physx drop performance of the video card when used?
> and saying we will stop using expensive CPU's... lol do you kiss a framed picture of Jenn Hsung when you wake up in the morning?



Because a lot more detail is being rendered?  Let me reformulate that question:

anyway, tell me then: why does anti-aliasing drop the performance of the video card when used?

Regarding the demise of expensive CPUs: first of all, no, I kiss no one's photo, and second, I use my brain. There are several programs that use GPU parallel processing right now and all of them are much faster than CPU-only programs. Many more are being developed for OpenCL, DirectCompute, CUDA and Stream. Soon every bit of floating-point power in GPUs is going to be used. That means that most common tasks like video encoding, image processing, data sorting and a long list of others are going to take advantage of 3000 GFLOPS of power, while the CPU will have less than 150 GFLOPS. Games are going to take advantage of that too, for physics, AI, positional sound... Tell me then, what are you going to use those 150 GFLOPS for, when you have 3, 5, or 10 TFLOPS on the GPU? Web browsing? Chatting? Word processing? In the most demanding applications, where the power is really needed, the difference between an expensive CPU and a cheap one is going to be 3050 GFLOPS for the cheap $100 one (equivalent to a heavily OCed Core 2 Quad) and 3150 GFLOPS for the expensive $600 one. Wow, big difference.

Yeah, Intel is going to try to implement parallel computing inside the CPU to fight that, but that is very stupid, because it will make CPUs very big and expensive and GPUs are going to be more powerful anyway. Adding any sort of parallel units to the CPU is adding unnecessary and redundant power (and silicon) to a system that won't need it.


----------



## FordGT90Concept (Sep 29, 2009)

Benetanegia said:


> The GPU is not strained at all. The most powerful cards render games at 100 fps when 60 is the most you really need. But that point of view is beside the point anyway: GPUs are not using all of their SPs when rendering, so it makes sense to use them for something. MIMD will make that even easier. Also, a Core i7 running at 4 GHz doesn't reach 100 GFLOPS, while current-gen graphics cards have almost 3000 GFLOPS. You could use 2500 for rendering and 500 for physics and you would never notice it.


The most powerful cards are quite irrelevant because few gamers have them.  Only the mid-range cards matter, and most of them struggle to hold 30 fps at the high resolutions of common LCDs.  Developing games for the most powerful hardware of the day could well have led to the consumer exodus from PCs to consoles--the budget was just too much to justify.

As to the rest of your point, it is imperative every SP be put to work on frames if the fps is not at least 30.

x86/x86-64 processors have SSE which can take complex instructions/tasks and complete them in very few clocks--an advantage GPUs don't have.  The problem is, few SSE instructions are dedicated to games.  Anyway, CPUs are designed to handle complex, multi-faceted tasks while GPUs are limited to simple, linear tasks.

You would notice it if that 500 wasn't enough for physics or that 2500 wasn't enough to get decent framerates.  PhysX in most games only takes, what, maybe 2 GFLOPS?  Better to put it on the CPU, which is far better at multitasking.





Benetanegia said:


> Whaaat? That makes no sense. Havok and PhysX use almost the same CPU resources to offer the same functionality when doing physics on the CPU, so how is there no need for PhysX, based on that? Is there no need for Havok either? As I said, it makes no sense.


How do you know they aren't falling back on a different CPU-based physics engine (Havok, or something similar) when there is no PhysX-enabled device present (for the sake of not killing performance)?  I haven't heard/seen any commentary on what developers think of PhysX and how they implement it.


Um, DirectX only includes compute shaders, which could accelerate physics calculations; I see nothing that suggests DirectX includes an open standard for calculating physics.  That said, NVIDIA could make PhysX compute-shader compatible (which they won't) so that it could run on AMD cards too.  That doesn't mean DirectX 11 will "kill" PhysX or Havok, or any other physics engine out there.  Kind of sad, actually...


----------



## Benetanegia (Sep 29, 2009)

FordGT90Concept said:


> Most powerful is quite irrelevant because few gamers have them.  Only the mid-range cards matter and most of them struggle to hold 30 fps at the high resolutions of common LCDs.  Developing the games for the most powerful hardware of the day could have well lead to the consumer exodus from PCs to consoles--the budget was just too much to justify.



High end of today, mainstream of tomorrow. You can have a GTX 260 for less than $150 today, and that will handle any game out there. My 8800 GT handles Batman with GPU PhysX on high at 1280x1024 with 4xAA. Not a high resolution, but with the GTX 260 being twice as powerful and 1920x1200 having 75% more pixels, it must run the game at high resolutions better than mine runs it at low resolutions.



> As to the rest of your point, it is imperative every SP be put to work on frames if the fps is not at least 30.



If the game doesn't use more shaders, and uses more textures or more z instead, it's not imperative to put more SPs to work on frames; it would be pointless. In fact, most games don't use all the SPs, with maybe Crysis being the exception, and with a very big question mark over it. Only FurMark stresses the SPs to the maximum.



> x86/x86-64 processors have SSE which can take complex instructions/tasks and complete them in very few clocks--an advantage GPUs don't have.  The problem is, few SSE instructions are dedicated to games.  Anyway, CPUs are designed to handle complex, multi-faceted tasks while GPUs are limited to simple, linear tasks.



SSE is used a lot in games. When SSE3 was released (I don't remember in which processor), it improved performance by 5-10%, and the processor was otherwise the same except for SSE3. Most demanding applications are based on repetitive, simple, but heavily parallel tasks. Tell me a task that requires a heavy amount of complex calculations that can't be split into simple ones, as is the case with F@H.



> You would notice it if that 500 wasn't enough for physics or that 2500 wasn't enough to get decent framerates.  Physx in most games only take, what? maybe 2 GFLOPS?  Better to put it on the CPU which is far better at multitasking.



It takes way more than that. In Batman my quad jumped to 60% load with the "simple"** smoke, as I showed above; that's around 30 GFLOPS. No game that has been released is representative of GPU physics anyway; that's the most developers want to do right now, considering the lack of support. Because of the SIMD nature of current GPUs, it's just easier for them to dedicate one SP cluster (16-24 SPs) to PhysX, around 100-120 GFLOPS that obviously weren't being used otherwise. In that sense, with this generation of cards you are losing one cluster all the time, but games are not using all the available power anyway. That won't happen in the future thanks to new architectures and especially MIMD.

** Simple compared to what GPU PhysX can do, but it's still way more complex than any other smoke seen in a game to date.
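The 100-120 GFLOPS per cluster figure is consistent with GT200-era shader specs; a quick sanity check (the clock and FLOPs-per-cycle are my assumptions for a GTX 280-class part):

```python
# Theoretical throughput of one SP cluster, GT200-style numbers assumed:
sps_per_cluster  = 24      # the post says 16-24 SPs per cluster
shader_clock_ghz = 1.476   # GTX 280 shader domain clock
flops_per_cycle  = 3       # dual-issue MAD (2 FLOPs) + MUL (1 FLOP)

cluster_gflops = sps_per_cluster * shader_clock_ghz * flops_per_cycle
print(round(cluster_gflops))   # ~106, inside the 100-120 range quoted
```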



> How do you know they aren't falling back on a different (Havok, or something similar), CPU-based physics engine when there is not a PhysX enabled device present (for the sake of not killing performance)?  I haven't heard/seen any commentary on what developers thought of PhysX and how they implement it.



How many times do I have to explain this? PhysX is a multiplatform API that can run on the CPU or the GPU (or the Ageia PPU, Cell, Xenos). It will take as much as it can from everywhere. If there is no CUDA-compatible card or Ageia PPU, it runs everything on the CPU***. There's no difference between the (GPU) expanded mode and the standard mode, except that it adds a lot of detail*. Why do you think you can run the enhanced mode without an Nvidia card otherwise?

* It's no different from object detail or texture size. The game will try to run identically, but will suffer performance-wise if there is not enough available power.

*** And because it's an API, even though it's a complete physics engine it comes in API form: it's integrated into the game engine and follows the rules specified by the engine. That means it is limited to using as many cores as the game developer targeted, based on their intended audience.
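That "API form" point can be sketched like this (all names here are hypothetical; nothing in this snippet is the actual PhysX SDK interface): the physics middleware only runs when the host engine calls into it, on whatever thread budget the engine chose, so it can never use more cores than the game allows.

```python
class PhysicsMiddleware:
    """Hypothetical physics library in API form: it owns the simulation
    math but not the threading -- it runs only when step() is called."""
    def step(self, bodies, dt):
        for b in bodies:
            b["vy"] -= 9.8 * dt          # gravity, a stand-in for real solving
            b["y"]  += b["vy"] * dt

class GameEngine:
    """The game, not the library, fixes the thread budget (here: one)."""
    def __init__(self, physics, physics_threads=1):
        self.physics = physics
        self.physics_threads = physics_threads  # cap the middleware inherits
    def frame(self, bodies, dt):
        # All physics work is funneled through the engine's budget, so the
        # middleware cannot spread onto cores the developer didn't allot.
        self.physics.step(bodies, dt)
        return bodies

engine = GameEngine(PhysicsMiddleware())
bodies = [{"y": 10.0, "vy": 0.0}]
engine.frame(bodies, 0.5)
print(bodies[0]["y"])   # 10.0 + (-4.9 * 0.5) = 7.55
```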



> Um, and DirectX only includes compute shaders which could accelerate physics calculations; I see nothing that suggests DirectX includes an open standard for calculating physics.  That said, NVIDIA could make PhysX compute shader compatible (which they won't) so that it could run on AMD cards too.  That doesn't mean DirectX 11 will "kill" PhysX or Havok, or any other physics engine out there.  Kind of sad, actually...



The existence of 3rd-party physics developers is a good thing, actually. Not only do they save developers a lot of money and time, they also have more expertise than them. Would you want Dell to start making CPUs, GPUs, RAM, etc., instead of buying them from other companies that are 100% dedicated to their respective products? Outsourcing is essential nowadays.


----------



## RejZoR (Sep 29, 2009)

@Benetanegia
You're missing just one fact. I (we) have seen all these fancy physics effects done on a CPU a decade ago. And now they're feeding us some proprietary crap that works only on GeForce cards and offers us absolutely NOTHING new or exciting. Don't tell me a few flying papers and crappy fog makes you hard. All this was done a decade ago.

Also, the "it's open yadidiadida" is complete BS. Yeah, it's open to developers but closed to end users. So even though developers can implement it, users with ATI, Intel, S3 or any cards other than NVIDIA's cannot use it at all. I'm sure ATI would add support for PhysX if it were truly an open standard like DirectX or OpenGL. But would you as a developer waste months of development just so the other half of users could taste what you've done? I think not. It's just not worth the effort. And so NVIDIA is showing us their crap, developers are hesitating, and physics is stagnating or even going backwards. Because I'm not seeing any progress whatsoever.


----------



## Benetanegia (Sep 29, 2009)

RejZoR said:


> @Benetanegia
> You're missing just one fact. I (we) have seen all these fancy physics effects done on a CPU a decade ago. And now they're feeding us some proprietary crap that works only on GeForce cards and offers us absolutely NOTHING new or exciting. Don't tell me a few flying papers and crappy fog makes you hard. All this was done a decade ago.
> 
> Also, the "it's open yadidiadida" is complete BS. Yeah, it's open to developers but closed to end users. So even though developers can implement it, users with ATI, Intel, S3 or any cards other than NVIDIA's cannot use it at all. I'm sure ATI would add support for PhysX if it were truly an open standard like DirectX or OpenGL. But would you as a developer waste months of development just so the other half of users could taste what you've done? I think not. It's just not worth the effort. And so NVIDIA is showing us their crap, developers are hesitating, and physics is stagnating or even going backwards. Because I'm not seeing any progress whatsoever.



Show me where you have seen those effects a decade ago, please. And more importantly, in such high quantities and definition. Please don't come to me with "in the first Splinter Cell..."*. Fact is, there is nothing.

* A cloth simulation that had 6 nodes and 6 polygons. Please...

Show me realistic and detailed smoke that you (or any object) can displace, made of many small particles and not a few big ones. Show me the meta-particle water. Show me lots and lots of sparks that bounce off actual geometry instead of falling down through the ground. Show me cloth with more than...

http://www.youtube.com/watch?v=g_11T0jficE - Come on tell me both sides are equally compelling. Show me something like this in any game.
http://www.youtube.com/watch?v=luSAnouAFJs - Come on, show me smoke, show me cloth.
http://www.youtube.com/watch?v=vrUYX7R53LY
http://www.youtube.com/watch?v=FcqDzdwzaEU&NR=1

*Show me this running on a CPU :* http://www.youtube.com/watch?v=IJ0HNHO5Uik - Especially the second half of the video.

Of course you can simulate the same effect on a CPU, but not to the extent that you can on a GPU, not in the quantities that are necessary for realism. And again, the games that have been released are not representative of what can be done, because of the lack of support, and that's AMD's and only AMD's fault. Current games barely use 25% of the power that a 16-SP cluster can give; GTX 2xx cards have 15 times that, and GT300 will have 30 times that.

And *it's* open, and it would have been open to Ati users if, when Nvidia gave PhysX to Ati for free with no conditions, they had said yes; or if, when the guy from ngohq.com made a modded driver that made it possible, they had supported him like Nvidia did, instead of scaring the hell out of him with legal demands. But of course, back then Ati had nothing to compete in that arena and Nvidia cards would have destroyed theirs, so they said no no nooo! And now poor Ati users can't do anything but cry and say it's not that great.

Anyway, they are NOT FORCING you to use the expanded physics mode, and the normal mode is NOT any worse than other games' physics, *that* is the fact, so why all this crying, I ask? Yeah, exactly: because Nvidia users can and you can't. It's that simple. I don't pay anything more to have those effects, you don't need to pay either, nor do you have to enable them if you don't want to or you think they add nothing. Again, if they add nothing, why all this crying? AAhhh, jealousy...


----------



## Benetanegia (Sep 29, 2009)

I just saw this in TechReport:

http://techreport.com/discussions.x/17671

In summary: for what would have needed 8000 CPU cores (1000 8-way servers), they are using 48 servers with 2 Tesla GPUs each. Orders of magnitude cheaper to buy, and cheaper to maintain, cool and power.
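The consolidation math in that link works out roughly like this (server sizes taken from the post itself):

```python
cpu_cores_needed = 8000
cores_per_server = 8                                  # 8-way CPU boxes
cpu_servers = cpu_cores_needed // cores_per_server    # 1000 servers

gpu_servers = 48                                      # each with 2 Tesla GPUs
print(cpu_servers / gpu_servers)                      # ~20x fewer machines
```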

Say goodbye on your way out, expensive CPU. Be polite, die with honor.


----------



## KainXS (Sep 29, 2009)

The future is not going to be CPUs or GPUs, it's going to be a mix

IE . . . . I hate to say it, but a design similar to Larrabee's will be the CPU of the future, even though I pray Larrabee fails.


----------



## HalfAHertz (Sep 29, 2009)

PhysX is not open source, just like DX. Programmers don't have to pay to code with it now, but if at a later point in time Nvidia decides to cash in on it (and they will), they will be backed by every legal system out there. The only difference here is that DX is already the widespread standard. Only Microsoft can change the code of DX, because they own it. This means that even if ATi adopted the PhysX API, they wouldn't be able to make any changes or optimisations to it for their hardware. The only one that can change the code is Nvidia. This in turn means that ATi would always have to play second fiddle to Nvidia due to poorly optimised code.


----------



## Benetanegia (Sep 29, 2009)

KainXS said:


> The future is not going to be CPU's or GPU's its going to be a mix
> 
> IE . . . . I hate to say it but a design similar to Larrabees will be CPU's or the future, even though I pray Larrabee fails.



Of course it will be a mix, but not a mix on the same die, at least not for the performance and high-end markets. That would be suicide: a GPU die is already big, and the latest CPU dies are too, so a mix would be impossible. Imagine a 1000 mm^2 thing with a 400 W TDP. No way.

But right now, if you have to spend $500 on a CPU and a GPU, a gamer will spend $200 on the CPU and $300 on the GPU, while a scientist, video editor or economist (lol) would usually spend $450 on the CPU and $50 on the GPU. In the future it will most probably be $150 and $350, respectively, if not tilted even more in favor of the GPU.


----------



## RejZoR (Sep 29, 2009)

@Benetanegia
You still don't understand. In the past we at least had 6 nodes and 6 polygons. Today we don't even have that. The object is not even there in non-PhysX mode. That's just plain dumb.
We could talk about a normal version with 50 nodes and as many polygons, and about hi-def objects with thousands of nodes and twice as many polygons. But we have no normal version and no hi-def one. That's WTF for me.

Particles? I saw them on a GeForce 2. The Lightning demo. Or Quake 3 Arena with the CorkScrew mod.
The railgun slug emitted up to 999 physically affected sparks upon hitting the wall. And that was around the year 2000. The glass breaking in Red Faction (2001) looked better than the PhysX glass in Mirror's Edge. Except the one in Red Faction could run on an Athlon XP 2400+ easily, while the PhysX glass cannot run smoothly on a 4 GHz quad-core i7. If that doesn't raise any of your eyebrows, then I'm not sure what will.

Cloth simulation was already done in the Unreal Engine (just look at the flags in UT99; they look freakin' awesome even today, and it's a game from 1999). Not to such an extent, but it was done. On 10-times-weaker hardware. Today? The flags are just gone. Missing. Not there. Unless you have an "uber" PhysX gfx card. Pheh. Clothing was also simulated in the Hitman series and Splinter Cell. Again in a slightly simplified way, but it was there. Running smoothly on an Athlon XP 2400+.
Don't tell me they couldn't do all that, with 10-fold everything, on today's hardware.
They'd rather remove the object entirely than make it normal definition.

I don't care how much horsepower you need today or how many clusters gfx cards have. That's completely irrelevant information. The most important thing is the relation between time, the hardware performance increases over that time, and what we've seen in the past. 
If we saw basic cloth simulation, advanced particles, destruction, ragdolls etc. in the year 2000 on funny, weak hardware (by today's standards), one would expect something at least on that level, or improved 10 times over, today on powerful quad cores, loads of memory and 10-times-faster graphics cards. But have we seen any of it? Ok, partially, on PhysX-enabled graphics cards. But what about CPU physics? Flags are just gone, broken glass just fades out even before it hits the ground, environments are static, etc. Thank god at least ragdolls remained.
I wouldn't mind blocky flags like the ones in games from 2000. At least the flag was there and I could interact with it. But I don't even have the flag, because I don't have a GeForce card. It's just gone. Entirely. LMAO.



> And it's open, and it would have been open to Ati users if when Nvidia gave PhysX to Ati for free with no conditions they had said yes, or when the guy from ngohq.com made one moded driver that made it posible, if they had supported him, like Nvidia did, instead of scare the hell out of him with demands. But of course, back then Ati had nothing to compete in that arena and Nvidia cards would have destroyed their cards, so they said no no nooo! And now poor Ati users can't do anything but cry, and say it's not that great.



You're not looking at the bigger picture. Sure, the guy at NGOHQ made a PhysX hack. But imagine all the crap users would throw at ATI if this hack failed to work properly in certain games, or if games crashed. No one would blame NGOHQ; they would rush to blame ATI instead. Been there, seen that, numerous times with Microsoft: it was an NVIDIA driver that crashed, and users were spitting over Windows Vista and how crappy it was. But it wasn't even Vista's fault. It was NVIDIA's driver (or whichever other) that crashed.
So I perfectly understand why ATI refused to cooperate. They'd go the PhysX way if NVIDIA sent them the entire documentation, the SDKs, everything, with the same full capability NVIDIA has. But supporting a hack made by a 3rd party? That's just not logical. Sorry. It's the same reason laptop companies don't support any graphics drivers other than the ones on their webpage: troubleshooting, tech support, and the bad word that could spread about brand "X" because someone with hacked drivers fried his graphics card or something.

I don't mind PhysX; it's a great thing, actually. I just hate the way NVIDIA is pushing it around and placing stupid restrictions on it. And all these stupid restrictions are damaging the evolution of games and physics. Imagine what would happen if PhysX were an open standard that anyone could implement and use on ANY graphics card. I can bet 1000 eur that we'd see at least 5 high-profile games with awesome physics effects that everyone could enjoy, not just GeForce users. It's really that simple.


----------



## Benetanegia (Sep 29, 2009)

> But imagine all the crap users would be throwing at ATI if this hack failed to work properly in certain games or if games were crashing.



I agree with many points you made there. Regarding not having the simple version and all, it sucks, but please pay attention to what I quoted (your own words) and, after thinking about it, tell me: is the lack of those things Nvidia's fault, or the developers' fault? Is it the API's fault, or how the API has been used by the developer? Why are you blaming Nvidia?

EDIT: In Mirror's Edge, flags and most cloth objects are replaced by simpler animated ones, for example. Different developer, different decision.

The same reasoning is valid for another thing that has been questioned here: that Nvidia removed the option of running Ati+Nvidia for physics. Was it a malicious move, or a business decision driven by the fact that they couldn't properly test whether it would work well with all the cards? Newer Ati cards? What would happen after an Ati driver update? Would it still work? Who would get the blame if it didn't? And why would they have to do all the research while it was Ati who would get the benefit? Yeah, Nvidia would benefit a bit too, but in their view the money they would have to spend was probably more than what they would get back. Business decision, end of story.

Sorry, but I have to ask why so many of you Ati owners can think so thoroughly about some things, as you did above, but other times can't come up with what I have just said. If that is not bias, what is it then?


----------



## FordGT90Concept (Sep 29, 2009)

Benetanegia said:


> Tell me a task that requires heavy amount of complex calculations, that can't be split into simple ones, as is the case with F@H.


Encoding/decoding.  It is more economical to do on 10 CPUs than 1 GPU.  Not to mention how much more memory processors have available to them compared to GPUs, and the more direct link to the hard drive(s).

There are no SSE instructions designed to help with the physics that F@H uses.  As such, they have to brute-force it, and the bigger your hammer, the more damage it does.  An instruction set designed for physics calculations would pretty much put CPUs back on top.



If you dedicated SPs, obviously they are being used: for physics, not for graphics.




Benetanegia said:


> ** Simple compared to what GPU PhysX can do, but it's still way more complex than any other smoke seen in a game to date.


Can't say I noticed.  And by the way, I do remember the smoke in Arkham Asylum pissing me off.  I don't remember if it was because of a framerate drop (8800 GT as well, albeit sickly for the last few months) or I couldn't see shit.  Either way, it annoyed me and would be better off without.




Benetanegia said:


> How many times do I have to explain this? PhysX is a multiplatform API that can run on the CPU or the GPU (or Ageia PPU, Cell, Xenos). *It will take as much as it can from everywhere.* If there is no CUDA compatible card or Ageia PPU, it runs everything in the CPU***. There's no difference between the (GPU) expanded mode and the standard mode, except that it adds a lot of detail*. Why do you think you can run the enhanced mode without a Nvidia card otherwise?


Have you examined NVIDIA's source code to confirm that?  Of course not--it's closed source.  What we do know is that PhysX is extremely biased towards NVIDIA/Ageia hardware and as such, it is bad for the market.





Benetanegia said:


> The existence of 3rd party physics developers is a good thing actually. Not only they save up a lot of money and time to develpers, but they also have a higher expertise than them. Would you want Dell starting to make CPUs, GPUs, ram, etc, instead of buying them from other comanies that are 100% dedicated to their respective products? Outsourcing is essential nowadays.


If there was an open standard for physics, money wouldn't be involved.  It would work as well as manufacturers and developers make it work.  Truth be told, I doubt there is enough demand for a scientific physics API because games don't need 100% accurate physics--they need 70% accurate physics, which means at least a 10,000% reduction in workload.  Accurate physics is about the only thing in game development few people care about.  Hell, the last game I saw that used physics on bullet trajectories to a positive end was back in the 1990s with Delta Force: Land Warrior and Task Force Dagger.  That was nice.  Did it require a beefy CPU and GPU?  Nope and nope.  Basic physics is more than enough.  I'd rather they focus their attention on gameplay mechanics like adding more variety to side missions.

If Dell makes a good product, why not?  Competition is good--open standards breed fair competition.


----------



## HalfAHertz (Sep 29, 2009)

Read my post. PhysX is proprietary software coded for NVIDIA HARDWARE. Ati cannot change anything about it, so the only way to come close to the same level of performance is to mimic NVIDIA's hardware... Now I do hope you see the problem here.


----------



## KainXS (Sep 29, 2009)

Well, when you ask Nvidia's support you get a gray answer: it could have been for business purposes, or it could have been because of stability concerns.

But one thing is for sure: despite that, using Nvidia cards for PhysX worked perfectly fine back when I had my old HD4870X2 with my 8600GTS, like 5 months ago. Then I sold my 4870X2 and got my old 8800GS, and now on the newer drivers, when I use my GS for PhysX it doesn't work with my friend's HD4890, but on the older ones it works perfectly.

It makes you wonder. Maybe they added a feature to the driver and it caused a bug, I don't know, but it worked nicely before, so I don't know what happened. It doesn't really affect me anymore since I don't have an ATI card right now, so I don't really care either.


But I look at it like this: Nvidia is a company, and companies' goal is to make money. If people buy their cards and can't use them (even though they could before) unless they remove their competitor's card, then that's a nice business stance right there (if not the BEST).


----------



## FordGT90Concept (Sep 29, 2009)

@HalfAHertz & KainXS:  Exactly why I think a lawsuit is brewing.  NVIDIA is taking part in anticompetitive behavior towards AMD.


----------



## Benetanegia (Sep 29, 2009)

FordGT90Concept said:


> Encoding/decoding.  It is more economical to do on 10 CPUs than 1 GPU.  Not to mention how much more memory processors have available to them compared to GPUs, and the more direct link to the hard drive(s).



WTH, if there's one thing that is done much, much faster with the help of the GPU, it's encoding/decoding. Have you been living in a cave?



> If you dedicated SPs, obviously they are being used: for physics, not for graphics.



SIMD (*Single* Instruction Multiple Data) means that if you only need 4 SPs, you still have to "use" the whole unit, that is 16 or 24 of them, but truly you are only using 4...
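A rough way to put numbers on that (a toy model; the function name and the widths are just for illustration):

```python
import math

def simd_utilization(active_lanes, simd_width):
    """Fraction of a SIMD unit doing useful work when only
    `active_lanes` of its `simd_width` lanes are needed."""
    # Work is issued in whole-unit multiples of the SIMD width.
    issued = math.ceil(active_lanes / simd_width) * simd_width
    return active_lanes / issued

# Dedicating 4 SPs on a 16-wide unit still ties up all 16 lanes:
assert simd_utilization(4, 16) == 0.25
assert simd_utilization(4, 24) == 4 / 24
assert simd_utilization(16, 16) == 1.0
```

So the SPs "used" for physics are whole SIMD units, even when most of their lanes sit idle.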




> Can't say I noticed.  And by the way, I do remember the smoke in Arkham Asylum pissing me off.  I don't remember if it was because of a framerate drop (8800 GT as well, albeit sickly for the last few months) or I couldn't see shit.  Either way, it annoyed me and would be better off without.



What can I say except, meh. The "I didn't notice" excuse is overused, man. Look at the links I provided above if you want to know what GPU physics can do.



> Have you examined NVIDIA's source code to confirm that?  Of course not--it's closed source.  What we do know is that PhysX is extremely biased towards NVIDIA/Ageia hardware and as such, it is bad for the market.



If you pay, you have access to the code and you can change it too. That's no different from Havok. The difference is that PhysX costs $50,000 + $1,000 per developer, while Havok cost $200,000 last time I checked.



> If there was an open standard for physics, money wouldn't be involved.  It would work as well as manufacturers and developers make it work.  Truth be told, I doubt there is enough demand for a scientific physics API because games don't need 100% accurate physics--they need 70% accurate physics, which means at least a 10,000% reduction in workload.  Accurate physics is about the only thing in game development few people care about.  Hell, the last game I saw that used physics on bullet trajectories to a positive end was back in the 1990s with Delta Force: Land Warrior and Task Force Dagger.  That was nice.  Did it require a beefy CPU and GPU?  Nope and nope.  Basic physics is more than enough.  I'd rather they focus their attention on gameplay mechanics like adding more variety to side missions.



So everything is based on I (FordGT90) want this, I want that, and I don't care about physics, so to hell with them. XD

STALKER has good physics-based bullet trajectories, and ARMA does too, BTW.



> If Dell makes a good product, why not?



And you would pay twice?



> Competition is good--open standards breed fair competition.



Yeah, the only problem is there is none. But I see too much PhysX bashing and no Havok bashing. Open minds are as required as open standards.



HalfAHertz said:


> Read my post. PhysX is proprietary software coded for NVIDIA HARDWARE. Ati cannot change anything about it, so the only way to come close to the same level of performance is to mimic NVIDIA's hardware... Now I do hope you see the problem here.



Can I edit?



HalfAHertz said:


> Read my post. Havok is proprietary software coded for Intel hardware. Ati cannot change anything about it, so the only way to come close to the same level of performance is to mimic Intel hardware... Now I do hope you see the problem here.



But, they see no problem using Havok. How so?


----------



## FordGT90Concept (Sep 29, 2009)

I'm not getting in another pissing contest with you.


----------



## Benetanegia (Sep 29, 2009)

Sorry, but it's the truth. Every time we have a discussion about physics in games, you come up with the same: I'd rather see this or I'd rather see that, and I don't think it adds anything. You are not even close to accepting that a lot of people might want other things than the ones you want.

Do I want better game mechanics? Of course. 

Will the lack of better physics ensure better or different game mechanics? No and no. On the contrary, the inclusion of better physics does nothing but increase the options for new game mechanics.

PhysX is in no conflict with anything else in games. What's more, GPU PhysX is just an added feature that doesn't interfere with the game. Is BM:AA without GPU physics any worse than other games? No, so why all the bashing? It should end there. Seriously.


----------



## RejZoR (Sep 29, 2009)

@Benetanegia
You seem to have an answer for everything... but not much of it makes sense.
Why are you bringing Havok into all this? It works on ANY CPU. So what does ATI have to do with it?
I can run Havok games on a VIA, Intel or AMD CPU. It doesn't matter. So, do I care if it's proprietary technology? No, not really. Besides, where have you seen anything saying that Havok is proprietary, coded specifically for Intel? Because that's just not true. Intel owns the Havok brand and the team behind it, but they don't make it proprietary because of that.


----------



## Deleted member 24505 (Sep 29, 2009)

I think physx sucks and always did.


----------



## rpsgc (Sep 29, 2009)

Just ignore the fanboy.


----------



## Benetanegia (Sep 29, 2009)

RejZoR said:


> @Benetanegia
> You seem to have an answer for everything... but not much of it makes sense.
> Why are you bringing Havok into all this? It works on ANY CPU. So what does ATI have to do with it?
> I can run Havok games on a VIA, Intel or AMD CPU. It doesn't matter. So, do I care if it's proprietary technology? No, not really. Besides, where have you seen anything saying that Havok is proprietary, *coded specifically for Intel?* Because that's just not true. Intel owns the Havok brand and the team behind it, but they don't make it proprietary because of that.



Why do I bring Havok into this? Because it's been said more than once that PhysX should die and Havok should be used instead. I'm bringing it up just for comparison. PhysX runs on every CPU too; it's not a proprietary API that runs only on Nvidia hardware, not at all. If you want the expanded capabilities then yes, but if Nvidia didn't push for GPU PhysX in those games you would get the same as if you disable GPU PhysX. If you can't run GPU PhysX you are not getting an inferior product.

You ask why I say that Havok is coded specifically to run better on Intel. How do you know it isn't? How do you know that the reason Intel CPUs have almost always been better for games, even when AMD CPUs were much faster in general computing, wasn't because of that? If there is one company that has been caught in disloyal and illegal behavior, it is Intel. I bring up Havok because there's as much proof of that happening as there is of it happening with PhysX, which is NONE.


----------



## RejZoR (Sep 29, 2009)

Lol, you're just not getting it. The usual crap PhysX runs on every CPU, but HW PhysX doesn't. Can't you tell those apart!?
And how do I know it's not coded for Intel? Um, maybe because I was running it smoothly on an AMD CPU? Isn't that proof enough by itself?


----------



## Benetanegia (Sep 29, 2009)

RejZoR said:


> Lol, you're just not getting it. The usual crap PhysX runs on every CPU, but HW PhysX doesn't. Can't you tell those apart!?
> And how do I know it's not coded for Intel? Um, maybe because I was running it smoothly on an AMD CPU? Isn't that proof enough by itself?



By crap PhysX, you mean the one that is as good as Havok, that crap? The one that runs on AMD CPUs just as well as on Intel or VIA ones? What you don't get is that AMD doesn't want PhysX accelerated on their graphics cards, and that's the end of the story. When running on the CPU it runs as well on AMD as it does on Intel. The good PhysX can't run on CPUs, period; it's time you got that already.

If you think it can, it's time you showed me equivalent physics running on CPUs. I'll save you time: there's none. It wasn't until Havok started GPU Havok that they began doing the same things. Do you get it? Nvidia wanted Ati/AMD to use PhysX; it was AMD who didn't want it. Nvidia is NOT making PhysX run better on their hardware than on the competition's, simply because it doesn't run on the competition's hardware, at its competitor's request.


----------



## HalfAHertz (Sep 29, 2009)

Benetanegia said:


> Why do I bring Havok into this? Because it's been said more than once that PhysX should die and Havok should be used instead. I'm bringing it up just for comparison. PhysX runs on every CPU too; it's not a proprietary API that runs only on Nvidia hardware, not at all. If you want the expanded capabilities then yes, but if Nvidia didn't push for GPU PhysX in those games you would get the same as if you disable GPU PhysX. If you can't run GPU PhysX you are not getting an inferior product.
> 
> You ask why I say that Havok is coded specifically to run better on Intel. How do you know it isn't? How do you know that the reason Intel CPUs have almost always been better for games, even when AMD CPUs were much faster in general computing, wasn't because of that? If there is one company that has been caught in disloyal and illegal behavior, it is Intel. I bring up Havok because there's as much proof of that happening as there is of it happening with PhysX, which is NONE.



IMO this is a moot point in the discussion. You are not being objective. The PhysX code can run on an x86 CPU but is not optimised for it. It was only optimised to run on the Ageia PPU, just as it is currently only optimised to run on Nvidia graphics. It was never meant to run well on a CPU, because neither Ageia nor Nvidia produced CPUs. This will not change until the PhysX API becomes open source and somebody interested in developing it further picks it up.
   Currently no one can change the code except the proprietor. Nvidia is not going to waste their time and money optimising it for x86 with no foreseeable financial profit, because in the end they sell GPUs, not CPUs.

   I hope that by now you see my point; I don't really want to waste my time explaining this any further.


----------



## [I.R.A]_FBi (Sep 29, 2009)

im for ignoring the fanboi ... anyone else?


----------



## Benetanegia (Sep 29, 2009)

HalfAHertz said:


> IMO this is a moot point in the discussion. You are not being objective. The PhysX code can run on an x86 CPU but is not optimised for it. It was only optimised to run on the Ageia PPU, *just as it is currently only optimised to run on Nvidia graphics. It was never meant to run well on a CPU, because neither Ageia nor Nvidia produced CPUs.* This will not change until the PhysX API becomes open source and somebody interested in developing it further picks it up.
> Currently no one can change the code except the proprietor. Nvidia is not going to waste their time and money optimising it for x86 with no foreseeable financial profit, because in the end they sell GPUs, not CPUs.
> 
> I hope that by now you see my point; I don't really want to waste my time explaining this any further.



False. Absolutely false. PhysX is running in over *100* games and it's doing very well, I mean that it is well optimized. *GPU GPU GPU GPU GPU GPU GPU GPU PhysX, you get it? GPU PhysX* noooooooo it's not optimized to run on the CPU, big surprise! It's GPU PhysX! Ey! It's GPU PhysX here, I'm not optimized to run well on CPUs, that's why my friend CPU PhysX comes along with me!

So as long as CPU PhysX runs as well as other physics APIs, and it does, then everything is well. Did I explain myself now?

List of games that use hardware accelerated physics: http://www.nzone.com/object/nzone_physxgames_home.html
List that use PhysX: http://www.nzone.com/object/nzone_physxgames_all.html

I have to say one more thing; maybe this way you guys will understand:

Graphics-wise: if a game has been developed to run on an 8800 or faster and that's the minimum requirement, do you try to run it on an 8400? I think not, right? This is the same. They could make it run on the CPU, sure, if they dumbed down the physics a lot, but then it would be just as crap as the normal CPU version is.

To quote yourself:

I hope that by now you see my point, i don't really want to waste my time explaining this any further.


----------



## HalfAHertz (Sep 29, 2009)

Benetanegia said:


> False. Absolutely false. PhysX is running in over *100* games and it's doing very well, I mean that it is well optimized. *GPU GPU GPU GPU GPU GPU GPU GPU PhysX, you get it? GPU PhysX* noooooooo it's not optimized to run on the CPU, big surprise! It's GPU PhysX! Ey! It's GPU PhysX here, I'm not optimized to run well on CPUs, that's why my friend CPU PhysX comes along with me!
> 
> So as long as CPU PhysX runs as well as other physics APIs and it does, then everything is well. Did I explain myself now??
> 
> ...



So, in a way you're contradicting yourself, correct? You yourself provided two separate links: one of games running the dumbed-down and simplified CPU PhysX, and a separate one of games using the more advanced GPU PhysX.


----------



## Benetanegia (Sep 29, 2009)

HalfAHertz said:


> So, in a way you're contradicting yourself, correct? You yourself provided two separate links: one of games running the dumbed-down and simplified CPU PhysX, and a separate one of games using the more advanced GPU PhysX.



No I'm not contradicting myself. How so?

Both are the same PhysX; both run the same libraries. The difference lies in the number of calculations being made. The developer, when creating the GPU version, creates it using the power available in the GPU, which is an order of magnitude bigger than that of the CPU. That's why it's called the GPU version: because it requires too much power and only GPUs are capable of it (well, or the PPU, which is basically the same). Developers, on their own, would use only one version, and that version must run on every PC out there and on the consoles, so it's pretty dumbed down. This is no different with Havok (I name it because, along with PhysX, they have 50% of the games...) or any other API in use. To back this up, I took some screenshots in previous posts. So Nvidia convinced them to make one more version, and they make it strong enough that it's worth the effort, and for kicks of course. The developers wouldn't include flags, papers and such if they were not creating the GPU version. When I talk about the versions, take it as if I were saying low textures/high textures: the difference is not in the form, but in the number of calculations being made.
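A minimal sketch of that idea (hypothetical solver and particle counts, not PhysX code; PhysX itself is closed source):

```python
def simulate_smoke(n_particles, steps, dt=0.016):
    """Same solver regardless of backend; only the particle count
    (the amount of calculation) differs between the two paths."""
    # Positions and upward velocities; plain lists to keep it minimal.
    pos = [0.0] * n_particles
    vel = [1.0 + 0.001 * i for i in range(n_particles)]
    for _ in range(steps):
        for i in range(n_particles):
            pos[i] += vel[i] * dt
    return pos

# Hypothetical detail levels: a low-count "CPU path" and a
# high-count "GPU path" running the identical code.
cpu_path = simulate_smoke(n_particles=200, steps=10)
gpu_path = simulate_smoke(n_particles=20000, steps=10)
assert len(gpu_path) == 100 * len(cpu_path)
```

Same code path, same maths; the GPU version just raises the particle budget by a couple of orders of magnitude.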


----------



## RejZoR (Sep 29, 2009)

Nevermind. Arguing with someone who clearly doesn't understand game physics is not fun at all.


----------



## Benetanegia (Sep 29, 2009)

RejZoR said:


> Nevermind. Arguing with someone who clearly doesn't understand game physics is not fun at all.



Much better than most of you apparently.


----------



## Drizzt5 (Sep 29, 2009)

erocker said:


> You know who I would like to step in here? Microsoft. For the most part, Windows is the road these cards drive on, like we drive our cars down the road. We as motorists are restricted to a set of standards (speed limits, safety equipment, etc.) that we must conform to. I'd like to see Microsoft step up and say "Ok, this is the way it is going to be done." Set up standards for Windows and work in collaboration with hardware manufacturers. Have unified physics and the like, and let the video card companies duel it out through performance.



I don't think Microsoft cares, though...
For gaming they are all about the Xbox 360.

I wish they would, though.


----------



## SNiiPE_DoGG (Sep 29, 2009)

Point A) OMG trash on the ground, that is the best part about physX!

Point B) all of the games that use PhysX suck the big one, do I really need to post links? (And no, UT3 is not a PhysX game; it has one level that uses PhysX and you need to DL it separately.)

Point C) you're completely missing the point that all of the magic wonderful amazing GPU *PHYSX* CRAP you are promoting can easily be run on a CPU, now, today, using a proper CPU physics engine. MOST IMPORTANTLY: without a hit to FPS in the game!

now refute the points WITHOUT changing the subject or *GTFO*


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> Point A) OMG trash on the ground, that is the best part about physX!



That is showing the power of the GPU running physics. And you have never seen anything like that running on the CPU; otherwise, post a link. What you (all, I must say) fail to see is that "anyone" can create almost any effect; it's simple maths after all. The key is in how many calculations you can make. The way to make smoke or cloth look better is by having more particles or nodes.
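For instance, in a simple mass-spring cloth model (a generic technique, not anything PhysX-specific), the work per step grows with the node count:

```python
def cloth_constraint_count(nodes_per_side):
    """Number of structural springs in an n-by-n mass-spring cloth:
    horizontal plus vertical links between neighbouring nodes."""
    n = nodes_per_side
    return 2 * n * (n - 1)

# Doubling the cloth resolution roughly quadruples the work per step:
assert cloth_constraint_count(16) == 480
assert cloth_constraint_count(32) == 1984
```

So a finer-looking cloth or denser smoke is not a different effect, just the same maths evaluated over many more nodes or particles.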



> Point B) all of the games that use PhysX suck the big one, do I really need to post links? (And no, UT3 is not a PhysX game; it has one level that uses PhysX and you need to DL it separately.)



UT3 does use PhysX in the regular game. It has one level that uses *GPU-accelerated PhysX*, and in reality there are 3.

And yeah, all of them suck? I don't know your tastes, but Mass Effect sucks? Gears of War sucks? Age of Empires III sucks? Brothers In Arms: HH sucks? City of Villains sucks? Tom Clancy games suck? Virtua Tennis 3 sucks? Mirror's Edge sucks?

I dunno, the buyers seem to differ with you.



> Point C) you're completely missing the point that all of the magic wonderful amazing GPU *PHYSX* CRAP you are promoting can easily be run on a CPU, now, today, using a proper CPU physics engine. MOST IMPORTANTLY: without a hit to FPS in the game!



And which is that proper magical physics engine that is nowhere to be found? Why is Havok so stupid that they don't make it for once? No, the reality is that a CPU can't do the amount of physics that is done on Nvidia cards right now, and that's why Havok, AMD, Intel and M$ are putting so much effort and money into implementing their own GPU-accelerated physics, while brainwashing all of you until they have their own competing product. Time for a reality check.



----------



## SNiiPE_DoGG (Sep 29, 2009)

the Confirmation Bias is strong with this one....

I wish I could find the demo of an i7 running physics on Havok with thousands of entities... Google is not turning it up though - it was in the news like ~4 months ago IIRC


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> the Confirmation Bias is strong with this one....
> 
> I wish I could find the demo of an i7 running physics on Havok with thousands of entities... Google is not turning it up though - it was in the news like ~4 months ago IIRC



Yeah, I wish you could, because no combination of physics + Core i7 + Havok gives any result. What does appear a lot is the OpenCL-accelerated Havok running on AMD cards, which did show many entities and was about 4 months ago IIRC. I think you are confusing it with that. I read the tech press every day and I have never heard of that, plus there's nothing on Google about it, so IMO it's fake until I see it. I'll continue searching.

And I'm not biased at all. I've tried many physics demos from various companies and open source, and I have a very decent knowledge of what my CPU can handle. A Core i7 can handle twice that. Until you show me something, your claim is a lie.


----------



## KainXS (Sep 29, 2009)

Good lord man, you have been here since early this morning defending PhysX. Give up on it. PhysX is and has always been coded like sh*t for CPUs... why? Because PhysX was originally one thing: Ageia's cash cow. In software mode it doesn't do anything but waste CPU cycles, in my opinion. On one hand I look at Havok, which runs very, very well on CPUs; it may have a few less features, but it's worth it in the end because it's universally compatible and you don't need a super fast CPU or a certain GPU or card to use it, so for devs there's not much of a risk. Then I look at PhysX. I remember back when PhysX came out, the GRAW tests with PhysX were absolutely terrible. It's improved a bit since then, but not as much as it needs to to really take off.

Now, with the possibility that DX11 could be a decent alternative for free with no strings attached, I say let PhysX and Havok die. Havok has been on its deathbed for a while now, and I think PhysX is on its deathbed also, since you can't use GPU PhysX at all if you have an ATI card now. I say go DX11.

I'm out.


----------



## Benetanegia (Sep 29, 2009)

KainXS said:


> Good lord man, you have been here since early this morning defending PhysX. Give up on it. PhysX is and has always been coded like sh*t for CPUs... why? Because PhysX was originally one thing: Ageia's cash cow. In software mode it



Ageia always had a CPU path, and it ran very well in comparison to other CPU physics engines at the time, including Havok. Ghost Recon was poorly coded, period. It has nothing to do with the API. Havok has been used in better games in the past; that's the difference. Like HL2: it's Valve's merit and not Havok's, but Havok has always taken the credit for that.


----------



## SNiiPE_DoGG (Sep 29, 2009)

Found it! Wasn't Havok, I was mistaken, but on the CPU nonetheless 

http://www.viddler.com/explore/HardOCP/videos/36/


----------



## FordGT90Concept (Sep 29, 2009)

KainXS said:


> Now, with the possibility that DX11 could be a decent alternative for free with no strings attached, I say let PhysX and Havok die. Havok has been on its deathbed for a while now, and I think PhysX is on its deathbed also, since you can't use GPU PhysX at all if you have an ATI card now. I say go DX11.


DirectX 11 has no physics API, only the capability to run physics code on any DX11 GPU using the compute shaders.  The question is: who is going to author the physics API for DX11?  Once you have an answer to that, I think you'll have an answer for the most dominant physics API 10 years from now.


What has worked in the past and remains very popular is needs-based physics.  That is, you code it in-house and make it as accurate as deemed necessary.  An open, free-to-everyone physics standard isn't very likely.
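A minimal sketch of what "needs-based", in-house physics can look like: semi-implicit Euler integration of a projectile, accurate enough for gameplay (illustrative code, not from any particular game or engine):

```python
def simulate_projectile(v0x, v0y, dt=0.01, g=9.81):
    """Minimal 'needs-based' in-house physics: semi-implicit Euler
    integration of a projectile, good enough for gameplay purposes."""
    x = y = 0.0
    vx, vy = v0x, v0y
    trajectory = [(x, y)]
    while y >= 0.0:
        vy -= g * dt          # integrate velocity first...
        x += vx * dt          # ...then position (semi-implicit Euler)
        y += vy * dt
        trajectory.append((x, y))
    return trajectory

path = simulate_projectile(v0x=10.0, v0y=20.0)
assert path[0] == (0.0, 0.0)
assert path[-1][1] < 0.0      # the projectile came back down
```

A few dozen lines like this cover most gameplay needs, which is exactly why many studios never reach for a full middleware package.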


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> Found it! wasnt havok, I was mistaken, but on the CPU nonetheless
> 
> http://www.viddler.com/explore/HardOCP/videos/36/



Ok, that's a nice number of objects for a processor. But you do realize that the one on the GPU has much, much more processing going on, right? That's point number one. I don't remember if they say it there, but in the one you posted, 1,500 boxes and 200 ragdolls are maxing out the i7. In one of the links I posted, a car trail is being simulated with 250,000 particles; that's a lot more (like ten times more, despite not being solids and ragdolls), and that ratio will never change. A GPU can still do much more.

Point number two: I never said that a CPU like the i7, using all of its cores, can't do a high amount of physics. I said that the CPU most people have won't be able to come even close (an i7 is 5x faster than the average dual core, not to mention console CPUs and single cores); I've been saying that constantly. Developers have to develop for the lower end if they want to sell. What's the percentage of people that have a Core i7? Not even 2%. It makes no sense to develop a game with that base in mind. And unless you scatter many boxes randomly, you will have to redesign and retest for every performance level you want to develop for. That's what I've been saying: no game developer wants to make more than one, so they just use one thread, because that's what will run everywhere. But with the GPU, you KNOW the player will have enough power. Like I said, a sub-$100 card already has almost 1 TFlop of processing power.


----------



## SNiiPE_DoGG (Sep 29, 2009)

The people who are really interested in running physics at high res and high graphics settings have an i7 or other processor power to run it on the CPU; dual cores are dead to the high-end market. - And they had 3,500 boxes going at the end of the video (no ragdolls at that point).


----------



## Benetanegia (Sep 29, 2009)

FordGT90Concept said:


> DirectX 11 has no physics APIs, only the capability to run physics code on any DX11 GPU using the compute shaders.  The question is: Who is going to author the physics API for DX11?  Once you have an answer to that, I think you'll have an answer for the most dominent phyisics API 10 years from now.
> 
> 
> What has worked in the past and remains very popular is needs-based physics.  That is, you code it in house and make it as accurate as deemed necessary.  An open, free-to-everyone, physics standard isn't very likely.



It's becoming popular to make an in-house game engine, because almost all the developers are becoming bigger and are creating more than one studio, or are buying others, etc. It could happen that they eventually do the same with physics, but as of now they are primarily looking to third parties for the most part. According to this, at least: http://www.bulletphysics.com/wordpress/ - scroll down to Figure 10, most popular libraries.


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> The people who are really interested in running physics at high res and high graphics settings have an i7 or other processor power to run it on the CPU; dual cores are dead to the high-end market. - And they had 3,500 boxes going at the end of the video (no ragdolls at that point).



I don't have an i7, nor am I thinking of getting one, and I'm very interested in physics at high res. On the other hand, as soon as GT300 launches, I will buy that or an HD5xxx depending on the pricing, performance, etc., unless I don't find any game compelling enough, which sadly is more than likely.

Yeah, 3,500 boxes is a lot, but in the one I posted there are clearly much, much more than that. And as I said, that's an i7 and we don't know if it's overclocked. The average gamer, even one who looks for performance in games, has a dual core; until recently most people used to recommend dual cores to everyone who wanted to build a gaming PC. Those people won't change their CPU anytime soon, but they will probably change their card.


----------



## SNiiPE_DoGG (Sep 29, 2009)

It's a 965 @ 3.2 - I remember from the original article that was posted. I think that was more boxes than the physics-moving-trash video you posted... how can you know without a count?


----------



## Benetanegia (Sep 29, 2009)

SNiiPE_DoGG said:


> how can you know without a count?



Density and size of the boxes relative to the radius of the tornado. It's a rough calculation.

The 250,000 is said by the guy at some point, in the one with the car trail.


----------



## jessicafae (Sep 29, 2009)

OK I checked the whole thread looking for this but could not find it.

OpenCL and DX11 compute shaders are similar to each other, but different from CUDA and ATi Stream.

OpenCL is designed for a "heterogeneous compute environment", meaning many CPUs, GPUs, SSE vector units, Cell processors, Larrabee vector units, AMD Bulldozer vector units and so on. It does runtime just-in-time compiling (JIT) to determine available resources (and some load balancing), and is designed to provide a single programmer API and hide all the threading/hardware details. This means that any middleware or game engine coded to the OpenCL API will take advantage of ANY available resource, whether it is some free unused SSE bandwidth, available CPU cores, or available GPU shaders.  Since it does this at runtime, it is possible that one frame might use SSE for some physics calculations, but the next frame that physics code might run on some GPU shaders.  It also means that any code written to OpenCL will adjust to the hardware.  If one computer has a simple Core 2 Duo but a quad GPU, it will do most calculations on the GPUs. If a different computer has 24 i7 cores and a simple GPU, more of the calculations will be done on the i7 CPU/SSE units; if the code is run on a PS3, OpenCL will use the Cell processors. There is a developer at our lab porting his parallel/threaded/SSE code to OpenCL on OSX Snow Leopard right now, and he loves the API and the performance.
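The runtime device selection described above can be pictured with a toy dispatcher (pure illustration: the device names and throughput figures are hypothetical, and real OpenCL does this through its platform/device API and JIT compilation, not like this):

```python
def pick_device(devices, work_size):
    """Toy model of OpenCL-style runtime dispatch: given the devices
    visible at runtime, choose where a kernel should execute.
    Device names and 'throughput' numbers are hypothetical."""
    # Prefer the fastest device that can actually hold the work.
    candidates = [d for d in devices if d["max_work"] >= work_size]
    return max(candidates, key=lambda d: d["throughput"])["name"]

machine_a = [
    {"name": "core2-duo", "throughput": 20, "max_work": 10**6},
    {"name": "quad-gpu", "throughput": 900, "max_work": 10**8},
]
machine_b = [
    {"name": "i7-cpu", "throughput": 100, "max_work": 10**8},
    {"name": "basic-gpu", "throughput": 60, "max_work": 10**5},
]
# The same code lands on different hardware depending on the machine:
assert pick_device(machine_a, 500_000) == "quad-gpu"
assert pick_device(machine_b, 500_000) == "i7-cpu"
```

That is the appeal for middleware authors: one code path, and the runtime decides per machine (even per frame) where it executes.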

I think DX11 compute shaders target the same "heterogeneous compute" API concept as OpenCL. It is possible to code an engine using the DX11 graphics API and the OpenCL compute API and skip the DX11 compute shaders.

CUDA and ATi Stream are GPU-only APIs. Nvidia's recent OpenCL toolkit is GPU only. ATi's Windows/Linux OpenCL toolkit is currently CPU/SSE only, but the CPU/SSE/GPU version will be out soon. Apple's OSX 10.6 OpenCL toolkit is CPU/SSE/GPU right now and supports both Nvidia and ATi GPUs.

Havok has already ported sections of its toolkit to OpenCL.
ATi helped port the opensource Bullet physics toolkit to OpenCL.

And Nvidia is considering porting Physx to OpenCL
http://www.bit-tech.net/news/hardware/2009/03/27/nvidia-considers-porting-physx-to-opencl/1

Yes, today Nvidia's PhysX driver uses CUDA (or a CPU emulation layer in the PhysX driver) to do PhysX, but it is possible this may change. If Nvidia does not port PhysX to OpenCL or DX11 compute (and off of CUDA), they may lose any momentum PhysX has gained.

Personally, I have Mirror's Edge; I love the game and the PhysX effects, but I am definitely annoyed that I had to buy an Nvidia card to get the PhysX effects to work properly. Unfortunately we may have another couple of years before Nvidia-only PhysX/CUDA games disappear. It will probably be 2+ years before games using OpenCL middleware hit the market.

Sorry, I had hoped this post would be shorter.


----------



## Benetanegia (Sep 29, 2009)

I didn't know Bullet was already being ported to OpenCL. That's interesting, and could be the answer that so many people here are waiting for. I have always known that PhysX was going to be ported, though, because they'll have no option. I still think that most developers will go with PhysX or Havok at first, because the support is probably much better.


----------



## jessicafae (Sep 29, 2009)

There is also the Pixelux physics middleware toolkit which is being used by LucasArts. Pixelux is also being ported to OpenCL

http://www.youtube.com/watch?v=VAnXolinMPE
http://www.youtube.com/watch?v=VAnXolinMPE&feature=related


----------



## EastCoasthandle (Sep 29, 2009)

SNiiPE_DoGG said:


> the Confirmation Bias is strong with this one....
> 
> I wish I could find the demo of i7 running physics on Havoc with thousands of entities.... google is not turning it up though - was in the news like ~4 months ago IIRC



Are you looking for this:
video
video
video The camera shakes "no" at 31 seconds.   Listen to what is said.


----------



## MrMilli (Sep 29, 2009)

EastCoasthandle said:


> Are you looking for this:
> video
> video
> video The camera shakes "no" at 31 seconds.   Listen to what is said.



Well, that's what I'm expecting from a physics engine in 2009 - for it to be multi-threaded.
With Mark Randel being the man behind the Infernal Engine, I didn't expect anything less from VELOCITY Physics.
Ghostbusters: The Video Game has a huge amount of physics running at 30+ fps just on the CPU. These days you can get a quad core for less than $100... why not use that CPU power?
From what I have read in the past, developers preferred Havok over PhysX. It seems to me that the two main reasons PhysX gained traction are that it's free and it's built into the UT3 engine.
I know I'm repeating myself, but like I have said already, Nvidia could multi-thread the CPU version of PhysX in no time. The PPU and GPU versions are already multi-threaded.
The fact that Nvidia acquired Ageia in February 2008 and already had a demo running on a GPU in April 2008 says it all.
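
Partitioning a rigid-body update across cores is conceptually simple. A minimal sketch with a toy integrator (this is not the PhysX API; real engines hand chunks like these to native worker threads, and Python's GIL means this only demonstrates the partitioning, not an actual speedup):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy physics step: integrate velocity under gravity for a chunk of
# bodies, each represented as a (position, velocity) pair.
GRAVITY = -9.81
DT = 1.0 / 60.0

def step_chunk(bodies):
    out = []
    for pos, vel in bodies:
        vel = vel + GRAVITY * DT
        out.append((pos + vel * DT, vel))
    return out

def step_parallel(bodies, workers=4):
    # Split the body list into one chunk per worker and integrate the
    # chunks concurrently; each body is independent, so no locking needed.
    size = (len(bodies) + workers - 1) // workers
    chunks = [bodies[i:i + size] for i in range(0, len(bodies), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(step_chunk, chunks)
    return [b for chunk in results for b in chunk]

bodies = [(float(i), 0.0) for i in range(10000)]
assert step_parallel(bodies) == step_chunk(bodies)  # matches the serial path
```

The awkward part in a real engine is not this embarrassingly parallel integration step but the collision resolution between bodies, which forces synchronization; that is presumably where the single-threaded bottleneck lives.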


----------



## Benetanegia (Sep 29, 2009)

MrMilli said:


> Well, that's what I'm expecting from a physics engine in 2009 - for it to be multi-threaded.
> With Mark Randel being the man behind the Infernal Engine, I didn't expect anything less from VELOCITY Physics.
> Ghostbusters: The Video Game has a huge amount of physics running at 30+ fps just on the CPU. These days you can get a quad core for less than $100... why not use that CPU power?
> From what I have read in the past, developers preferred Havok over PhysX. It seems to me that the two main reasons PhysX gained traction are that it's free and it's built into the UT3 engine.
> ...



As I see it, it is multi-threaded too. Some games do use more than one thread, like Mirror's Edge. I have already said this, but IMHO the limitation is probably in the game engine and not in the PhysX engine.


----------



## RejZoR (Sep 29, 2009)

Funny. I thought Unreal Engine 3 was multi-threaded. And since Mirror's Edge is based on it, I find that kinda ironic.


----------



## MrMilli (Sep 30, 2009)

Well RejZoR, I hope this answers your question:

http://www.anandtech.com/video/showdoc.aspx?i=3171&p=3

From source:


> We also have a quick look at CPU usage for UT3 when running this map, to get an idea of just how well the CPU is being used; if it’s not being well used by the software PhysX simulations, then CTF-Lighthouse in particular is not a fair comparison. UT3 is designed around dual-core processors with some light helper threads to occupy cores 3 and 4, so if physics are the bottleneck then the multithreaded software PhysX simulations should be able to completely load the CPU.
> 
> However we are not seeing this. CPU usage is hovering at a little below 60% total usage on our QX6850. Given that we're not GPU limited, one thread in particular must be the CPU limited thread, and isn't capable of being further split. Our first thought is that the thread(s) that handle game logic also are doing the software physics, which would explain why we're not seeing a greater use of the other cores for the physics calculations. It's unfortunate if this is the case though, as it means that Unreal Tournament 3 (and possibly other Unreal Engine 3 games) is unable to fully utilize more than 2 cores and change. It also means that we potentially could be getting better software performance, which works against the PhysX hardware.
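
That ~60% figure is what Amdahl's law predicts when a single thread is the bottleneck. A quick sanity check (the parallel fraction below is a guess chosen to match the observation, not a measurement of UT3):

```python
# Amdahl's law: a fraction p of the work parallelizes across n cores,
# the remaining (1 - p) stays serial on one core.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def cpu_utilization(p, n):
    # The serial thread keeps one core busy for the whole frame, so
    # overall utilization is just the speedup divided by the core count.
    return speedup(p, n) / n

# Assuming ~78% of the frame's work is parallelizable reproduces the
# ~60% quad-core usage seen on the QX6850.
print(round(cpu_utilization(0.78, 4), 2))   # prints 0.6
```

In other words, a quad core sitting at 60% usage is entirely consistent with one CPU-limited thread that "isn't capable of being further split", exactly as the article concludes.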


----------



## FordGT90Concept (Sep 30, 2009)

Benetanegia said:


> It's becoming popular to make in-house game engines, because almost all developers are becoming bigger and are opening more studios or buying others, etc. It could happen that they eventually do the same with physics, but as of now they are primarily looking to third parties for the most part. According to this at least: http://www.bulletphysics.com/wordpress/ - scroll down to Figure 10, most popular libraries.


It only adds up to 63.9%.  Where's the majority?  In house/custom?


----------



## Benetanegia (Sep 30, 2009)

FordGT90Concept said:


> It only adds up to 63.9%.  Where's the majority?  In house/custom?



First of all, the rest is not the majority; ~64% is the majority. Never mind.

The rest is a mix of many other engines, some in-house, some not, according to the site where I first saw that chart posted. They even named a few, but I don't remember them. The others were probably engines that had been used in fewer than 5 games, but still third party. Others are in-house and probably meant to be used in many games released over a short time period (so that technical improvement is not expected). It's not cost effective to design a new one for every game.


----------



## FordGT90Concept (Sep 30, 2009)

I do believe a lot of that 36.1% is developers that don't use a third-party physics API. Instead, they just use a collection of static formulae integrated into their game engine at some point. They might use the same engine in multiple games, but it doesn't count as a separate physics engine or API.
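
A "collection of static formulae" often just means closed-form kinematics evaluated directly in the game code, with no middleware at all. A minimal hypothetical sketch (not taken from any shipping engine):

```python
import math

# Closed-form projectile motion: the standard kinematic formulae,
# computed on demand, with no physics middleware involved.
GRAVITY = 9.81

def projectile_position(speed, angle_deg, t):
    """Position (x, y) of a projectile t seconds after launch."""
    a = math.radians(angle_deg)
    x = speed * math.cos(a) * t
    y = speed * math.sin(a) * t - 0.5 * GRAVITY * t * t
    return x, y

def time_of_flight(speed, angle_deg):
    """Time until the projectile returns to launch height."""
    return 2.0 * speed * math.sin(math.radians(angle_deg)) / GRAVITY

# A grenade thrown at 20 m/s and 45 degrees lands after ~2.88 s.
t = time_of_flight(20.0, 45.0)
x, y = projectile_position(20.0, 45.0, t)
```

Formulae like these give deterministic, cheap results for scripted effects, but they can't handle the general rigid-body interaction that a full engine like Havok or PhysX solves iteratively.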


----------



## Benetanegia (Sep 30, 2009)

FordGT90Concept said:


> I do believe a lot of that 36.1% is developers that don't use a third-party physics API. Instead, they just use a collection of static formulae integrated into their game engine at some point. They might use the same engine in multiple games, but it doesn't count as a separate physics engine or API.



I don't doubt that much of that 36% is using a proprietary engine, but as you very well pointed out, it's probably more like some extensions added within their engine, which is used (or is going to be used) in all of their games. What I mean is that most of the games that use a proprietary engine come from big companies, which will use that engine in most of their games. Take Ubisoft, for example: they release a lot of internal games and most of them are based on the same engine. Crytek also has 3 studios and a lot of working projects (as many as 4, apparently) which use CryEngine, so it does make sense for them to make their own thing. But smaller studios* have to concentrate on the game and not on the tech.

Anyway, IMO a lot of that 36% also comes from games that don't require a complete physics engine because of how the game is. Many strategy games, for example.

*Doesn't mean their games are small. Think BioShock, think Mass Effect...


----------

