Friday, March 20th 2009
AMD to Demonstrate GPU Havok Physics Acceleration at GDC
GPU-accelerated physics is turning out to be the one item on the spec sheet AMD is yearning for. One of NVIDIA's most profitable acquisitions in recent times has been that of Ageia Technologies and its PhysX middleware API. NVIDIA went on to port the API to its proprietary CUDA GPGPU architecture, and is now using it as a significant PR tool as well as a feature that is genuinely grabbing game developers' attention. In response to this move, AMD's initial reaction was to build a strategic technology alliance with the main competitor of PhysX: Havok, despite its acquisition by Intel.
At the upcoming Game Developers Conference (GDC), AMD may materialize its plans to bring out a GPU-accelerated version of Havok, which has until now run on the CPU. The API has featured in several popular game titles such as Half-Life 2, Max Payne 2, and other Valve Source-based titles. ATI's Terry Makedon has revealed on his Twitter feed that AMD will put forth its "ATI GPU Physics strategy." He also added that the company will present a tech demonstration of Havok technology working in conjunction with ATI hardware. The physics API is expected to utilize OpenCL and AMD Stream.
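Whether the demo ends up running on OpenCL, AMD Stream, or both, the work being offloaded is the same kind of embarrassingly parallel computation: integrating thousands of particles or rigid bodies every frame. As a rough illustration only (not Havok's or AMD's actual code; the kernel name and parameters are hypothetical), a data-parallel integration step in OpenCL C might look like this:

/* Hypothetical OpenCL C kernel: explicit Euler integration of particles.
   Each work-item advances one particle; names and parameters are illustrative. */
__kernel void integrate_particles(__global float4 *pos,
                                   __global float4 *vel,
                                   const float4 gravity,
                                   const float dt,
                                   const uint count)
{
    uint i = get_global_id(0);          /* one work-item per particle */
    if (i >= count)
        return;

    vel[i] = vel[i] + gravity * dt;     /* accumulate acceleration */
    pos[i] = pos[i] + vel[i] * dt;      /* advance position */
}

A host application would compile such a kernel at run time and enqueue it across the whole particle array, which is exactly the sort of workload a GPU handles far faster than a CPU.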
Source:
bit-tech.net
226 Comments on AMD to Demonstrate GPU Havok Physics Acceleration at GDC
The problem we have today is that we have the two top companies in GPUs acting childish and refusing to just PLAY NICE and support everything.
All they are doing is hurting the customer in this case. It seems like Intel/NVIDIA (both acting very childish) are constantly at war, as now are Intel/AMD (Intel being childish), and ATI/NVIDIA of course, both acting childish by not supporting each other's standards/capabilities.
Large/huge companies acting like children = we all lose...
It's possible because MS and Intel (owner of Havok) are BFFs, and read the first post clearly - it is "assumed" this will be done via ATI Stream... there's no evidence either way that this can't be tied into DX11 (we'll need to see what OS they run on when they do this demo).
EA, for example, has distributed it to most if not all of their dev houses (I hate EA, but they are without doubt one of the largest game publishers out there).
Partial list of PhysX games from Wikipedia: en.wikipedia.org/wiki/PhysX
In my experience, PhysX and Havok both have their own advantages. Havok has better ragdoll effects, whereas PhysX has FAR FAR FAR better vehicles. I am not alone in this opinion; games like Mass Effect show how much better PhysX is with vehicles than Havok.
Havok vehicles are... well, they feel like toys, is what I think when I play Havok games that have them.
Neither is better if you are talking about being well rounded; they both have their own pluses and minuses.
I'm betting that NVIDIA ports their PhysX driver to support OpenCL in DX11 (and most likely there will be an OpenCL update/install for all Windows versions).
Stop this "Havok is better" and "PhysX is better" and such; they are both good engines. The fact is that if it wasn't for Intel, hardware-accelerated physics (Havok/PhysX/etc.) would already have been here back in the X1900 days. Intel paid good money to ensure that game developers, and even Havok themselves, didn't push to get GPGPU support built in. Intel feels everything is going to be on the CPU and that GPUs are dying/a dead end; they have said this (mostly because they don't have their own GPUs, just GMA, which is still based off the i740, a chip from when AGP first came out).
Blah, damn selfish companies :/
If you're talking about driving up hills and jumping off, Test Drive Off-Road 3 and Hard Truck: Apocalypse are IMO better. Especially HTA. Mass Effect really didn't have anything special/unique in terms of vehicles and/or physics from my perspective.
But there is something that confuses me: how do Intel and AMD work together?
Under DX11, anyone can use it via GPGPU.
Under DX9/10, ATI translates it to work via Stream. This would give ATI a 1-2 year head start on NV, while choosing the standard more likely to stay in the market long term (since ATI won't do PhysX, it's certainly hampering pickup).
Why can't Intel code it to work in DX11 GPGPU generically, while ATI works on just Stream for now?
And btarunr, MS has confirmed that DX11 isn't tied to Win7 like 10 was to Vista, so I don't see the validity of your point; MS will probably put out 11 for Vista at the same time, or VERY close to the same time, it becomes available for Win7.
And MS is using OpenCL (Google it) for their GPGPU work; neither ATI nor NVIDIA is any farther along in GPGPU than the other at the moment. The difference is that ATI got GPGPU out first, but it didn't go anywhere with the 1900 and 2900 cards other than Folding, mostly because Intel is threatened by GPUs doing work that was done by CPUs.
Now even the open-source community is getting involved with GPGPU; MediaCoder, VLC, and many other projects are working on GPU-accelerated processing, and they would rather be able to pipe it in via OpenCL because it avoids using CUDA or Stream directly, and thus isn't locked into one company's cards or the other's.
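For what it's worth, that vendor neutrality shows up right at the host-API level: the same OpenCL program can enumerate whatever platforms are installed and pick up an AMD, NVIDIA, or other GPU without a CUDA- or Stream-specific code path. A minimal, illustrative C sketch (assuming an OpenCL SDK is installed; link with -lOpenCL):

/* Illustrative only: list every installed OpenCL platform that exposes a GPU. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }
    if (num_platforms > 8)
        num_platforms = 8;

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

        cl_device_id device;
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 1, &device, &num_devices) == CL_SUCCESS
            && num_devices > 0) {
            printf("GPU available on platform: %s\n", name);  /* could be AMD, NVIDIA, etc. */
        }
    }
    return 0;
}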
Havok use has dropped off a lot since Intel bought Havok... a lot of companies have been using other solutions or creating their own.
As I stated before, physics in games are only as real as the game developers want them to be. In most cases, physics (beyond the basics) are one of the lowest priority tasks (if not the lowest priority task) of game development. They can cut all kinds of corners there that increase performance quite dramatically and the player won't even notice unless they are looking for it. That's essentially why the PPU failed--it just doesn't fit the game development paradigm (think cost vs benefit).
For example, Nightfire was far more entertaining to me than Quantum of Solace because Nightfire had very arcade-ish physics (very fast paced) and player damage schemes (takes 4 shots to the head to kill with the PP9). If you take that arcade feeling out of it, the game becomes boring.
And maybe I'm wrong, but when that happens, it's going to be very interesting, because ATI cards will be able to run PhysX, with AMD's permission or without. The only requirement for PhysX will then be a card capable of running DX11 compute shaders, and AMD cards will do so. It will matter very little whether AMD wants PhysX on their cards or not; unless they do something shady, they will not be able to prevent that.
The one that will win this game is the one that gets better support now and IMHO that's PhysX at the moment.
IMO, it all comes down to accessibility, ease of use, etc., and any proprietary app will fail long term.
*slowly backs away from thread*