Friday, March 20th 2009
AMD to Demonstrate GPU Havok Physics Acceleration at GDC
GPU-accelerated physics is turning out to be the one piece of the feature set AMD is still yearning for. One of NVIDIA's most profitable acquisitions in recent times has been that of Ageia Technologies and its PhysX middleware API. NVIDIA went on to port the API to its proprietary CUDA GPGPU architecture, and is now using it as a significant PR tool as well as a feature that is genuinely grabbing game developers' attention. In response to this move, AMD's initial reaction was to build a strategic technology alliance with the main competitor of PhysX: Havok, despite its acquisition by Intel.
At the upcoming Game Developers Conference (GDC), AMD may firm up its plans to bring out a GPU-accelerated version of Havok, which has until now run on the CPU. The API has featured in several popular game titles such as Half-Life 2 and other Valve Source engine titles, as well as Max Payne 2. ATI's Terry Makedon revealed on his Twitter feed that AMD would put forth its "ATI GPU Physics strategy." He added that the company would present a tech demonstration of Havok technology working in conjunction with ATI hardware. The physics API is expected to utilize OpenCL and AMD Stream.
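The announcement doesn't say how such a port would work under the hood, but as a rough illustration of the kind of data-parallel workload GPU physics involves, a single integration step written in OpenCL C might look something like the sketch below. The kernel name, argument layout, and the simple semi-implicit Euler scheme are illustrative assumptions for this post, not Havok or AMD Stream code.

/* Hypothetical kernel: advance a set of particles by one timestep.
   One work-item handles one particle -- the kind of independent,
   massively parallel work a GPU is far better suited to than a CPU. */
__kernel void integrate_particles(__global float4 *pos,    /* xyz = position */
                                   __global float4 *vel,    /* xyz = velocity */
                                   const float4 gravity,    /* e.g. (0, -9.81, 0, 0) */
                                   const float dt)          /* timestep in seconds */
{
    size_t i = get_global_id(0);

    /* Semi-implicit Euler: update velocity first, then position. */
    vel[i] += gravity * dt;
    pos[i] += vel[i] * dt;
}

The host application would compile and dispatch such a kernel through the OpenCL runtime, which is where AMD Stream's OpenCL support would come into play.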
Source:
bit-tech.net
226 Comments on AMD to Demonstrate GPU Havok Physics Acceleration at GDC
There were talks of Havok FX (Havok on the GPU) a long time ago, but it never happened. Havok just appears to be in limbo. Intel doesn't want to do anything with it because of the issues with Larrabee. They are just expanding the libraries to do more things like AI pathfinding and cloth.
But think about it: what is Havok's speciality? Making physics code believable but not very intensive, so it can run on the CPU without causing problems. There really is no market for them to make a more complex physics engine (like PhysX) that increases the hardware burden substantially when the end user can't tell the difference. Havok is Havok--it works great and is hardware friendly.
Maybe Intel, Havok, and AMD discovered this, so Havok is now just minding its own business, improving on its already successful product?
Oh, and no Fermi. Too hot for my region. I have to wait until the next gen.
Let's also not forget that NVIDIA is the reason DX 10.1 exists: DX 10.1's features were intended to be part of DX 10, but NVIDIA couldn't make Microsoft's deadline, so Microsoft had to ship DX 10 and later release DX 10.1 with the features NVIDIA couldn't support but AMD could. Even then, it took NVIDIA years to finally adopt it. NVIDIA was also late releasing DX11 parts by about half a year.
Oh, and the obscenely high failure rates on GeForce 8 series cards. :(
All of the above are the reasons I went back to AMD. If Intel did that, Havok would be like PhysX with rare implementations. Intel won't do that for the sake of keeping Havok a viable company.
TWIMTBP does not cripple ATI hardware, PERIOD. It just means nVidia took the time to help the dev optimize for their hardware. Optimizing for nVidia is not the same as crippling ATI. ATI has the same opportunity to offer dev help, but usually chooses not to. How is this, in any way, "evil" nVidia crippling ATI?
PS: Sorry if I sound snippy, tig. It's not meant to be personal; it's just a frustrating topic to see popping up all the time. Except that for the past year and a half, it's been ATI with the more bug-laden drivers. nVidia's turn will come back around again, though. Both companies go back and forth on driver quality, just like they go back and forth on performance.
And the failure rate on 8 series cards did not seem that high to me. Certainly not much different than ATI. Unless you meant the big defective batch of mGPUs?
Nonsense.
www.bit-tech.net/bits/interviews/2010/01/06/interview-amd-on-game-development-and-dx11/1
Besides, nVidia still does it more, and has stronger ties to more devs. ATI (and AMD in general, actually) does not push itself as hard in the market as it could, especially compared to its competitors.
PS: ATI is launching their answer to TWIMTBP this year, from what I understand. So we may be seeing more of them in the dev process of games. It's about damn time, too.
And his claim that it worked perfectly fine on ATI hardware is a bald-faced lie. It was shown that it doesn't work properly on ATI, even if you spoof it. There were screenshots all over the place proving it when the game released: ATI did not apply AA on shadows, among other weird anomalies. They may have since fixed it in drivers, but at launch it was, in fact, broken. And it was not nVidia's job to get it working on ATI hardware either. If they had left it unlocked, they would've caught hell for it being broken, and we still would've seen people claiming they did it on purpose. They were damned if they did, and damned if they didn't.
forums.techpowerup.com/showthread.php?t=119242
edit: read the thread.
The fact that you need to FORCE AA means that the game does NOT "offer" AA on ATI drivers - it just means there is a way to make it work, even if it is performance heavy.
Nvidia gets an AA mode that only applies to what's necessary - ATI users are forced to use a generic AA profile that takes a large performance hit (it applies AA to unnecessary elements). I dunno about you, but it's very clear to me that ATI was blocked and had to resort to workarounds to even get this slower AA method working.