Friday, March 20th 2009
AMD to Demonstrate GPU Havok Physics Acceleration at GDC
GPU-accelerated physics is turning out to be one feature AMD has been yearning for. One of NVIDIA's most profitable acquisitions in recent times has been that of Ageia Technologies and its PhysX middleware API. NVIDIA went on to port the API to its proprietary CUDA GPGPU architecture, and is now using it as a significant PR tool, as well as a feature that is genuinely grabbing game developers' attention. In response to this move, AMD's initial reaction was to build a strategic technology alliance with PhysX's main competitor, Havok, despite the latter's acquisition by Intel.
At the upcoming Game Developers Conference (GDC), AMD may materialize its plans to bring out a GPU-accelerated version of Havok, which has until now run on the CPU. The API has featured in several popular game titles such as Half-Life 2, Max Payne 2, and other Valve Source-based titles. ATI's Terry Makedon revealed in his Twitter feed that AMD would put forth its "ATI GPU Physics strategy." He also added that the company would present a tech demonstration of Havok technology working in conjunction with ATI hardware. The physics API is expected to utilize OpenCL and AMD Stream.
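For readers wondering what "GPU-accelerated physics over OpenCL" looks like at the code level, below is a minimal, purely illustrative sketch in C: a toy OpenCL kernel that performs a simple Euler integration step for a set of particles, one work-item per particle. This is not Havok's or AMD's actual implementation; the kernel, buffer names, and particle count are assumptions made up for the example, and error checking is omitted for brevity.

/* Illustrative sketch only: one GPU work-item per particle advances a
   simple Euler integration step. Not Havok/AMD code; names are hypothetical. */
#include <CL/cl.h>
#include <stdio.h>
#include <string.h>

#define N 1024  /* number of particles in this toy example */

static const char *kSource =
    "__kernel void integrate(__global float4 *pos,            \n"
    "                        __global float4 *vel,            \n"
    "                        const float dt)                  \n"
    "{                                                        \n"
    "    int i = get_global_id(0);                            \n"
    "    vel[i].y -= 9.81f * dt;    /* gravity along -y */    \n"
    "    pos[i]   += vel[i] * dt;   /* advance position */    \n"
    "}                                                        \n";

int main(void)
{
    cl_float4 pos[N], vel[N];
    memset(pos, 0, sizeof(pos));
    memset(vel, 0, sizeof(vel));
    float dt = 1.0f / 60.0f;        /* one 60 Hz frame */

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Upload particle state to the GPU. */
    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(pos), pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(vel), vel, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    clSetKernelArg(k, 0, sizeof(cl_mem), &dpos);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dvel);
    clSetKernelArg(k, 2, sizeof(float), &dt);

    /* Launch N work-items; the GPU advances every particle in parallel. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof(pos), pos, 0, NULL, NULL);

    printf("particle 0 y after one step: %f\n", pos[0].s[1]);
    return 0;
}

The appeal for a physics middleware vendor is that the same OpenCL source can be dispatched to AMD, NVIDIA, or even CPU devices, which is presumably why the demo is expected to target OpenCL and AMD Stream rather than a vendor-specific path.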
Source:
bit-tech.net
226 Comments on AMD to Demonstrate GPU Havok Physics Acceleration at GDC
1. That it will never reach the heights that GPU physics would reach. GPUs are always going to be faster (at number crunching) than any x86-based CPU/GPU for the same die area and power consumption. In any case you would need to buy an Intel GPU, or for AMD to start making them, because CPUs will never hold enough power in them. It's not cost effective to add so many ALUs to a CPU when most of the time they will be idling. And high-end on-die GPUs, so that you don't need a discrete one (and hence the ALUs would be working most of the time), will never happen because of thermal constraints.
2. Because of the growing importance of physics in games, if Havok is used and it stays x86-based, the better gaming solution would be an Intel GPU, because at graphics+physics combined it would be faster, even though it would lag far behind at pure graphics. It would also lag in physics if Havok weren't used and true GPUs from Nvidia and AMD could take advantage of PhysX instead of being forced to adopt Intel's path.
Not to mention that I think it's time for everybody to open their eyes and see that AMD has nothing on Intel when it comes to x86; Intel is ages ahead and I don't think AMD will ever catch up. They will continue being competitive because that's what most benefits Intel, but they will never be ahead again.
I'm sorry for you, really. Just out of curiosity, how much did you pay for those stocks? I was close to buying €4,000 of AMD stock around 18 months ago at $7-8 or so I think, and I'm so happy that I didn't... uff, I dodged that one by a hair. :ohwell:
FYI me owning a part of AMD has nothing to do with my views. I used to own an 8800 on my old gaming system ;)
Yes, AMD made a step in the right direction, and they are now competitive in their bracket, but I doubt they'll pass Intel up in performance any time in the next couple of years. Intel has a much larger R&D budget.
There you have it: Havok Cloth and Havok Destruction accelerated by the GPU over OpenCL.
I don't see it taking them that long, honestly. I would bet Nvidia is already working on the project, maybe with only a couple of people, but the fact is if they get started early they can put it out either early or on the day DX11 is available.
Also, from what I have read, this is NOT TIED TO DX11 shaders, so you won't have to replace your 8800+ or HD 2000+ card to use it (hell, even the X1900/1950 are actually capable of this kind of work).
The prospect is exciting to me, if only both companies would grow up and realise they need to team up against Intel (the main company driving this "keep everything on the CPU" BS).
IF ATI/AMD and Nvidia could work together in at least a limited fashion they could take some of the wind out of Intel's sails. I mean it wouldn't be perfect, but at least AMD/ATI and NV onboard graphics could be used as a PPU. Think about it: again not "optimal", but if all boards had onboard graphics that could support PhysX (780/790GX and the Nvidia equivalents)...
blah......just blah!!
I would bet you start seeing other physics engines moving to support OpenCL as well; it just makes sense to support it if available.
For Nvidia, pushing PhysX would have no downside really. Even if it's allowing support over OpenCL, their CUDA-based implementation would be more optimized anyway; after all, they own the engine, so they could make sure it just runs better on their cards than on OpenCL-based cards.
I will say, PhysX is actually one thing I miss about owning an NV card. But then again, I really like GRAW and GRAW2 a lot.
NV PhysX, which only works on G92 and newer cards
Havok, which works on ATI cards (HD 2000 and up)
Now it's a tough choice; it comes down to whatever's easiest to code, or cheapest to use. If one of them goes OpenCL and becomes viable on both brands, that engine takes the lead: no alienating your customer base.
As to Havok vs PhysX, they both have their flaws and advantages, as I have said before. As I understand it, Nvidia is giving PhysX away
developer.nvidia.com/object/physx.html
In fact they are!!!
PhysX has gone "licence free"; this makes it more attractive to developers than Havok, which is owned by Intel and thus requires them to get a licence from Intel to use/support it.
Screwup on my part, I was getting confused because I've been using CoreAVC, and that's G92 and up.
www.guru3d.com/article/geforce-gtx-275-review-test/7
Ambient Occlusion
www.guru3d.com/article/geforce-gtx-275-review-test/6
The AO performance hit in games I have tested has been pretty low on my 8800 GTS 512 MB as long as I'm using the 185.66 drivers; with older drivers the hit was higher. I would guess that the performance hit will go down even more as the drivers mature :)
WoW, for example, looks amazing with AO enabled; shadows actually are NICE. NWN2 looks better, as do some other older games I have tested.
At a guess, game devs are holding off until ATI and NV have working DirectCompute/OpenCL drivers.