Tuesday, September 29th 2009
Batman: Arkham Asylum Enables AA Only on NVIDIA Hardware on PCs
Anti-aliasing (AA) has long been one of the most basic image-quality enhancements available in today's games. PC graphics hardware manufacturers regard it as something of an industry standard, and game developers follow suit by integrating AA into their engines. This lets a game apply AA selectively to parts of the 3D scene, so overall image quality improves while performance is preserved, since not every object in the scene is given AA. It turns out that one of the most heavily marketed games of the year, Batman: Arkham Asylum, refuses to offer its in-game AA implementation when it detects ATI Radeon graphics cards.
Developed under NVIDIA's The Way It's Meant to Be Played program, and featuring NVIDIA's PhysX technology, the game's launcher disables in-game AA when it detects ATI Radeon graphics hardware. AMD's Ian McNaughton said in a recent blog post that the company confirmed this with an experiment in which ATI Radeon hardware was run under changed device IDs. Says McNaughton: "Additionally, the in-game AA option was removed when ATI cards are detected. We were able to confirm this by changing the ids of ATI graphics cards in the Batman demo. By tricking the application, we were able to get in-game AA option where our performance was significantly enhanced." He adds that the same workaround is not possible with the retail game, as it is protected by SecuROM.
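For illustration, the kind of check being described might look something like the minimal sketch below. This is purely a hypothetical reconstruction, not code from the game or its launcher; the function name and gating logic are invented, while the PCI vendor IDs shown (0x10DE for NVIDIA, 0x1002 for ATI/AMD) are the real ones an application could read from the hardware.

# Hypothetical illustration only: the function name and gating logic are
# invented for this sketch and are not taken from the game or its launcher.
# The PCI vendor IDs are the real ones a program can query from the GPU.
NVIDIA_VENDOR_ID = 0x10DE
ATI_VENDOR_ID = 0x1002   # ATI/AMD

def ingame_aa_available(vendor_id: int) -> bool:
    """Expose the in-game AA option only when an NVIDIA vendor ID is seen."""
    return vendor_id == NVIDIA_VENDOR_ID

# AMD's experiment amounts to changing the ID the application sees:
print(ingame_aa_available(ATI_VENDOR_ID))     # False -> AA option hidden
print(ingame_aa_available(NVIDIA_VENDOR_ID))  # True  -> AA option shown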
With no in-game AA available to ATI Radeon users, even though the feature technically works on ATI Radeon hardware, the only way to use AA is to force it in the Catalyst Control Center. This makes the driver apply AA to every 3D object in the scene, reducing performance compared to using the game's in-game AA path. "To fairly benchmark this application, please turn off all AA to assess the performance of the respective graphics cards. Also, we should point out that even at 2560×1600 with 4x AA and 8x AF we are still in the highly playable territory," McNaughton adds. Choose with your wallets.
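To make the performance argument concrete, here is a conceptual sketch, again not the game's actual code, of the difference between engine-controlled AA, which can skip objects that gain little from it, and a driver-level override that applies AA to everything in the scene.

# Conceptual sketch only: names and structure are invented for illustration.
# An engine that controls AA itself can skip objects that gain little from it,
# while a driver override (e.g. forcing AA in Catalyst Control Center) applies
# AA to every object regardless of cost.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    benefits_from_aa: bool  # e.g. geometry with high-contrast edges

def render_pass(objects, driver_forced_aa=False):
    for obj in objects:
        use_aa = driver_forced_aa or obj.benefits_from_aa
        print(f"{obj.name}: AA {'on' if use_aa else 'off'}")

scene = [SceneObject("character model", True),
         SceneObject("full-screen fog volume", False)]
render_pass(scene)                         # engine decides per object
render_pass(scene, driver_forced_aa=True)  # forced in the driver: AA everywhere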
353 Comments on Batman: Arkham Asylum Enables AA Only on NVIDIA Hardware on PCs
And if they used a commercial solution, wouldn't they have somewhere in the credits "developed using X-brand AA", like there basically is for physics/audio...?
Yeah, I'd imagine most high-end cards will run it with CCC AA with no issues... Since Unreal Engine 3 is getting on in years.
Of course I don't have the game (doesn't interest me enough) so I couldn't tell ya for sure. :ohwell:
The other option is supersampling, which doesn't require being implemented in the engine. The frame is rendered at (more or less) four times the sample count and blended down into one image. The quality is better than MSAA, but the performance hit is huge.
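As a rough illustration of that idea (not how any particular driver implements it), supersampling boils down to rendering more samples than there are output pixels and averaging them down; the sketch below averages 2x2 blocks of a higher-resolution grid into single pixels.

# Minimal sketch of the idea behind supersampling, for illustration only:
# render at a higher sample count (here 2x2 samples per final pixel) and
# average the samples down to one pixel. Real SSAA runs on the GPU.
def downsample_2x2(hi_res):
    """Average each 2x2 block of a 2H x 2W grid into one output pixel."""
    h, w = len(hi_res) // 2, len(hi_res[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = (hi_res[2*y][2*x] + hi_res[2*y][2*x + 1] +
                 hi_res[2*y + 1][2*x] + hi_res[2*y + 1][2*x + 1])
            out[y][x] = s / 4.0
    return out

# A hard black/white edge rendered at 4x the pixel count...
hi = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 1, 1, 1],
      [0, 1, 1, 1]]
print(downsample_2x2(hi))  # -> [[0.0, 1.0], [0.5, 1.0]]: the stair-step blends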
Epic didn't implement AA into UE3 for some reason (the PS3 can't do HDR+AA, AA is difficult to implement in deferred engines, or whatever the reason). The developer behind Batman wasn't going to implement it, but I suppose Nvidia convinced them. That's what TWIMTBP is for. The situation is unusual; most engines have AA implemented.
Also, I already posted a link to THE thread where its OP admitted forcing AA in CCC does not work correctly.
To date PhysX has been nothing but a sales gimmick. Portal and HL2 were way better in terms of gameplay.
PhysX had/has a lot of potential. However, it hasn't even come close to showing its true potential in games, simply because it is proprietary and not supported on all hardware. So developers have to create a normal game, then just add a few PhysX elements to it later. Nothing related to gameplay is PhysX-based, because that would ruin the game for people without PhysX.
Now if a developer based the game and its gameplay elements on PhysX right from the beginning of development, we would see some pretty amazing stuff: far more realistic environments, fully destructible environments. Imagine Counter-Strike, but instead of having to enter a building only through the door or a window, you could also just blow a hole in the wall and walk in, and not just at certain pre-defined spots, but anywhere in the wall you wanted.
Sadly, we will never see this because it doesn't run natively on ATi hardware. It is clear that nVidia knew this was required for PhysX to really show its potential, and this is why they wanted to get it up and running on ATi hardware. I'm sure at the time ATi definitely didn't want this, since they were in bed with Intel and Havok.
Again where is the beef man?
Besides, you have to be a developer to understand the raw data that's out there; I don't have the ability to translate it. All the info you need is in the PhysX and Havok SDKs. Download them and have a go at it.
Not to mention, we haven't even touched on how much faster GPUs are at crunching physics numbers than CPUs. It's just common sense that PhysX is capable of more. Even if it can only do the same types of physics, it can still do more of them.
You say Intel went with Havok because Nvidia owns PhysX, but I say it's because PhysX is inferior. I also believe it will soon be dead, too. Say what you will, but my proof is in practice; yours is in theory.
And more engines use PhysX than you think. PhysX also has a CPU-based API, just like Havok.
Again, the adoption rate is low because devs don't like to alienate customers. This is nV's fault for sure, for not making GPU PhysX run on an open standard, but adoption rates do not in any way prove capabilities. Not to mention, how much longer has Havok been around? That's a pretty piss-poor argument, tbh.
And PhysX is not necessarily dead either. With the release of OpenCL, all nVidia has to do is port it from CUDA to OpenCL and it will be alive and well. Whether they do that or not is a different story. They seem to have pride issues about opening up their APIs for maximum exposure.
At any rate, nothing you have mentioned points to PhysX having inferior capabilities. You still haven't proven anything either.
Anyway, I don't feel PhysX is inferior in its capabilities. I feel it's inferior due to the way it's executed (Nvidia-only hardware). What I do believe is that it's no better than Havok, and even when it's GPU-accelerated I have yet to see it do anything that Havok cannot do and hasn't been proven to do. Does it have more potential in theory? Hell yeah, but I haven't seen a damn thing yet to justify a dedicated GPU, other than some slick marketing by Nvidia.
As for adoption rates, just look at Havok vs PhysX SINCE PhysX was first released. I think you'll be surprised.
None of that changes the fact that it's capable of more than any CPU-based physics.