Friday, March 19th 2010
NVIDIA Claims Upper Hand in Tessellation Performance
A set of company slides leaked to the press reveals that NVIDIA is claiming the upper hand in tessellation performance. With this achievement, NVIDIA is looking to encourage leaps in geometric detail, probably in future games that make use of tessellation. NVIDIA's confidence comes from the way its GF100 GPU is designed (further explained here). Each GF100 GPU physically has 16 PolyMorph Engines, one per streaming multiprocessor (SM), which enables distributed, parallel geometry processing. Each PolyMorph Engine has its own tessellation unit. With 15 SMs enabled on the GeForce GTX 480 and 14 on the GeForce GTX 470, there are that many independent tessellation units.
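As a back-of-envelope sketch of the point above (the SM counts come from the article; the one-tessellator-per-SM scaling is the article's claim, and the code itself is purely illustrative, not anything NVIDIA published):

```python
# Each GF100 SM carries one PolyMorph Engine, which contains one tessellator.
POLYMORPH_ENGINES_PER_SM = 1

# Enabled SM counts per card (of 16 physical SMs on the full GF100 die).
cards = {
    "GeForce GTX 480": 15,
    "GeForce GTX 470": 14,
}

for name, sms in cards.items():
    tessellators = sms * POLYMORPH_ENGINES_PER_SM
    print(f"{name}: {sms} SMs -> {tessellators} independent tessellation units")
```

The takeaway is simply that tessellation throughput scales with the number of enabled SMs, rather than being bottlenecked by a single fixed-function tessellator for the whole chip.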
NVIDIA demonstrated its claims in the presentation using the Unigine Heaven benchmark, where the GeForce GTX 480 was pitted against a Radeon HD 5870. In scenes with lighter tessellation, the GPUs performed neck-and-neck, with the GTX 480 ahead more often than not. But in scenes with heavy tessellation (particularly the "dragon" scene, where a highly detailed dragon model is rendered with densely tessellated meshes), the GTX 480 clocks nearly a 100% performance lead over the HD 5870. NVIDIA has been confident about its tessellation performance since January, when it detailed the GF100 architecture. The GeForce GTX 400 series graphics cards will be unveiled on the 26th of March. Images Courtesy: Techno-Labs
145 Comments on NVIDIA Claims Upper Hand in Tessellation Performance
You want to know why BF:BC2 does not support FSAA in DX9? Because they simply ported it from the console, which couldn't afford FSAA. :slap:
DX11 is actually taking off much better than DX10 did; at the very least there are games that support DX11 within the first six months of the hardware being released.
DX10 fell flat on its face mainly because of how much people hated Windows Vista, and the fact that NVIDIA released weak mid-range 8600 GTs that couldn't even run DX9 games maxed out.
The point about FarCry2 not looking much better in DX10?
That game is based on the Dunia engine, which is a modified CryEngine; it's a DX9 engine with DX10 support added on.
The Halo 2 engine was designed mainly for the Xbox, which makes it an even worse example.
By the way, Halo 2 was released in 2004, way before Vista even existed. These are DX9-native games.
Most important thing is this is a thread about the new DX11 GPU from nVidia, not some Console vs PC thread.
Mussle's earlier post was to point out that NVIDIA's 3D Vision has too many limitations, including FPS/refresh-rate issues in DX10/11; those have nothing to do with Vsync.
However, a 6-monitor setup (or a 3-monitor one) can be wrapped around your field of vision and lets you see more of what's going on in games. It's especially useful in racing games: at a normal 16:9 ratio you'd rely on a button to check who's beside you, whereas with multiple monitors you could just glance to the side quickly without the distraction.
The difference is that multiple monitors actually facilitate gaming, whilst 3D doesn't do anything but create an illusion.
You may find your 23 inch screen big enough but others who want more than just eye candy would object.
www.youtube.com/watch?v=X6jYycRmWz4
3D, however, looks amazing, as everything jumps out at you. I don't play like that daily, just once in a while as a treat or to show it off. An ultra-high-res 23" monitor that supports 2560 x 1600 would be awesome because there would be no need for FSAA. Until then, a 56" DLP would probably take up less real estate than a 6-monitor setup, consume less power, and be cheaper. It could also be used as a regular TV and monitor, and have no thick bezels in between. It could also do 3D if I please.
But whatever, to each their own.
IMO it's useless in FPS games, though.
Anyway, back on topic.
I hope it's proven correct when these cards are released and at a competitive price..
ok gonna go ride my unicorn across the river of chocolate now...
We need to see reviews from other sources, not from NVIDIA!!
Of course NVIDIA will show itself winning in its own benchmarks, even if it's losing to ATI..
Anyway 6 days to go and we'll see who is the KING OF HELL !!
Newer games are developed with DX10 and 11 in mind; just because you don't care about quality doesn't mean quality doesn't matter.
There is an increase in quality from DX9 to DX10, and again to DX11 (with tessellation, which is the topic); whether it's significant or not is your opinion.
The whole point of NVIDIA's "3D Vision" is also, in a way, to increase quality "to the eye"; if you don't care about that, why bother posting?
Eyefinity (the "stupid multi-monitor gaming", as you put it), on the other hand, is pretty much a fail-safe feature.
It does not require a high refresh-rate monitor, ultra high-end hardware or expensive shutter glasses.
I think I have gone off-topic far enough, I will leave it here.
custom (yet to be released to the public) version of Heaven Benchmark, v1.1), or is this an "OK" result where they did both benchmarks (on both cards, on the same day) using the same version of the software?

Multi monitor? Eyefinity? If it's not good, why did NVIDIA copy it? Why did they bother to release NVIDIA Surround?
Someday, Samsung, LG, Dell, or another LCD maker will release a monitor whose bezel can be taken off for multi-monitor setups.
Fermi will be out next week, and I think ATI is preparing a 5890 to counter it. That's why ATI prohibited its partners from releasing high-clocking 5870s: they're reserved for the 5890.
nVidia = fail.
I do hope the card is not a complete bust though, b/c I want some price wars.
PRICE WARS = CrossFire or even Tri-fire 5870's for me :rockout::rockout::rockout::rockout:
Comparing overclocked results to stock results in discussions like this always makes the person look silly.
which just so happens to be about 90% of benchmarks on the web.. That is, unless it's showing the difference between stock and an OC.
Its true ;)
The performance gains just aren't very high, considering how much more power this card uses at load.