It doesn't matter if it's a slower GPU. We don't buy GPUs, we buy graphics cards. ATI has the fastest video card on the planet.
Well sure, if you eliminate the 'price/performance' badge that so many people seem to wear, especially when discussing ATI.
But the X2 is as much of a 'flop' in that category as the 280 is/was, and this is where the variable of TWO GPUs DOES matter:
Two GPUs,
GDDR5,
How many shaders again? I can't count that high!
etc.
It offers no real-world advantage to the average consumer, or even to some not-so-average consumers. It's a piece of hardware that only 'shines' (and not by that much) in very acute situations that most people won't encounter.
It also draws 100 more watts, runs natively hotter (and puts out twice the heat, at that), and costs $100 more (which should be a moot point, but SOMEHOW price always gets involved, whether it's top-end products or not).
So...
Let's reverse the comparison.
280, single-GPU solution.
Lower power, heat and price.
Neck and neck with the X2 in average comparisons, at times slightly better or slightly worse. It falls short of the X2 by 10-25% (is that fair, on average?) in acute or synthetic situations.
We could keep going, pointing out that the 4870 is close to the 280 at times, costs less, and so on.
The key difference is that a 280 has more real-world purpose than an X2. Then, from a tech standpoint, the performance of the X2, considering its horsepower, is far from impressive. Tack on the cost, heat, power, etc., and it's even less impressive, and therefore just as much of a 'dog' as the 280.
In some ways, I think both sides failed.
Nvidia should have released the 280 at 55nm with better shaders.
ATI shouldn't have bothered with the X2, chasing some pointless 'crown,' and instead should have kept the performance of their 4870/4850 without giving the finger to heat, power, efficiency, etc.
In the end, if a 280 isn't enough for you, then an X2 won't be either. The only real-world application that will demand either of these cards is Crysis, more or less, and it's sad how everyone is using THAT as a benchmark when five minutes before they were bitching about how 'poorly' Crysis is coded. Yet even in Crysis the X2 will not give you that elusive 60 fps, or even a constant 40-45 - unless you turn things down or off, but then that defeats the purpose. But if you run a tuned custom config for Crysis (something like the rough example below), then you can get your 45+ FPS with all the eye candy, with EITHER card.
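For reference, here's roughly what I mean by a 'tuned custom config': an autoexec.cfg along these lines. The sys_spec cvar names and the 1-4 (Low to Very High) values are from memory, so treat this as a sketch and double-check them before using it:

    con_restricted = 0
    r_DisplayInfo = 1
    sys_spec_Texture = 4
    sys_spec_Shading = 4
    sys_spec_Shadows = 3
    sys_spec_PostProcessing = 3
    sys_spec_ObjectDetail = 3

The idea is to keep the stuff you actually notice (textures, shading) at Very High and quietly pull the real frame-rate killers (shadows, post-processing, object/view detail) down a notch. That's where the '45+ FPS with the eye candy' comes from on either card.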
Back to square one we go.
This graph pretty much sums up my understanding and perception of GPUs these days, in that many of them run the majority of 3D applications without fault.
The top two games are popular, modern and have typical requirements in terms of the power needed to run them. They are, in a word, average. All cards perform exceptionally well, easily achieving the elusive '60 fps' (or near it). The bottom two games are examples of programs that can heavily tax the same GPUs used in the previous games, but are also popular and modern, just not average, hence 'acute.'
Crysis seems self-explanatory. Good choice using ArmA; I was hoping someone would. Older engine, but the rules of GPU physics (not physics like PhysX) still apply. Lots of objects, shaders, long viewing distances and high resolutions can result in very crippled frame rates. It's interesting how well the 4870 does, but more importantly how well the X2 doesn't.