Ah...now use your logic against yourself:
"you do know they bin chips dont you?"
The durability of the components used in those complex graphics cards offsets their mathematically higher probability of failure (higher merely because of the part count). The probability is only mathematical, not real.
But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.
Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the failure rate of the PCBs themselves is in QA testing; each failure is money wasted. So the point is that Nvidia's costs are higher, and so are their prices.
Just like back in the day: the 9700/9800 Pro/XT was a 256-bit card and the 9500 Pro/9800 SE was 128-bit. Some old 9500s were just 9700/9800 Pro/XT boards (256-bit) with a BIOS that disabled half the memory bus and/or pipelines on the card (I have seen cards done both ways). They also had native Pro versions that were 128-bit and FAR cheaper to make, with less complex PCBs.
Blah, that last bit was a bit of a ramble. The point being that ATI's way this time around, as in the past, was to find a cheaper, more efficient way to do the same job.
GDDR5 on a 256-bit bus can have equivalent bandwidth to 512+-bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that Nvidia could, and likely will, move to GDDR5; they didn't use GDDR4 because of cost and low available supply (it also wasn't that much better than GDDR3).
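To put rough numbers on that bandwidth claim, here is a quick sketch. The per-pin data rates below are illustrative assumptions picked to show the equivalence, not actual card specs:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed rates: GDDR5 at 3.6 Gbit/s per pin, GDDR3 at 1.8 Gbit/s per pin.
gddr5_256 = bandwidth_gb_s(256, 3.6)  # 256-bit GDDR5
gddr3_512 = bandwidth_gb_s(512, 1.8)  # 512-bit GDDR3

print(gddr5_256, gddr3_512)  # both 115.2 GB/s: same bandwidth, half the traces
```

Since GDDR5 roughly doubles the per-pin data rate, a 256-bit board can match a 512-bit GDDR3 board while routing half as many memory traces, which is exactly where the PCB cost saving comes from.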
Blah, you treat me like a moron, and you use "flawed logic" to try to get around the situation.
You used the QX9770 (who the hell is gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1k for a CPU...).
An example that can show you what I mean would be the K10s: there are quad-cores and tri-cores.
The tri-cores are either weak or failed quads; AMD found a way to make money off flawed chips, and they still function just fine. But due to the complexity of a NATIVE quad-core, you ARE going to have higher failure rates than if you went with multiple dual-core dies in one package.
In that regard, Intel's method was smarter to a point (for the home-user market), since it was cheaper and had lower failure rates (they could always sell failed duals as lower single-core models). AMD even admitted that for the non-server market they should have done an Intel-style setup for the first run, then moved to native on the second batch.
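The tri-core salvage argument comes down to simple probability, which a toy yield model can sketch. The per-core defect rate here is a made-up assumption just to show the shape of the math, not AMD's or Intel's real numbers:

```python
from math import comb

p_core = 0.9  # assumed probability one core comes out defect-free (made up)

# Native quad-core die: all 4 cores must be good to sell it as a quad.
p_quad = p_core ** 4                             # ~0.656
# Exactly one bad core: salvageable as a tri-core instead of scrap.
p_tri = comb(4, 1) * p_core ** 3 * (1 - p_core)  # ~0.292
p_scrap = 1 - p_quad - p_tri                     # ~0.052

# Intel-style multi-die package: each dual-core die only needs 2 good cores,
# and bad dies are weeded out BEFORE two good ones are paired in a package.
p_dual_die = p_core ** 2                         # 0.81 per die

print(f"quad {p_quad:.3f}, tri {p_tri:.3f}, dual die {p_dual_die:.2f}")
```

The point of the sketch: a native quad eats the failure probability of all four cores at once, while the two-die approach never pairs a bad die in the first place, which is why the tri-core bin exists at all.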
And I have a feeling Nvidia will end up moving to less complex PCBs with GDDR5 in their next real architecture change (non-refresh).
We shall see. I just know that, price for performance, I would take a 4800 over the GT200 or G92 (that is, if I hadn't bought the card I got before the 3800s were out).