Regarding your whole post: the Fermi design had better be much faster than ATI's current GPU. ATI has been refreshing the same old chip design for ages now, whereas NVIDIA's Fermi is based on a "brand new architecture" built from the ground up. And new architectures should be far faster than what is out today. We are talking about a 60% to 150% performance improvement over older designs.
Like R600, you mean? Let's compare: -40% versus +20% over the competition. Hmmm, would you say the R600 architecture was broken, when it's mostly the same architecture we find in Cypress? I don't know what exactly should happen, but I do know what usually happens, and that is that new architectures usually underperform, especially in the performance/watt department, because the underlying architecture (on top of which the shaders and clusters are going to be put) is usually overkill for that generation's performance and "active units". Maybe you don't know exactly how tall your building will be, but you know you are going to add floors over time, so you had better build the foundations well (i.e. overkill for that 4-story building, but enough for when the building is 64 floors high). Sometimes new architectures are better; they manage to excel because they overcome a serious bottleneck, and that more than makes up for the "unnecessary" foundations, like G80 or the ATI 9000 series, but that is not the norm, not at all.
And you know what? Fermi was designed for DX11 and tessellation, and it is faster, no, much faster when DX11 is actually used. That's what it was designed to do and that's what it does, but oh, I forgot how irrelevant DX11 is now that Nvidia is the better one at it. But hey, I'm biased, me and only me.
What I see is that the GTX 480, regardless of its power and heat issues, performs quite miserably overall, not to mention it's a so-called next-gen design. Is 40nm problematic? Yes indeed it is, but ATI's HD 5800 series is on the same process and outperforming anything NVIDIA has to offer, so I wouldn't put the blame on 40nm but rather on how damn COMPLEX NVIDIA chose to make Fermi.
It's fine to go nuts with a GPU design, but it also has to work right, and so far Fermi running at 97°C and sucking back over 300W of power is not running right.
Personally, I blame NVIDIA's CEO, because he stubbornly refused to listen to his CTO. But that is a different story.
Regarding performance, it's far, far, far from performing miserably; you seem to forget that it is faster than Cypress. Anyhow, I find it funny how strong the conclusions are that people have drawn from pre-release performance. Seriously, it's amazing, especially considering that in that comparison Cat 10.3 was used (not without its controversy), a driver that supposedly increases performance by as much as 15%, a driver released 6 months after Cypress's launch. And there have been previous drivers with similar improvements, and this is the 4th generation of the same architecture, not a new one at all, all of which have seen their improved drivers. But on the other hand, the first performance figures, based on pre-release drivers for a completely new architecture that functions quite differently from any previous GPU, are etched in stone, with no possibility of improvement ever. Not only that, they're a clear sign of architectural failure. Never mind.
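Just to put numbers on the asymmetry, here's a quick back-of-the-envelope sketch in Python. The ~15% uplift is the Cat 10.3 figure mentioned above; the 5% launch-day lead and the assumption that Fermi's drivers mature by a comparable amount are purely illustrative placeholders, not benchmark results:

```python
# Illustrative arithmetic only. The ~15% uplift is the Cat 10.3 figure
# cited above; the 5% launch lead and Fermi's assumed driver maturation
# are hypothetical placeholders, not measured data.
launch_lead = 1.05     # assumed: Fermi 5% ahead (pre-release drivers vs mature Cat 10.3)
driver_uplift = 1.15   # the ~15% gain Cypress accumulated over 6 months of drivers

# The launch comparison already includes Cypress's mature driver. If a
# brand-new architecture picks up a comparable uplift as its own drivers
# mature, the very same benchmark shifts noticeably:
matured_lead = launch_lead * driver_uplift
print(f"projected lead after similar maturation: {matured_lead:.2f}x")  # ~1.21x
```

Trivial arithmetic, of course, but it's exactly this comparison that people are treating as etched in stone.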
Regarding the manufacturing: of course Nvidia's design is complex and it shares its part of the blame, but don't forget that AMD had to retire a 140 mm² chip because it couldn't be manufactured, and despite having been producing Cypress since June, 3 months later, at release, they had fewer than 20,000 cards at launch. In the next 4 months, until January, they only managed to produce and sell 300,000 cards. That is the truth: the 40nm process is absolutely fucked up. And yes, AMD found a workaround, not a fix, a workaround, and they were lucky enough that the workaround worked, although they never managed to fix RV740, and Cypress has sold much less in 6 months than what the HD 4850 sold in a single month. Nvidia has not been so lucky (I say this with a little irony, btw), and on top of that they didn't do as much homework as AMD, but the fact remains that they didn't have to, since that's TSMC's job. That's why foundry companies exist, that's why they run tests and talk with chip designers about what can and cannot be done, that's why they sign contracts, and that's why, already 2 years ago, TSMC was advertising everywhere on their homepage that 40nm was on schedule, that they expected yields to be above 80% by year's end, etc. etc. etc.
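And on the die-size angle, a minimal sketch of the classic Poisson yield model, Y = exp(-D·A), shows why a broken process punishes a big, complex die far harder than a small one. The die areas are approximate public figures (the 140 mm² is RV740, from above) and the defect densities are assumed purely for illustration, not actual TSMC numbers:

```python
import math

# Poisson yield model: the fraction of defect-free dies falls off
# exponentially with die area. Defect densities below are assumed
# ("healthy" vs "troubled" process), not measured TSMC figures.
def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Expected fraction of good dies: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

dies = [("RV740   (~140 mm^2)", 140.0),
        ("Cypress (~330 mm^2)", 330.0),
        ("Fermi   (~530 mm^2)", 530.0)]

for name, area in dies:
    healthy  = poisson_yield(0.2, area)   # assumed mature process
    troubled = poisson_yield(1.0, area)   # assumed broken 40nm process
    print(f"{name}: ~{healthy:.0%} healthy, ~{troubled:.0%} troubled")
```

Under those assumptions, the same defect density that still leaves a quarter of RV740-sized dies usable leaves a Fermi-sized die with well under 1% good chips, which is exactly why both Nvidia's complexity and TSMC's process share the blame.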