^^ I agree with all that... but there is a slight problem... where is this mythical unicorn we call GF100/GT300/w/e..?
The official word is still late November for the release, with availability at the end of Q4. There's no sense doubting that for the time being, IMO. No info means they want to keep it secret as much as they can, so that AMD has to finalize the HD5890 (or whatever card) on their own, and not based on making it faster than any of Nvidia's cards.
So IMO they will not show it until review time, or until AMD shows what the HD5890 will be, if the latter is going to be released this year. Nvidia still has to decide final clocks, and probably wants the GTX380 to be faster than the supposed HD5890 while the GTX360 beats the HD5870, repeating what happened the last two generations. AMD, on the other hand, is probably waiting for Nvidia to reveal Fermi's performance or final specs before they finish the HD5890's specs.

Right now Nvidia has a very big advantage: if Fermi has the same OCing headroom as previous Nvidia chips (15-20% OC), Nvidia can play a lot with where their cards land in the stack. The same could have happened with GT200 if it had released after the HD4xxx series. GT200's OC headroom was around 20% on stock cooling, while HD48xx's was around 10%. If Nvidia had released GT200 after the RV770 launch, they could have decided to ship the cards with 10% higher clocks, making the GTX260 10% faster than the HD4870 while retaining roughly the same OC potential. The story would have been very different than it was. Remember that ATI hid the fact that RV770 had 800 SPs instead of the rumored 640 until the last minute, and that's what caught Nvidia off-guard.
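To make that counterfactual concrete, here's a minimal back-of-the-envelope sketch (Python). The headroom numbers are just the assumptions from the paragraph above, and linear clock-to-performance scaling is itself an assumption:

```python
# Sketch of the clock-positioning argument. Every number here is this
# post's assumption -- ~20% OC headroom for GT200 on stock cooling,
# ~10% for HD48xx -- and performance is assumed to scale linearly with
# clock, which is only roughly true.

GT200_HEADROOM = 0.20    # assumed max stable OC over original stock clocks
RV770_HEADROOM = 0.10

def reposition(factory_bump: float, headroom: float) -> tuple[float, float]:
    """Relative performance after a factory clock bump, and OC room left."""
    shipped_perf = 1.0 + factory_bump                       # vs. original stock
    residual_oc = (1.0 + headroom) / (1.0 + factory_bump) - 1.0
    return shipped_perf, residual_oc

# Counterfactual: GTX260 shipped with clocks 10% higher, HD4870 unchanged.
gtx_perf, gtx_left = reposition(0.10, GT200_HEADROOM)
hd_perf, hd_left = reposition(0.00, RV770_HEADROOM)

print(f"GTX260 +10% clocks: {gtx_perf:.2f}x perf, {gtx_left:.1%} OC left")
print(f"HD4870 at stock:    {hd_perf:.2f}x perf, {hd_left:.1%} OC left")
# -> 1.10x with ~9.1% left vs. 1.00x with 10.0% left: ~10% faster at
#    retail while keeping essentially the same user OC potential.
```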
Now, the HD58xx doesn't overclock particularly well (no better than previous ATI generations), so if Fermi retains Nvidia's track record of OC potential*, that's a very powerful weapon to have if they need it.
* And that's a very long track record, since the GF6800 days: always around 20% OC headroom, something that IMO is not coincidental but designed so that their partners can release 10% factory-OC cards that still have a bit of OCing potential left. That makes partners happy, because it's a marketing weapon they can use.
EDIT: I made this chart for myself and decided to share it. It's all speculation, but I think it's very realistic.
My conclusions based on the chart:
1. Nvidia can relax their harvesting strategy from 2 disabled clusters (GTX360) to 3 (GTX350), greatly improving yields, and still be competitive with the HD5870 and HD5850 (GTX320).
2. That's assuming the Fermi architecture suffers the same efficiency hit as the HD5xxx cards. Based on past generations (G92-to-GT200), Nvidia managed to "double" performance to the same degree that AMD did, but while AMD used 2.5x the number of SPs (and TMUs), Nvidia used only 1.87x the SPs and 1.5x the TMUs. The HD5870, with twice the units, only manages a 40-50% improvement over the equally clocked HD4890, so the per-unit scaling efficiency (speedup divided by unit increase) is ~70-75%. Now we are talking about a 2.16x increase in SPs for Fermi; add to that a possibly higher efficiency of, let's say, 80-90% (still less than G92-to-GT200) and we would have a much, much faster architecture (worked out in the sketch after this list).
3. That's assuming 1.5 GHz SPs on all cards, which IMHO is a little conservative. The smaller cards especially shouldn't have a problem reaching 1.6-1.8 GHz on 40nm. I'd expect the same kind of clock improvement as AMD got when moving to 40nm: if we compare the HD5870 to the HD4870 and assume they have a 1000 MHz HD5890 up their sleeve, that's a ~15% jump. The GTX285's shaders run at ~1500 MHz, so Nvidia could plausibly reach ~1725 MHz.
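Putting the arithmetic from conclusions 2 and 3 in one place, a quick sketch; every input is an assumption stated above, not a measurement:

```python
# Rough sketch of the scaling math in conclusions 2 and 3. All inputs
# are this post's assumptions -- unit ratios, the 40-50% observed
# speedup, the 80-90% efficiency guess, clock figures -- not data.

# Conclusion 2: per-unit scaling efficiency of HD5870 over HD4890.
UNIT_RATIO = 2.0                      # HD5870 has 2x the SPs/TMUs
for speedup in (1.40, 1.50):          # observed 40-50% gain at equal clocks
    print(f"{speedup:.2f}x speedup -> {speedup / UNIT_RATIO:.0%} efficiency")
# -> 70-75%, the "~70-75%" figure above.

# Fermi projection: 2.16x the SPs at an assumed 80-90% efficiency.
SP_RATIO = 2.16
for eff in (0.80, 0.90):
    print(f"At {eff:.0%} efficiency: {SP_RATIO * eff:.2f}x shader throughput")
# -> roughly 1.73-1.94x a GTX285, before any clock gains.

# Conclusion 3: apply AMD's ~15% 40nm clock jump to the GTX285's
# ~1500 MHz shader clock.
print(f"Projected shader clock: {1500 * 1.15:.0f} MHz")   # -> 1725 MHz
```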