Sunday, April 27th 2008

NVIDIA GeForce 9900 GTX and 9900 GTS GT200 Powered Graphics Slated for July Launch
NVIDIA is prepping the already confirmed GT200 graphics chip for an early July launch, several sources reported last week. To counter ATI and their nearly complete RV770 GPU, NVIDIA will launch GT200-powered GeForce 9900 graphics cards, hoping to take the performance crown from ATI once again using old but proven tactics. The first two models in the GeForce 9900 series will be the 9900 GTX and 9900 GTS, rather than the previously speculated GeForce 9900 GT.
Source:
theINQ
91 Comments on NVIDIA GeForce 9900 GTX and 9900 GTS GT200 Powered Graphics Slated for July Launch
Surely a 3870X2 beats a G92 GT/GTS, but again, that's not an even comparison. When we go even-stevens (G92 SLI vs 3870X2, or 3870 CF) then we have a reasonable comparison.
it should be 9900GTX vs 4870
and 9900GTS vs 4850
maybe 9900GTX/S SLI vs 4870X2, that's acceptable, but it is completely unfair to compare single to double.
the only way they could fairly compete is if they both sell at the same price point, i.e. the 9900GTX and 4870X2 both retail at $499
wolf.
- 96 TMUs? Seems right. Pair it with a clock increase for a twofold increase in performance.
- 384 SPs? Don't think so, not if the first point is true. G92 has 2 SPs per TMU; this would be 4 per TMU, and the balance is broken IMO. Unless they have other uses for them in mind. Ageia physics? It's very unlikely anyway. Another possibility is that they are doing like ATI and in reality it's 192 SPs with 2 ALUs each (one complex, one simple), or 96 with 4 each. Unlikely for the latter, but 192 x 2 is possible. It's also possible that the TMUs, unlike in G80/G92, are capable of FP16 like ATI's. VERY UNLIKELY; paired with that SP count it would make this card 3x faster per clock than G92. That's insane and won't happen, right?
- 64 ROPs? Definitely false. 32 ROPs maximum; 24 would suffice IMO.
- 512- and 448-bit memory interfaces? Could be, but it's too much bandwidth IMO, even while still using GDDR3. 384- and 320-bit would suffice IMO. Unless they have the massive physics thing in mind and need that extra bandwidth, which could make sense.
All in all, the specs are plausible, but some of them seem too high to be true. It would be a monster chip; some leaked specs said 1.8 billion transistors, and you do need all of them for those specs to be true, which doesn't seem profitable. Not to mention that such a chip would be almost 3x as fast as G92 or have too many spare shaders. I can see it already: the 9900s come out, and a few weeks later a 10,000 series will be in the rumor mill, and the cycle will continue :banghead:
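For what it's worth, the bandwidth argument above follows from simple arithmetic: peak memory bandwidth is the bus width in bytes times the effective data rate. Here's a quick back-of-the-envelope sketch in Python; the 2200 MT/s GDDR3 data rate is an assumed illustrative value, not a leaked spec.

```python
def bandwidth_gbs(bus_width_bits, effective_mtps):
    """Peak memory bandwidth in GB/s.

    bus_width_bits: memory interface width in bits (e.g. 512)
    effective_mtps: effective data rate in megatransfers per second
                    (an assumption here; GDDR3 is double data rate)
    """
    # bytes per transfer = bits / 8; MT/s * 1e6 = transfers/s; / 1e9 -> GB/s
    return bus_width_bits / 8 * effective_mtps * 1e6 / 1e9

# Compare the rumored interfaces against the widths the comment suggests
for bus in (512, 448, 384, 320):
    print(f"{bus}-bit @ 2200 MT/s: {bandwidth_gbs(bus, 2200):.1f} GB/s")
```

Even the 384-bit option yields well over 100 GB/s at that assumed data rate, which is the basis for calling 512-bit "too much bandwidth" unless physics workloads really do need it.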
So . . . by that rule of thumb, we can figure a release of ATI's new goods around June-July, and nVidia's new release around Sep-Oct . . . which sounds about right for when both companies tend to release a new series of cards.
IMHO, this news is really just to start drumming up support and the forum banter that will build consumers' expectations and start the anticipation process. There's no concrete info, per se, and if the new green-camp hardware is made out to sound better than ATI's already acknowledged HD4000 series, there are gonna be a ton of people who'll hold off on purchasing a new ATI card until they see the leaked benchmarks and specs of nVidia's newest demons. And I figure, too, we'll start seeing those leaked specs and benchies the closer we get to ATI's release.
So, let's all keep our panties dry until we get some solid info.
In that time frame we've seen a plethora of cards launched from both camps, even if some qualify as 'rehash.' Nevertheless, leadership has primarily belonged to Nvidia.
It's telling that even ATI's strategy of controlling the non-high-end market has been failing, which further supports the observation that Nvidia doesn't squirm no matter what ATI does.
Even if ATI pulls off some card that outdoes the current Nvidia 'top dog,' it won't do it by much, and it definitely won't last long.
Hence these 9900s probably won't be anything to kill for, but could be a worthwhile upgrade for some users, whether previously ATI or Nvidia owners.
Finally, bringing it to my last point: right now, and when the 9900s launch, the 9800 series is and will be at a good price, a very good price. I know there are people considering a 9800 card but worried it would be premature. It wouldn't be 'stupid' to get a 9800 now.
If these 9900s aren't going to be that spectacular, I'd recommend the 9800.
It seems to me that NVIDIA has a generally more efficient architecture right now, and historically until you get back to NV3x.
What's most amazing is how R420 and RV670 are both 16 pixel/clock GPUs. Granted, they've gotten more efficient with those resources, but NVIDIA is way beyond that.
Anyhow, I can't say nVidia has a "more efficient" GPU architecture . . . nVidia has a long-running reputation for running their cards at insane speeds, and having cards that run at thermonuclear temperatures . . . not ideal for those concerned about power consumption. And in many regards, nVidia hasn't made any drastic changes to their current GPU architecture in quite some time, whereas ATI has gone back to the drawing board on a few occasions. But both companies' GPUs excel at certain tasks over their competitors. Big reason why, during the PhysX bout Part I, ATI laid waste to nVidia, Ageia, and Intel alike. ATI's GPUs are 1337 at mathematics, and so with graphics engines that revolve around this as well. Sadly, there aren't many games like that, so nVidia takes the cake near about always. But in the few games that are coded in a way that puts ATI on top, it tends to be by a massive margin over nVidia.
I agree with Wile E's statement, every few years the crown gets swapped, and ATI is due to take the lead again. Based on the leaked specs of the HD4000 series, it appears that ATI intends to address the issues that were holding back the RV670 in performance.
Either way, the competition between the two this year in regards to the 4000/9900 series will be very close. The second half of this year will be very intriguing, and I'm damn-straight looking forward to it :rockout:
I think that the fact that NVIDIA wins in just about every application out there says volumes about their architectural efficiency. R600 was anything but efficient compared to G80. All that touted shader power that went nowhere, pathetic AA performance, excessive memory bandwidth, and extreme heat combined to make a product less power-efficient, slower, and uglier than the competition. And now we have the HD 3850/3870 outclassed by the 8800GT, 8800GS, and 9600GT. The 3870X2 has some serious quirks to it (as does the 9800GX2, though). At least RV670 was frugal on the juice. Physics on GPUs has gone absolutely nowhere aside from forgotten promises from NV & ATI. And there are synthetic tests out there that show ATI's shader design to be less efficient than NV's in multiple ways as well. Check out some of Digit-Life's reviews to see some signs of this. You'll find that while it can perform extremely well in some cases (geometry), it gets battered badly in others (SM4).
www.digit-life.com/articles3/video/rv670-part2.html
How good at math they are in some cases hardly matters when they are dramatically behind in texture fill rate anyway. The chips are just totally off balance. I'm worried about RV770 after seeing how it may still have 16 ROPs; that means a max of 16 pixels per clock output. Indeed, I like the ultra-aggressive mid-range products we're getting. But I think NVIDIA has a monster GPU in the works while ATI is going to rely on dual RV770s for the top end. I doubt that ATI can corner the high end with a dual-GPU design if NVIDIA does indeed have a 1.3 billion transistor chip coming. They just don't work out reliably/efficiently in all apps, as shown by current dual-GPU cards and SLI/CF. A big single-chip board doesn't have these driver issues and is going to be innately more efficient than two GPUs communicating externally.
True, for the most part a GPU being good at math has little to do with texturing performance. But, again, if ATI were to spend more time collaborating with game devs to the extent that nVidia has, it would make all the difference in the world, unless ATI just decides to revamp the architecture, which is what RV770 so far appears to be doing. Sure, it might still only have 16 ROPs, but if everything is clocked independently, I don't think that will be a limitation at all . . . only time will tell, though.
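The ROP debate above boils down to one product: theoretical pixel fill rate is ROP count times core clock. A minimal sketch in Python; both clock speeds below are purely illustrative assumptions (neither chip's final clocks were known), chosen only to show how a higher clock can partly offset a narrower ROP count.

```python
def fill_rate_gpixels(rops, core_mhz):
    """Theoretical pixel fill rate in Gpixels/s: ROPs * core clock."""
    # each ROP can output one pixel per clock cycle at best
    return rops * core_mhz * 1e6 / 1e9

# Assumed clocks for illustration only -- not leaked or confirmed specs.
rv770_guess = fill_rate_gpixels(16, 750)  # 16 ROPs, assumed 750 MHz core
gt200_guess = fill_rate_gpixels(32, 600)  # 32 ROPs, assumed 600 MHz core

print(f"16 ROPs @ 750 MHz (assumed): {rv770_guess:.1f} Gpixel/s")
print(f"32 ROPs @ 600 MHz (assumed): {gt200_guess:.1f} Gpixel/s")
```

Under these made-up clocks the 16-ROP part still trails a 32-ROP part even with a sizable clock advantage, which is why independent clocking alone may not fully close the gap.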
We haven't really seen anything "high-end" from ATI since they exclaimed they're staying out of that market. Sure, the 3870X2 is priced for that market, but it's more like two 3870s in one package, which still doesn't qualify as high-end, IMO. Although, if ATI gets back on the ball like they were during the X1900 series, watch that statement go out the window. I'm sure we'll see another 1337 card come from the red camp, but only if they feel it can compete with nVidia's 1337 beast.