Thursday, May 22nd 2008
Next-gen NVIDIA GeForce Specifications Unveiled
After learning what AMD/ATI are planning in their camp, it's NVIDIA's turn to show us what to be prepared for. As verified by DailyTech, NVIDIA plans to refresh its GPU line-up on June 18th with two new video cards that will feature the first CUDA-enabled graphics core, codenamed D10U. Two models are expected to launch simultaneously: the flagship GeForce GTX 280 (D10U-30) and the GeForce GTX 260 (D10U-20). The first chip will use a 512-bit memory bus, 240 stream processors (versus 128 on the 9800 GTX) and support for up to 1GB of memory. The GTX 260 will be a trimmed-down version with 192 stream processors, a 448-bit bus and up to 896MB of graphics memory. Both cards will use the PCI-Express 2.0 interface and will support NVIDIA's 3-way SLI technology. NVIDIA also promises that the unified shaders of both cards will perform 50% faster than those of previous-generation cards. Compared to the upcoming AMD Radeon 4000 series, the D10U GPU lacks DirectX 10.1 support and is also limited to GDDR3 memory. NVIDIA's documentation does not list an estimated street price for the new cards.
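For rough context on what those bus widths imply, here is a minimal back-of-the-envelope sketch of theoretical memory bandwidth. The effective memory clocks used below are illustrative assumptions (typical GDDR3/GDDR5 data rates of the period), not figures from NVIDIA's or AMD's documentation.

# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# The clocks below are placeholder assumptions, not announced specifications.

def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

print(bandwidth_gb_s(512, 2200))  # GTX 280-class: 512-bit GDDR3 at ~2.2 Gbps -> ~141 GB/s
print(bandwidth_gb_s(448, 2000))  # GTX 260-class: 448-bit GDDR3 at ~2.0 Gbps -> ~112 GB/s
print(bandwidth_gb_s(256, 3600))  # a 256-bit GDDR5 card at ~3.6 Gbps -> ~115 GB/s

The 448-bit bus would also explain the odd 896MB figure: fourteen 32-bit channels with 64MB per channel.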
Source: DailyTech
87 Comments on Next-gen NVIDIA GeForce Specifications Unveiled
Sure, right now there aren't many games for it but that will change and when it does, ATI will be prepared but nVidia won't.
And who cares about 10.1? We still don't have a native DX10 game, and I'm sure the improvements in 10.1 will be minor. I seem to remember a thread here where everyone seemed to think there wasn't much difference between DX9 and DX10, and now everyone's complaining about 10.1. Methinks some would rather find the bad and complain than find the good and rejoice. :shadedshu
Indeed, if what has been said about the shader processors is true, GT200 is more "new" or "advanced/improved" relative to G92 than RV770 is relative to RV670. Making the SPs 50% more efficient and faster IS what I call IMPROVED architecture, not adding GDDR5 memory support that isn't going to be used anyway. I could say the same about the 512-bit memory interface, though.
What is it that has improved so much otherwise? SPs running faster than the core? 50% more of them? Double the TMUs?
No, time for a reality check, guys. There's no innovation in any of the new chips.
(I know my English is a bit crap, but I hope you'll understand what I wrote.)
Also, at the DailyTech link in the OP, they call the shaders "second-generation" and say they "perform 50% better". You would just say "run 50% faster" and not "second-generation" and "perform 50% better" if that were the case. I'm not taking that as fact, but IMO NVIDIA and DailyTech are in the end saying "more efficient". The other site I mentioned (I can't remember which, I read 20+ tech sites each day) used the word "efficient". If that ends up being true, that's another story.
EDIT: Also, I think it's a lot more probable that the shaders are more "efficient" (e.g. by adding another ALU, I don't know) than that they're running at 2400+ MHz. The card is still 65nm, correct me if I'm wrong, and 2400 MHz is not going to happen at 65nm on a reference design.
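To illustrate the difference between the two readings, here is a minimal sketch. The baseline figures (128 SPs at ~1688 MHz issuing 3 FLOPs per SP per clock, roughly a 9800 GTX) and the scaled GT200 numbers are assumptions for illustration only; none of them come from the article.

def shader_gflops(num_sps, shader_clock_mhz, flops_per_sp_per_clock):
    """Theoretical peak shader throughput in GFLOPS."""
    return num_sps * shader_clock_mhz * flops_per_sp_per_clock / 1000.0

# Hypothetical 9800 GTX-class baseline: 128 SPs, ~1688 MHz, 3 FLOPs/SP/clock
baseline = shader_gflops(128, 1688, 3)

# Reading 1: "50% faster" = 50% higher shader clock (the ~2500 MHz scenario)
clock_bump = shader_gflops(240, 1688 * 1.5, 3)

# Reading 2: "50% faster" = 50% more work per SP per clock ("more efficient")
efficiency_bump = shader_gflops(240, 1688, 3 * 1.5)

print(baseline, clock_bump, efficiency_bump)  # ~648, ~1823, ~1823 GFLOPS

Peak throughput comes out the same either way; the difference is that the first reading requires a ~2500 MHz shader clock, which is exactly the part being doubted here.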
But they can't use GDDR4, because G80 doesn't support GDDR4 (or GDDR5). I can't explain myself better than that.
The DX10(.1) specs were available to every manufacturer early on; I don't think they were kept secret from NVIDIA. Even S3 has a DX10.1 card.
About DX10.1: what exactly is "early"? I mean, how early in the scheme of things? E.g. two months isn't much. There are even hints that MS didn't give NVIDIA everything necessary to make their DX10 drivers run well, because they were pissed off about what happened with the Xbox GPU.
If you're not convinced already, think about this: why is ATI's HD4850 going to have GDDR3 memory? Why not at least GDDR4? Answers above.
Rumours say there will be a GDDR5 version of the HD4850. 0.8ns GDDR4 doesn't make much sense in light of 0.8ns GDDR3, apart from the lower power usage. GDDR3 has better latencies at the same clock.
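For anyone wondering what a "0.8ns" rating translates to, here is a quick sketch assuming the usual cycle-time-to-clock conversion and double-data-rate transfers (my assumption, not something stated above).

# Convert a memory chip's rated cycle time to its rated clock and data rate.
# GDDR3/GDDR4 transfer data twice per clock (double data rate).

cycle_time_ns = 0.8
rated_clock_mhz = 1000 / cycle_time_ns   # 1250 MHz rated clock
effective_mt_s = rated_clock_mhz * 2     # 2500 MT/s effective

print(f"{cycle_time_ns} ns -> {rated_clock_mhz:.0f} MHz, {effective_mt_s:.0f} MT/s effective")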
And the HD4850 WON'T have a GDDR5 version from AMD. They gave partners the choice to use it, so partners can decide whether they want to pay the price premium or not. GDDR5 prices are so high that AMD has decided it's not cost effective for the HD4850. Now, knowing that it's only an underclocked HD4870, think about GDDR5 and tell me in all honesty that it's not just a marketing strategy.