Thursday, May 22nd 2008

Next-gen NVIDIA GeForce Specifications Unveiled

Now that we know what AMD/ATI are planning in their camp, it's NVIDIA's turn to show us what we should be prepared for. As verified by DailyTech, NVIDIA plans to refresh its GPU line-up on June 18th with two new video cards built around a new CUDA-enabled graphics core, codenamed D10U. Two models are expected to launch simultaneously: the flagship GeForce GTX 280 (D10U-30) and the GeForce GTX 260 (D10U-20). The first chip will use a 512-bit memory bus, 240 stream processors (versus 128 on the 9800 GTX) and support up to 1GB of memory. The GTX 260 will be a trimmed-down version with 192 stream processors, a 448-bit bus and up to 896MB of graphics memory. Both cards will use the PCI-E 2.0 interface and will support NVIDIA's 3-way SLI technology. NVIDIA also promises that the unified shaders of both cards will perform 50% faster than those of previous-generation cards. Compared to the upcoming AMD Radeon 4000 series, the D10U GPU lacks DirectX 10.1 support and is also limited to GDDR3 memory. NVIDIA's documentation does not list an estimated street price for the new cards.
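For context, the peak memory bandwidth implied by those bus widths depends on the memory data rate, which the documentation does not disclose; the short sketch below uses a hypothetical GDDR3 figure purely to illustrate the arithmetic.

```python
# A rough sketch (not from the article): peak bandwidth = bus width in bytes
# multiplied by the memory data rate. The 2200 MT/s GDDR3 rate below is a
# hypothetical placeholder, since NVIDIA has not published memory clocks.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    return (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9

print(peak_bandwidth_gb_s(512, 2200))  # GTX 280: 512-bit bus -> ~140.8 GB/s
print(peak_bandwidth_gb_s(448, 2200))  # GTX 260: 448-bit bus -> ~123.2 GB/s
```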
Source: DailyTech

87 Comments on Next-gen NVIDIA GeForce Specifications Unveiled

#26
HTC
IMHO, not having DX10.1 is a shot in the foot in the long run.

Sure, right now there aren't many games for it, but that will change, and when it does ATI will be prepared and NVIDIA won't be.
#27
PVTCaboose1337
Graphical Hacker
Caboose senses some fail. No DX 10.1... fail. GDDR3? Fail...
#28
farlex85
I wouldn't say fail. NVIDIA is going for a 512-bit bus, ATI is going for GDDR5 memory. Both are expensive ways to increase bandwidth, and both will be great. What's strange to me is that NVIDIA keeps using odd memory amounts. 896MB of memory? Odd, though I'm sure the math works out.
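(It does, if you assume the usual 32-bit-wide, 512Mbit GDDR3 chips; the chip density is an assumption, since the article doesn't specify it.)

```python
# Sanity check of the "odd" memory sizes: each GDDR3 chip sits on a 32-bit
# channel, so the bus width fixes the chip count, and the chip count times
# the (assumed) 64MB chip density fixes the total memory.

CHIP_WIDTH_BITS = 32
CHIP_SIZE_MB = 64  # 512Mbit parts -- an assumption, typical for GDDR3

def total_memory_mb(bus_width_bits: int) -> int:
    chips = bus_width_bits // CHIP_WIDTH_BITS
    return chips * CHIP_SIZE_MB

print(total_memory_mb(512))  # 16 chips -> 1024MB (GTX 280)
print(total_memory_mb(448))  # 14 chips -> 896MB  (GTX 260)
```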

And who cares about 10.1? We still don't have a native DX10 game, and I'm sure the improvements in 10.1 will be minor. I seem to remember a thread here where everyone seemed to think there wasn't much difference between DX9 and DX10, and now everyone's complaining about 10.1. Methinks some would rather find the bad and complain than find the good and rejoice. :shadedshu
#29
Valdez
largon: I'm talking about the DX10.1 implementation in the game, not in SP1. DX10.1 code in AC causes problems with nV GPUs. That's why it was removed by Ubisoft.
I never heard about that. I'm sure it would have been mentioned in the HardOCP article, but it isn't, so I don't think it's true.
#30
Animalpak
Guys, I found new pics of how the GTX 280 looks.
#31
DarkMatter
Valdez: The GT200 is not new, it's just an improved G80. The memory controller in G80 is not flexible, so they have to use GDDR3 in GT200 too.
It's because the GDDR5 supply won't be enough for both companies. It's not even enough for ATI; indeed, they dropped it from the HD4850 AND reduced the HD4870's frame buffer to 512MB for this same reason. If NVIDIA tried to fight for GDDR5 too, prices would go up >>> worse for consumers.
#32
DarkMatter
GT200 is as much a "new chip" as RV770 is. There's nothing new in RV770 that isn't in RV670 besides GDDR5 support. And that means nothing; it's just e-penis and marketing.

Indeed, if what has been said about the shader processors is true, GT200 is more "new" or "advanced/improved" relative to G92 than RV770 is relative to RV670. Making the SPs 50% more efficient and faster IS what I call IMPROVED architecture, not adding GDDR5 memory support that isn't going to be used anyway. I could say the same about the 512-bit memory interface, though.

What is it that has improved so much otherwise? SPs running faster than the core? 50% more of them? Double the TMUs?

No, time for a reality check, guys. There's no innovation in any of the new chips.
#33
Valdez
DarkMatter: It's because the GDDR5 supply won't be enough for both companies. It's not even enough for ATI; indeed, they dropped it from the HD4850 AND reduced the HD4870's frame buffer to 512MB for this same reason. If NVIDIA tried to fight for GDDR5 too, prices would go up >>> worse for consumers.
I don't think so. If the G80/G92/GT200 memory controller could use GDDR5, it could obviously use GDDR4. But we didn't see G80 or G92 with GDDR4. (I know GDDR4 isn't faster than GDDR3, but if G80 could use GDDR4 we would have seen it already, just like the 2GB 9600 GT; it doesn't make any sense, but many people don't know that. It's just marketing.)

(I know my English is a bit crap, but I hope you'll understand what I wrote.)
#34
largon
Valdez: I never heard about that. I'm sure it would have been mentioned in the HardOCP article, but it isn't, so I don't think it's true.
In the beginning, everything looked perfect. The DX10.1 API included in Assassin’s Creed enabled Anti-Aliasing in a single pass, which allowed ATI Radeon HD 3000 hardware (which supports DX10.1) to flaunt a competitive advantage over Nvidia (which supports only DX10.0). But Assassin's Creed had problems. We noticed various reports citing stability issues such as widescreen scaling, camera loops and crashes - mostly on Nvidia hardware.

(...)

So, what is it that convinced Ubisoft to drop the DirectX 10.1 code path? Here is the official explanation:

“We’re planning to release a patch for the PC version of Assassin’s Creed that addresses the majority of issues reported by fans. In addition to addressing reported glitches, the patch will remove support for DX10.1, since we need to rework its implementation. The performance gains seen by players who are currently playing Assassin’s Creed with a DX10.1 graphics card are in large part due to the fact that our implementation removes a render pass during post-effect which is costly.”
www.tgdaily.com/content/view/37326/98/
#35
CDdude55
Crazy 4 TPU!!!
Might have to pick me up one of those new NVIDIA cards. First I need my stimulus check to come. I also don't care if my CPU bottlenecks it; I'll still get the raw performance. ;)
#36
DrPepper
The Doctor is in the house
Animalpak: Completely wrong. GT200 is a FULL new GPU, and GDDR3 actually works better than GDDR5. In the end you get the same results, but GDDR3 is more exploitable. The differences between DX10 and DX10.1 are minimal! Games have only just begun to use DX10, and there are few of them!!
He said it SOUNDS better on paper. How do you know GDDR3 works better than GDDR5 when GDDR5 hasn't been implemented on a card yet? Also, if GDDR3 is not better than GDDR4, would it not be right to assume that GDDR3 isn't better than GDDR2? And just because all games don't use 10.1 doesn't mean NVIDIA shouldn't be innovative and implement it, because soon all games will adopt it, just like DirectX 9.0c.
#37
DarkMatter
Valdez: I don't think so. If the G80/G92/GT200 memory controller could use GDDR5, it could obviously use GDDR4. But we didn't see G80 or G92 with GDDR4. (I know GDDR4 isn't faster than GDDR3, but if G80 could use GDDR4 we would have seen it already, just like the 2GB 9600 GT; it doesn't make any sense, but many people don't know that. It's just marketing.)

(I know my English is a bit crap, but I hope you'll understand what I wrote.)
I can say the same thing I said about GDDR5, plus GDDR4 has proved to be no better than GDDR3. So why use it, if not for marketing? GDDR3 is as good and it's cheaper, so that's what you use. It's not incompatibility; they could use it if they wanted, but I'm sure they would have to pay royalties for a performance gain that doesn't exist. Same for GDDR5 and DX10.1.

People like to mention "conspiracy" theories about TWIMTBP, so I'm going to offer one that I have been thinking about for some time regarding DX10.1 and why NVIDIA doesn't want to implement it. There are many "hints" out there that suggest to me that MS and ATI developed the DX10.1 (and even DX10) specifications together, and it's very likely that ATI filed many patents on its implementation in hardware long before NVIDIA even knew anything about how DX10 was going to be. As some have suggested, DX10.1 is what DX10 was going to be before NVIDIA made their suggestions: what ATI wanted it to be. So now NVIDIA has to pay if they want to implement it. Don't ask me for proof, since I have the same amount as those who say NVIDIA pays developers to make NVIDIA hardware faster. That is: NONE.
#38
magibeg
What's with all this fighting? This should be a time of celebration, when we have another fancy/expensive card coming out that we can buy. If NVIDIA thought they needed faster RAM they would have used it; engineers are not stupid, after all. As for the whole DX10.1 thing, that sounds like a discussion for another thread, perhaps even in General Nonsense given the huge amount of flaming and fanboyism.
#39
DrPepper
The Doctor is in the house
:toast: Good idea, man. I hate getting wrapped up reading these posts and feeling I need to say something.
#41
largon
DarkMatter: Making the SPs 50% more efficient and faster IS what I call IMPROVED architecture (...)
There's no reason to believe the SPs are more efficient. In fact, the exact quote is:
NVIDIA also promises that the unified shaders of both cards will perform 50% faster than those of previous-generation cards.
"Faster" would more than likely just mean they run at 1.5x the frequency of the previous generation's shaders.
#42
DarkMatter
largon: There's no reason to believe the SPs are more efficient. In fact, the exact quote is:
"Faster" would more than likely just mean they run at 1.5x the frequency of the previous generation's shaders.
I don't know where I read it, but they said "efficient".
Also, in DailyTech at the OP link, they say:
NVIDIA documentation claims these second-generation unified shaders perform 50 percent better than the shaders found on the D9 cards released earlier this year.
If it were just a clock bump, you would say "run 50% faster", not "second-generation" and "perform 50% better". I'm not taking that as fact, but IMO NVIDIA and DailyTech are in the end saying more "efficient". The other site I mentioned (I can't remember which; I read 20+ tech sites each day) used the word "efficient". If that ends up being true, that's another story.

EDIT: Also, I think it's a lot more probable that the shaders are more "efficient" (e.g. by adding another ALU, I don't know) than that they run at 2400+ MHz. The chip is still 65nm, correct me if I'm wrong, and 2400MHz is not going to happen at 65nm on a reference design.
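To put a number on it: the only hard figure here is the 9800 GTX's 1688MHz shader clock, so a literal "runs 50% faster" reading would imply roughly the following (a back-of-the-envelope sketch, not anything from NVIDIA's documentation):

```python
# If "50% faster" only meant a 50% higher shader (hot) clock, the implied
# frequency would start from the previous generation's shader clock.

PREV_GEN_SHADER_MHZ = 1688         # GeForce 9800 GTX (G92) shader clock
implied_mhz = PREV_GEN_SHADER_MHZ * 1.5
print(implied_mhz)                 # ~2532 MHz -- the "2400+ MHz" that looks
                                   # implausible on a 65nm reference design
```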
#43
Valdez
DarkMatter: I can say the same thing I said about GDDR5, plus GDDR4 has proved to be no better than GDDR3. So why use it, if not for marketing? GDDR3 is as good and it's cheaper, so that's what you use. It's not incompatibility; they could use it if they wanted, but I'm sure they would have to pay royalties for a performance gain that doesn't exist. Same for GDDR5 and DX10.1. People like to mention "conspiracy" theories about TWIMTBP, so I'm going to offer one that I have been thinking about for some time regarding DX10.1 and why NVIDIA doesn't want to implement it. There are many "hints" out there that suggest to me that MS and ATI developed the DX10.1 (and even DX10) specifications together, and it's very likely that ATI filed many patents on its implementation in hardware long before NVIDIA even knew anything about how DX10 was going to be. As some have suggested, DX10.1 is what DX10 was going to be before NVIDIA made their suggestions: what ATI wanted it to be. So now NVIDIA has to pay if they want to implement it. Don't ask me for proof, since I have the same amount as those who say NVIDIA pays developers to make NVIDIA hardware faster. That is: NONE.
They would use GDDR4 if they could, just for marketing (if not on reference boards)! Just like the 2GB 9600 GT: it doesn't make any sense, but it sounds good -> people would buy it. An 8800 GT with 512MB of GDDR4 sounds good -> people would buy it (higher is better, a lot of people think).
But they can't use GDDR4, because G80 doesn't support GDDR4 (or GDDR5). I can't explain myself better.

The DX10(.1) specs were available to every manufacturer early on; I don't think they were kept secret from NVIDIA. Even S3 has a DX10.1 card.
#44
DarkMatter
Valdez: They would use GDDR4 if they could, just for marketing (if not on reference boards)! Just like the 2GB 9600 GT: it doesn't make any sense, but it sounds good -> people would buy it. An 8800 GT with 512MB of GDDR4 sounds good -> people would buy it (higher is better, a lot of people think).
But they can't use GDDR4, because G80 doesn't support GDDR4 (or GDDR5). I can't explain myself better.

The DX10(.1) specs were available to every manufacturer early on; I don't think they were kept secret from NVIDIA. Even S3 has a DX10.1 card.
You didn't understand me. G80/G92 can't use GDDR4, so those cards can't use GDDR4. NVIDIA CAN!!! There's nothing special about implementing a new memory type in the controller; they would do it if it were good for them, or necessary if you prefer to look at it that way.

About DX10.1, what exactly is "early"? I mean, how early in the scheme of things? I.e., 2 months is too much. There are even hints that MS didn't give NVIDIA everything necessary to make their DX10 drivers run well, because they were pissed off about what happened with the Xbox GPU.
#45
Valdez
DarkMatter: You didn't understand me. G80/G92 can't use GDDR4, so those cards can't use GDDR4. NVIDIA CAN!!! There's nothing special about implementing a new memory type in the controller; they would do it if it were good for them, or necessary if you prefer to look at it that way.
So then why is there no GDDR5 on the new cards?
#46
DarkMatter
Valdez: So then why is there no GDDR5 on the new cards?
I have said it already: availability and price. The price it would have if both companies had to fight over the few GDDR5 chips that are available.

If you are not convinced yet, think about this: why is ATI's HD4850 going to have GDDR3 memory? Why not even GDDR4? Answers above.
#47
Valdez
DarkMatter: I have said it already: availability and price. The price it would have if both companies had to fight over the few GDDR5 chips that are available.

If you are not convinced yet, think about this: why is ATI's HD4850 going to have GDDR3 memory? Why not even GDDR4? Answers above.
It is unlikely that the memory manufacturers would prefer the smaller company over the market-leading one. There are two logical answers for that: NVIDIA doesn't want GDDR5 because their product doesn't support it (perhaps it is a bit harder to redesign the G80 memory controller than you think), or it is cheaper for NVIDIA to produce a 512-bit card than to use much faster memory. I don't know. :)

Rumour has it there will be a GDDR5 version of the HD4850. 0.8ns GDDR4 doesn't make much sense in light of 0.8ns GDDR3, apart from lower power usage; GDDR3 has better latencies at the same clock.
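A quick way to see that point, assuming the usual convention that a chip's ns rating is its clock period and that both GDDR3 and GDDR4 transfer twice per clock:

```python
# Same ns rating -> same clock -> same peak data rate per pin, so 0.8ns GDDR4
# offers no bandwidth advantage over 0.8ns GDDR3; any difference would have to
# come from latency or power.

def data_rate_mt_s(ns_rating: float) -> float:
    clock_mhz = 1000.0 / ns_rating   # 0.8ns period -> 1250 MHz clock
    return clock_mhz * 2             # double data rate: two transfers per clock

print(data_rate_mt_s(0.8))           # 2500 MT/s for both memory types
```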
#48
Morgoth
Fueled by Sapphire
Sounds like the HD2900XT...
#49
DarkMatter
Valdez: It is unlikely that the memory manufacturers would prefer the smaller company over the market-leading one. There is one logical answer for that: NVIDIA doesn't want GDDR5 because their product doesn't support it; perhaps it is a bit harder to redesign the G80 memory controller than you think.

Rumour has it there will be a GDDR5 version of the HD4850. 0.8ns GDDR4 doesn't make much sense in light of 0.8ns GDDR3, apart from lower power usage; GDDR3 has better latencies at the same clock.
Memory manufacturers prefer money. That's all they want. They don't care who is buying their products as long as they pay and as long as they can sell ALL their STOCK. Since they have low stock and they don't have high production right now, ANY company could buy that whole amount, so they would sell it to the one that paid more. Could NVIDIA pay more than AMD? Maybe (well, sure), but why would they want to? It would make their cards more expensive, and what is worse for them is the REALLY LOW AVAILABILITY. Let's face it, NVIDIA has 66% market share; that's twice what ATI has. If availability is low for ATI, it's even lower for NVIDIA. Contrary to what people think, I don't believe NVIDIA cares much about ATI; they care a lot more about their market audience. GDDR5 would make their product a lot more expensive and scarce. They don't want that. Plain and simple.

And the HD4850 WON'T have a GDDR5 version from AMD. They gave partners the choice to use it, so partners can decide whether they want to pay the price premium or not. GDDR5's price is so high that AMD has decided it's not cost-effective for the HD4850. Now, knowing that the HD4850 is only an underclocked HD4870, think about GDDR5 and tell me in all honesty that it's not just a marketing strategy.
#50
largon
Morgoth: Sounds like the HD2900XT...
...with ~4x the performance.