# Next-gen NVIDIA GeForce Specifications Unveiled



## malware (May 22, 2008)

Now that we know what AMD/ATI are planning in their camp, it's NVIDIA's turn to show us what we should be prepared for. Verified by DailyTech, NVIDIA plans on refreshing its GPU line-up on June 18th with two new video cards that will feature its next-generation CUDA-enabled graphics core, codenamed D10U. Two models are expected to be launched simultaneously: the flagship GeForce GTX 280 (D10U-30) and the GeForce GTX 260 (D10U-20). The first chip will utilize a 512-bit memory bus, 240 stream processors (versus 128 on the 9800 GTX) and support for up to 1 GB of memory. The GTX 260 will be a trimmed-down version with 192 stream processors, a 448-bit bus and up to 896 MB of graphics memory. Both cards will use the PCI-Express 2.0 interface and will support NVIDIA's 3-way SLI technology. NVIDIA also promises that the unified shaders of both cards are to perform 50% faster than previous generation cards. Compared to the upcoming AMD Radeon 4000 series, the D10U GPU lacks DirectX 10.1 support and is also limited to GDDR3 memory. NVIDIA's documentation does not list an estimated street price for the new cards.

*View at TechPowerUp Main Site*
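Those bus widths translate directly into bandwidth headroom. A rough sketch of the peak numbers, assuming a hypothetical 2.2 Gbps effective GDDR3 data rate (NVIDIA's documentation does not list memory clocks, so the rate here is an assumption for illustration):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak theoretical bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * data_rate_gtps

# Assumed 2.2 Gbps effective GDDR3 -- actual launch clocks are not public yet
print(round(peak_bandwidth_gbs(512, 2.2), 1))  # GTX 280 (512-bit): 140.8 GB/s
print(round(peak_bandwidth_gbs(448, 2.2), 1))  # GTX 260 (448-bit): 123.2 GB/s
print(round(peak_bandwidth_gbs(256, 2.2), 1))  # a 256-bit card at the same rate: 70.4 GB/s
```

At the same memory clock, the wider buses alone would roughly double the bandwidth of a 256-bit design, which is why NVIDIA can stay on GDDR3 here.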


----------



## HaZe303 (May 22, 2008)

Sounds like I'm getting an ATI card this time; to me the new ATI R700 cards sound much better on paper than the GT200?? I might be wrong, but it sounds like the nV card is just an evolution of G92 and not a new GPU? I mean, still GDDR3; they could at least move to GDDR4, preferably GDDR5 like ATI. And still no DX10.1?? No, I'm getting me a 4870 this summer; sounds like I'll be getting it for cheap as well. Maybe I can finally afford a CrossFire system??


----------



## tkpenalty (May 22, 2008)

Malware, why did you ignore my links that had similar information posted (release date), sent to you half a month ago?


----------



## spud107 (May 22, 2008)

Would have thought 10.1 would have been implemented; a bit like having DX9.0b over DX9.0c?


----------



## largon (May 22, 2008)

As if RV770 was a "new GPU"... Whatever that means. The architecture of G80/G92 is superior to anything out there. Why on earth would you think nV should dump it? 
And DX10.1 is hardly worth mentioning; how many DX10.1 titles are there, again?


----------



## malware (May 22, 2008)

tkpenalty said:


> Malware, why did you ignore my links that had similar information posted (release date), sent to you half a month ago?



The only recent PM I have from you is the one with the GIGABYTE Extreme motherboard?


----------



## Edito (May 22, 2008)

Maybe they just don't see any performance improvement from GDDR4 over GDDR3 either. Look at the 8800 GTS G92: it has spectacular performance but still uses GDDR3. When the time comes, they will make good use of it, I believe. ATI is using it, but they aren't using it well, because we just can't see any performance improvement... Don't get me wrong, it's just what I think...


----------



## btarunr (May 22, 2008)

HaZe303 said:


> Sounds like I'm getting an ATI card this time; to me the new ATI R700 cards sound much better on paper than the GT200??



That's always the case. We're drenched in amazing numbers such as "512-bit", "1 GB GDDR4", "320 SPs".

No, I don't think the HD4870 can beat the GTX 280, in raw performance at least; maybe in price, power and other factors.


----------



## spud107 (May 22, 2008)

There would probably be more if nV was using 10.1.




largon said:


> As if RV770 was a "new GPU"... Whatever that means. The architecture of G80/G92 is superior to anything out there. Why on earth would you think nV should dump it?
> And DX10.1 is hardly worth mentioning; how many DX10.1 titles are there, again?


----------



## largon (May 22, 2008)

GDDR4 could have been a smart move, as it is much more power efficient than GDDR3. Those 16/14 chips of GDDR3 on the GTX 280/GTX 260 are going to suck stupid amounts of power: something like a freaking 60-80 W for the GDDR3 alone... 

65 nm, instead of 55 nm, is another problem and adds unnecessary power consumption. 

And yet again, nV fails at creating a practical PCB layout. The board used for the GTX 280/260 is pure horror.


----------



## kylew (May 22, 2008)

largon said:


> As if RV770 was a "new GPU"... Whatever that means. The architecture of G80/G92 is superior to anything out there. Why on earth would you think nV should dump it?
> And DX10.1 is hardly worth mentioning; how many DX10.1 titles are there, again?



Well, you KNOW why there's very little DX10.1 implementation: look at Assassin's Creed. NV moaned, stamped their feet, and so on to get it removed. DX10.1 is "insignificant" because NV wants it to be. In reality, NV can't implement it, whereas DX10.1 on the 3800s shows massive performance gains when enabled.


----------



## Animalpak (May 22, 2008)

btarunr said:


> That's always the case. We're drenched in amazing numbers such as "512-bit", "1 GB GDDR4", "320 SPs".
> 
> *No, I don't think the HD4870 can beat the GTX 280, in raw performance at least; maybe in price, power and other factors*.






I agree 

GT200 rocks !


----------



## JAKra (May 22, 2008)

*Dx10.1 Upgrade?*

Hi!

I have one question. If DX10.1 can be removed by a patch, does it mean it works the other way around too? Like upgrading Crysis to DX10.1? Or any other DX10 title.
That would be nice, and I presume not too hard to accomplish (technically).


----------



## Animalpak (May 22, 2008)

HaZe303 said:


> Sounds like I'm getting an ATI card this time; to me the new ATI R700 cards sound much better on paper than the GT200?? I might be wrong, but it sounds like the nV card is just an evolution of G92 and not a new GPU? I mean, still GDDR3; they could at least move to GDDR4, preferably GDDR5 like ATI. And still no DX10.1?? No, I'm getting me a 4870 this summer; sounds like I'll be getting it for cheap as well. Maybe I can finally afford a CrossFire system??



Completely wrong. 


GT200 is a FULLY new GPU, and GDDR3 arguably works better than GDDR5. In the end you get the same results, but GDDR3 is more exploitable.


The differences between DX10 and DX10.1 are minimal! Games have only just begun to use DX10, and there are few of them!!


----------



## Valdez (May 22, 2008)

JAKra said:


> Hi!
> 
> I have one question. If DX10.1 can be removed by a patch, does it mean it works the other way around too? Like upgrading Crysis to DX10.1? Or any other DX10 title.
> That would be nice, and I presume not too hard to accomplish (technically).



nVidia owns Crytek now, so there will be no DX10.1 support (Crytek officially confirmed this).


----------



## FilipM (May 22, 2008)

It looks great on paper, but how will your wallet look when you buy one of these?

Does anyone know the price, or have a hint?


----------



## largon (May 22, 2008)

*JAKra*,
Unlikely; but really, anything's possible. Of course it's way easier to _cut_ than to _add_ something. 



kylew said:


> (...) NV can't implement it whereas DX10.1 on the 3800s shows massive performance gains when enabled.


DX10.1 in AC allows a performance boost *when AA is used*. Sure. 
But then again, it also causes incompatibility with nV GPUs that only support DX10. 

Choose now: which would you fix?





Valdez said:


> nVidia owns Crytek now, so there will be no DX10.1 support (Crytek officially confirmed this).


Link please.


----------



## Valdez (May 22, 2008)

Animalpak said:


> Completely wrong.
> 
> 
> GT200 is a FULLY new GPU, and GDDR3 arguably works better than GDDR5. In the end you get the same results, but GDDR3 is more exploitable.
> ...



The GT200 is not new; it's just an improved G80. The memory controller in the G80 is not flexible, so they have to use GDDR3 in the GT200 too.


----------



## Valdez (May 22, 2008)

largon said:


> *JAKra*,
> Unlikely, but really, anything's possible but ofcourse it's way easier to _cut_ rather than _add_ something.
> 
> 
> ...



It's not incompatible; it's just that when Vista SP1 is installed, nV GPUs don't use the DX10.1 features. But there is no incompatibility.


----------



## Animalpak (May 22, 2008)

Valdez said:


> The GT200 is not new; it's just an improved G80. The memory controller in the G80 is not flexible, so they have to use GDDR3 in the GT200 too.




Sure? Then when will a new GPU come out? They confirmed to everybody that it was new!! 

DAMN :mad:


----------



## Valdez (May 22, 2008)

Animalpak said:


> Sure? Then when will a new GPU come out? They confirmed to everybody that it was new!!
> 
> DAMN :mad:



Don't be sad; the GT200 will be the fastest GPU ever released. The 9900 GTX will be a brutal card, much faster than the 8800 Ultra/9800 GTX.


----------



## Exavier (May 22, 2008)

I very much doubt it's a new *G80*, as the most recent cards are G92-based...

I would also discourage the fanboy attitudes already emerging in this thread... get whichever is best; they're both unreleased as yet...

Also, this comes out on my birthday.
Mega lol.


----------



## largon (May 22, 2008)

*Exavier*,
G200 is an evolved G92, which is an evolved G80. So it's more like a "new G80", as it's targeted at the ultra-high-end rather than the performance sector like G92. 


Valdez said:


> It's not incompatible; it's just that when Vista SP1 is installed, nV GPUs don't use the DX10.1 features. But there is no incompatibility.


Well, obviously nV chips _are_ incompatible with Ubisoft's DX10.1 code, as removing it removes the problems with nV GPUs.


----------



## Valdez (May 22, 2008)

Exavier said:


> I very much doubt it's a new *g80* as the most recent cards are g92..



G92 is just a revised G80, as RV670 is a revised R600, and RV770 is an improved RV670.


----------



## Valdez (May 22, 2008)

largon said:


> Well obviously nV chips _are_ incompatible with Ubisoft's DX10.1 as removing that removes problems with nV GPUs.



http://hardocp.com/article.html?art=MTQ5MywxLCxoZW50aHVzaWFzdA==

When SP1 is installed, NVIDIA cards perform the same as with no SP1 installed. There is no incompatibility; they run fine with DX10.1, they just don't use its features. So they run in DX10 mode even if DX10.1 is installed.


----------



## largon (May 22, 2008)

I'm talking about the DX10.1 implementation in the game, not in SP1. DX10.1 code in AC _causes_ problems with nV GPUs. That's why it was removed _by Ubisoft_.


----------



## HTC (May 22, 2008)

IMHO, not having DX10.1 is a shot in the foot in the long run.

Sure, right now there aren't many games for it, but that will change, and when it does ATI will be prepared but nVidia won't.


----------



## PVTCaboose1337 (May 22, 2008)

Caboose senses some fail.  No DX 10.1...  fail.  GDDR3?  Fail...


----------



## farlex85 (May 22, 2008)

I wouldn't say fail. NVIDIA is getting a 512-bit bus; ATI is going for GDDR5 memory. Both are expensive ways to increase bandwidth, and both will be great. What's strange to me is that NVIDIA keeps making strange memory amounts. 896 MB of memory? Odd; I'm sure the math works out though.

And who cares about 10.1? We still don't have a native DX10 game, and I'm sure the improvements in 10.1 will be minor. I seem to remember a thread here where everyone seemed to think there wasn't much difference between DX9 and DX10, and now everyone's complaining about 10.1. Methinks some would rather find the bad and complain than find the good and rejoice. :shadedshu
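The math does work out. GDDR3 chips expose a 32-bit interface, so the bus width fixes the chip count (16 chips on the GTX 280, 14 on the GTX 260, as mentioned earlier in the thread), and capacity follows from that. A quick sketch, assuming 512-Mbit (64 MB) chips:

```python
def framebuffer_mb(bus_width_bits: int, chip_mb: int = 64, chip_bus_bits: int = 32) -> int:
    """One memory chip per 32-bit slice of the bus; total capacity scales with chip count."""
    chips = bus_width_bits // chip_bus_bits
    return chips * chip_mb

print(framebuffer_mb(512))  # GTX 280: 16 chips -> 1024 MB
print(framebuffer_mb(448))  # GTX 260: 14 chips -> 896 MB
```

The "odd" 896 MB is just 14 chips instead of 16: trimming the bus from 512-bit to 448-bit drops two chips and their capacity along with them.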


----------



## Valdez (May 22, 2008)

largon said:


> I'm talking about the DX10.1 implementation in the game, not in SP1. DX10.1 code in AC _causes_ problems with nV GPUs. That's why it was removed _by Ubisoft_.



I never heard about that. I'm sure it would be mentioned in the HardOCP article, but it's not, so I don't think it's true.


----------



## Animalpak (May 22, 2008)

Guys, I found new pics. This is how the GTX 280 looks.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> The GT200 is not new; it's just an improved G80. The memory controller in the G80 is not flexible, so they have to use GDDR3 in the GT200 too.



It's because GDDR5 supply won't be enough for both companies. It's not even enough for ATI; indeed, they dropped it from the HD4850 AND reduced the HD4870's frame buffer to 512 MB because of this same thing. If NVIDIA tried to fight for GDDR5 too, prices would go up >>> worse for consumers.


----------



## DarkMatter (May 22, 2008)

GT200 is as much a "new chip" as RV770 is. There's nothing new in RV770 that isn't in RV670 besides GDDR5 support. And that means nothing; it's just e-penis and marketing.

Indeed, if what has been said about the shader processors is true, GT200 is more "new" or "advanced/improved" relative to G92 than RV770 is to RV670. Making SPs 50% more efficient and faster IS what I call IMPROVED architecture, not adding GDDR5 memory support that is not going to be used anyway. I could say the same about the 512-bit memory interface, though.

What is it that has improved so much otherwise? SPs running faster than the core? 50% more of them? Double the TMUs?

No, time for a reality check, guys. There's no innovation in any of the new chips.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> It's because GDDR5 supply won't be enough for both companies. It's not even enough for Ati, indeed they droped it from HD4850 AND reduced HD4870's frame buffer to 512 because of this same thing. If Nvidia tried to fight to get GDDR5 too, prices would go up >>> worse for consumers.



I don't think so. If the G80/G92/GT200 memory controller could use GDDR5, it's obvious it could use GDDR4. But we didn't see the G80 or G92 with GDDR4. (I know GDDR4 isn't faster than GDDR3, but if the G80 could use GDDR4 we would've seen that already, just like the 2 GB 9600 GT; it doesn't make any sense, but many people don't know that. It's just marketing.)

(I know my English is a bit crap, but I hope you'll understand what I wrote.)


----------



## largon (May 22, 2008)

Valdez said:


> I never heard about it. I'm sure this thing would be mentioned in the hardocp article, but it's not, so i don't think it's true.





> In the beginning, everything looked perfect. The DX10.1 API included in Assassin’s Creed enabled Anti-Aliasing in a single pass, which allowed ATI Radeon HD 3000 hardware (which supports DX10.1) to flaunt a competitive advantage over Nvidia (which support only DX10.0). But Assassin's Creed had problems. We noticed various reports citing stability issues such as widescreen scaling, camera loops and crashes - mostly on Nvidia hardware.
> 
> (...)
> 
> ...


http://www.tgdaily.com/content/view/37326/98/


----------



## CDdude55 (May 22, 2008)

Might have to pick me up one of those new NVIDIA cards. First I need my stimulus check to come. I also don't care if my CPU bottlenecks it; I will still get the raw performance.


----------



## DrPepper (May 22, 2008)

Animalpak said:


> Completely wrong.
> 
> 
> GT200 is a FULLY new GPU, and GDDR3 arguably works better than GDDR5. In the end you get the same results, but GDDR3 is more exploitable.
> ...



He said it SOUNDS better on paper. How do you know GDDR3 works better than GDDR5 when GDDR5 hasn't been implemented on a card yet? Also, if GDDR3 is not better than GDDR4, would it not be right to assume that GDDR3 isn't better than GDDR2? And just because all games don't use 10.1 doesn't mean NVIDIA shouldn't be innovative and implement it, because soon all games will adopt it, like DirectX 9.0c.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> I don't think so. If the G80/G92/GT200 memory controller could use GDDR5, it's obvious it could use GDDR4. But we didn't see the G80 or G92 with GDDR4. (I know GDDR4 isn't faster than GDDR3, but if the G80 could use GDDR4 we would've seen that already, just like the 2 GB 9600 GT; it doesn't make any sense, but many people don't know that. It's just marketing.)
> 
> (I know my English is a bit crap, but I hope you'll understand what I wrote.)



I can say the same as I said about GDDR5, plus GDDR4 has proved to be no better than GDDR3. So why use it, if not for marketing? GDDR3 is as good and it's cheaper, so that's what you use. There's nothing like incompatibility; they could use it if they wanted, but I'm sure they would have to pay royalties for a performance gain that doesn't exist. Same for GDDR5 and DX10.1.

People like to mention "conspiracy" theories about TWIMTBP, so I'm going to offer one that I have been thinking about for some time, regarding DX10.1 and why NVIDIA doesn't want to implement it. There are many "hints" out there that suggest to me that MS and ATI developed the DX10.1 (even DX10) specifications together. And it's very likely that ATI filed many patents on its implementation in hardware long before NVIDIA even knew anything about how DX10 was going to be. As some have suggested, DX10.1 is what DX10 was going to be before NVIDIA made their suggestions: what ATI wanted it to be. So now NVIDIA has to pay if they want to implement it.

Don't ask me for proof, since I have the same as those who say NVIDIA pays developers to make NVIDIA hardware faster. That is: NONE.


----------



## magibeg (May 22, 2008)

What's with all this fighting? This should be a time of celebration, when we have another fancy/expensive card coming out that we can buy. If NVIDIA thought they needed faster RAM they would have used it; engineers are not stupid, after all. As for the whole DX10.1 thing, that sounds like a discussion for another thread, perhaps even in General Nonsense given the huge amount of flaming and fanboyism.


----------



## DrPepper (May 22, 2008)

Good idea, man. I hate getting wrapped up reading these posts and feeling I need to say something.


----------



## Valdez (May 22, 2008)

largon said:


> http://www.tgdaily.com/content/view/37326/98/



I don't think the DX10.1 code and the crashes are related to each other. I read the Ubi forums, and the crashes still exist after the patch.


----------



## largon (May 22, 2008)

DarkMatter said:


> Making SPs 50% more efficient and faster IS what I call IMPROVED architecture (...)


There's no reason to believe the SPs are more _efficient_. And in fact, the exact quote is:


> NVIDIA also promises that the unified shaders of both cards are to perform 50% _faster_ than previous generation cards.


"Faster" would more than likely just mean they run at 1.5x the frequency of previous-generation shaders.


----------



## DarkMatter (May 22, 2008)

largon said:


> There's no reason to believe the SPs are more _efficient_. And infact, the exact quote is:
> Faster would more than likely just mean they run at 1.5x the frequency of previous generation shaders.



I don't know where I read it, but they said efficient.
Also, DailyTech at the OP link says:



> NVIDIA documentation claims these second-generation unified shaders perform 50 percent better than the shaders found on the D9 cards released earlier this year.



You would just say "run 50% faster", not "second-generation" and "*perform* 50% better", if that were the case. I'm not taking it as fact, but IMO NVIDIA and DailyTech are in the end saying "more efficient". The other site I mentioned (I can't remember which; I read 20+ tech sites each day) used the word "efficient". If that ends up being true, that's another story.

EDIT: Also, I think it's a lot more probable that the shaders are more "efficient" (e.g. by adding another ALU, I don't know) than shaders running at 2400+ MHz. The card is still 65 nm (correct me if I'm wrong), and 2400 MHz is not going to happen at 65 nm on a reference design.
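That plausibility argument can be checked with simple arithmetic: the 9800 GTX's shader domain runs at 1688 MHz, so if the claimed 50% per-shader gain came from frequency alone, the new shader clock would have to be roughly 2.5 GHz:

```python
def clock_for_gain(base_mhz: float, gain: float) -> float:
    """Shader clock required if the per-shader speedup came purely from frequency."""
    return base_mhz * (1 + gain)

# 9800 GTX shader clock is 1688 MHz; +50% purely from frequency:
print(clock_for_gain(1688, 0.5))  # 2532.0 MHz -- implausible on 65 nm
```

Since a ~2.5 GHz shader domain on a 65 nm reference design is hard to credit, some of the gain would presumably have to come from per-clock (efficiency) improvements instead.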


----------



## Valdez (May 22, 2008)

DarkMatter said:


> I can say the same taht I said with GDDR5 plus GDDR4 has proved to not be better than GDDR3. So why use it if it's not for marketing? GDDR3 is as good and it's cheaper, and so is that what you use. There's nothing like incompatibility, they could use it if they wanted, but I'm sure they would have to pay royalties for a performance gain that doesnt exist. Same for GDDR5 and DX10.1. People like to mention "conspiracy" theories about TWIMTBP, so I'm going to say one that I have been thinking of for some time about DX10.1 and why Nvidia doesn't want to implement it. There are many "hints" out there that suggest me that MS and Ati developed DX10.1 (even DX10) specifications together. And is very likely that Ati filled many patents about it's imlementation in hardware long before Nvidia even knew anything about how DX10 was going to be. As some have suggested DX10.1 is what DX10 was going to be before Nvidia did their suggestions, what Ati wanted it to be. So now Nvidia has to pay if they want to implement it. Don't ask me for proofs, since I have the same as those who say Nvidia guys pay developers to make Nvidia hardware faster. That is: NONE.




They would use GDDR4 if they could! Just for marketing! (Not on reference boards.) Just like the 2 GB 9600 GT: it doesn't make any sense, but it sounds good -> people would buy it. An 8800 GT with 512 MB of GDDR4 sounds good -> people would buy it (higher is better, a lot of people think).
But they can't use GDDR4, because the G80 doesn't support GDDR4 (or GDDR5). I can't explain myself better.

DX10(.1) specs were available to every manufacturer early; I don't think they were a secret kept from NVIDIA. Even S3 has a DX10.1 card.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> They would use GDDR4 if they could! Just for marketing! (Not on reference boards.) Just like the 2 GB 9600 GT: it doesn't make any sense, but it sounds good -> people would buy it. An 8800 GT with 512 MB of GDDR4 sounds good -> people would buy it (higher is better, a lot of people think).
> But they can't use GDDR4, because the G80 doesn't support GDDR4 (or GDDR5). I can't explain myself better.
> 
> DX10(.1) specs were available to every manufacturer early; I don't think they were a secret kept from NVIDIA. Even S3 has a DX10.1 card.



You didn't understand me. The G80/G92 can't use GDDR4, so they can't use GDDR4 on those cards. But NVIDIA CAN!!! There's nothing special about implementing support for a new memory type in the controller; they would do it if it was good for them, or necessary, if you prefer to look at it like that.

About DX10.1: what exactly is "early"? I mean, how early in the scheme of things? Even 2 months is too much. There are even hints that MS didn't give NVIDIA everything necessary to make their DX10 drivers run well, because they were pissed off about what happened with the Xbox GPU.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> You didn't understand me. G80/92 can't use GDDR4, so they can't use GDDR4 on the cards. Nvidia CAN!!! There's nothing special in implementing a new memory into the controler, they would do if it was good for them or necesary if you prefer to look at it like that.



So then why no gddr5 on the new cards?


----------



## DarkMatter (May 22, 2008)

Valdez said:


> So then why no gddr5 on the new cards?



I have said it already: availability and price. The price it would have if both companies had to fight over the few GDDR5 chips available.

If you are not convinced already, think about this: why is ATI's HD4850 going to have GDDR3 memory? Why not even GDDR4? Answers above.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> I have said it already. Availability and price. The price it would have have if both companies had to fight to get the few GDDR5 chips there are available.
> 
> If you are not convinced already, think about this: why is Ati's HD4850 going to have GDDR3 memory? Why not even GDDR4? Answers above.



It is unlikely that the memory manufacturers would prefer the smaller company over the market-leading company. There are two logical answers for that: NVIDIA doesn't want GDDR5 because their product doesn't support it (perhaps it is a bit harder to redesign the G80 memory controller than you think), or it is cheaper for NVIDIA to produce a 512-bit card than to use a much faster memory. I don't know. 

Rumour says there will be a GDDR5 version of the HD4850. The 0.8 ns GDDR4 doesn't make much sense in the light of 0.8 ns GDDR3, apart from the lower power usage; GDDR3 has better latencies at the same clock.


----------



## Morgoth (May 22, 2008)

sounds like hd2900xt...


----------



## DarkMatter (May 22, 2008)

Valdez said:


> It is unlikely that the memory manufacturers prefer the smaller company over the market leading company. There is one logical answer for that: nvidia don't want ddr5 because their product doesn't support it, perhaps it is a bit harder to redesign the g80 memory controller than you think.
> 
> Rumours says there will be a gddr5 version of the hd4850. The 0.8ns gddr4 doesn't make much sense in the light of 0.8ns gddr3, apart from the less power usage. The gddr3 has better latencies at the same clock.



Memory manufacturers prefer money. That's all they want. They don't care who is buying their products as long as they pay, and as long as they can sell ALL their STOCK. Since they have low stock and don't have high production right now, ANY company could buy the whole amount, so they would sell it to the one that paid more. Could NVIDIA pay more than AMD? Maybe (well, sure), but why would they want to? It would make their cards more expensive, but what is worse for them is the REALLY LOW AVAILABILITY. Let's face it: NVIDIA has 66% of the market share. That's twice what ATI has. If availability is low for ATI, it's much worse for NVIDIA. Contrary to what people think, I don't think NVIDIA cares much about ATI; they care a lot more about their market audience. GDDR5 would make their product a lot more expensive and scarce. They don't want that. Plain and simple.

And the HD4850 WON'T have a GDDR5 version from AMD. They gave partners the choice to use it; that way partners can decide if they want to pay the price premium or not. GDDR5's price is so high that AMD has decided it's not cost-effective for the HD4850. Now, knowing that it's only an underclocked HD4870, think about GDDR5 and tell me in all honesty that it's not just a marketing strategy.


----------



## largon (May 22, 2008)

Morgoth said:


> sounds like hd2900xt...


...with ~4x the performance.


----------



## Valdez (May 22, 2008)

DarkMatter said:


> Memory manufacturers prefer money. That's all they want. They don't care who is buying their products as long as they pay and as long as they can sell ALL thier STOCK. Since they have low stock and they don't have high production right now, ANY company can buy that amount, so they would sell it to the one that paid more. Could Nvidia pay more than AMD? Maybe (well, sure), but why would they want to do so? It would make their cards more expensive, but what is worse for them is the REALLY LOW AVAILABILITY. Let's face it, Nvidia has a 66% of market share. That's twice of what Ati has. If availability is low for Ati, much more for Nvidia. Contrary to what people think, I don't think Nvidia cares too much about Ati and a lot more about their market audience. GDDR5 would make their product a lot more expensive and scarce. They don't want that. Plain and simple.
> 
> And HD4850 WON'T have a GDDR5 version from AMD. They gave partners the choice to use it. That way partners can decide if they want to pay the price premium or not. GDDR5 price is so high, that AMD has decided is not cost effective for HD4850. Now knowing that it's only an underclocked HD4870, think about GDDR5 and tell me in all honesty that it's not just a marketing strategy.



Maybe you're right. But Samsung and Hynix are going to produce GDDR5 too, not only Qimonda. I would bet we will not see an nV card with GDDR5 later. I don't think GDDR5 is just a marketing strategy (GDDR4 was), but we will see when there are benchmarks on the net. 

Meanwhile, I edited my previous post.


----------



## DarkMatter (May 22, 2008)

Valdez said:


> Maybe you're right. But samsung and hynix are going to produce gddr5 too, not only qimonda. I would bet, we will not see a nv card with ddr5 later  I don't think that the gddr5 is just a marketing strategy (gddr4 was this), but we will see when there will be benchmarks on the net
> 
> Meanwhile i edited my previous post



But availability NOW is low, and they have to release NOW. In the near future, I don't know. They have already said there's going to be a 55 nm refresh soon, and they could add GDDR5 support then, once availability is better. 
I know that's going to piss off some people, as if the fact that a 55 nm version comes out made their cards worse, but it's going to happen. People will buy >> people will enjoy >> the new 55 nm card will launch >> people will complain "why didn't they release 55 nm in the first place? ****ng nvidiots". Even though they already knew it would happen before launch...


----------



## Darkrealms (May 22, 2008)

NVIDIA sounds like they are getting comfortable where they are as far as designs go. I don't know about the availability of GDDR5, but I do remember the performance increase of GDDR4 wasn't much over GDDR3, so NVIDIA may not even see it as worth it until GDDR5 is standard/common.

Has anyone considered that NVIDIA may never go to DX10.1? There are a lot of companies these days that don't like/want to work with MS. 2¢. But I think some of the industry is trying to get away from MS-controlled graphics.


----------



## Assimilator (May 22, 2008)

IMO, by the time we see games that require DirectX 10.1, the D10 GPU will be ancient history.


----------



## Urbklr (May 22, 2008)

So, I don't get why people are saying the RV770 is just a beefed-up RV670...

The R7xx was in development before the R600 was even released; AMD said they were putting all their focus into R7xx. The RV770 is all new... AMD has confirmed the above.

And the GT200 is also all-new. Both cards look amazing on paper, just like the G80 and R600 did. Remember how many people thought the R600 was going to lay down the law when they saw the specs? This is no different: the specs look much better, just as the R600 looked better on paper... but that doesn't always transfer to real-world performance. All we can do is wait and see.

PS: "RV770/GT200 rulezzz!!"... is just 97% fanboy crap...


----------



## Haytch (May 22, 2008)

Latency vs. bandwidth? It's a wait-and-see story.
I have to purchase 2 x 4870X2s because I decided that a single 3870X2 would do the job in the ATI system I have. That won't stop me from upgrading my nV system. I wouldn't mind playing with CUDA on the EN8800GTX before I throw the card away. 

I look forward to the GDDR5 bandwidth being utilized efficiently by AMD/ATI, because it's the way of the future! I suspect the reasons NVIDIA haven't moved onto GDDR5 are: 
*  Cheaper RAM modules for a well-aged technology with better latency, hoping to keep the price competitive with AMD/ATI's cards.
*  To allow themselves to make as much money as possible off GDDR3 technology, now that they have CUDA working, before the public designs crazy C-based software for the rest of us that might give them a greater advantage in sales in the next round of releases.

Either way, I'm looking at big bucks; we all are...


----------



## MrMilli (May 23, 2008)

Are the GT200 and R700 new GPUs or not?
Well, the basic designs aren't. The actual GPUs are new, of course.

History:
R300 => R420 => R520
ATI used the basic R300 design from August 2002 until the R600 was released (May 2007, though it would have been on the market 6 months earlier without the delay).

NV40 => G70
nVidia used NV40 technology from April 2004 until November 2006.

So it's quite common to use certain technology for a couple of generations. This will be even more pronounced with the current generation of GPUs because of the increased complexity of unified shaders.
It takes 2 to 3 years to design a GPU like the R300, NV40, R600 or G80. After that you get the usual updates. Even a process shrink, let's say 65 nm to 45 nm, takes almost a year without even touching the design. These companies manage to hide this time because they have multiple design teams working in parallel.
The same thing happens with CPUs: look at K8 and K10, or Conroe and Penryn.
Expect really new architectures from ATI and nVidia somewhere in 2009, maybe even later, and they will be DX11.


----------



## warhammer (May 23, 2008)

Whichever new card, ATI or NVIDIA, can crack 100+ FPS in Crysis maxed out will be the winner.
The price is going to be the KILLER.


----------



## Rebo&Zooty (May 23, 2008)

MrMilli said:


> Are the GT200 and R700 new GPUs or not?
> Well, the basic designs aren't. The actual GPUs are new, of course.
> 
> History:
> ...



Actually, Conroe was based on Core, which was based on the Pentium M, which was based on the P3. If you check some CPU ID apps, Core 2 chips come up as Pentium 3 multi-core chips. Imagine if they had stuck with the P3 instead of moving to the Pentium 4...


----------



## sethk (May 23, 2008)

The combination of GDDR5 and a 512-bit bus would have been too much for the consumer to bear, cost-wise. There's plenty of GDDR3, and with twice the bus width there's no need to clock the memory particularly high. Think about it: who's going to be supply-constrained?

Once (if?) GDDR5 is plentiful, NVIDIA will come out with a lower-cost redesign that's GDDR5 at 256-bit or some odd bus width like 320 or 384. Just as the G92 was able to take the much more expensive G80 design and get equivalent performance at 256-bit, we will see something similar for the GT200. In the meantime, make no mistake: this is the true successor to the G80, going for the high-end crown.
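As a back-of-the-envelope illustration of that trade-off: peak bandwidth is just bus width (in bytes) times effective data rate, so a wide GDDR3 bus can match or beat a narrower GDDR5 one. The data rates below are assumptions picked for illustration, not official specs of either card:

```python
def bandwidth_gbps(bus_width_bits: int, effective_mtps: float) -> float:
    """Peak memory bandwidth in GB/s.

    bus_width_bits: memory interface width in bits
    effective_mtps: effective transfer rate in MT/s (millions of transfers/sec)
    """
    # bytes per transfer * transfers per second, scaled to GB/s
    return bus_width_bits / 8 * effective_mtps / 1000

# Assumed effective data rates, for illustration only:
wide_gddr3   = bandwidth_gbps(512, 2200)  # wide bus, modestly clocked GDDR3
narrow_gddr5 = bandwidth_gbps(256, 3600)  # half the bus, faster GDDR5

print(f"512-bit GDDR3: {wide_gddr3:.1f} GB/s")   # 140.8 GB/s
print(f"256-bit GDDR5: {narrow_gddr5:.1f} GB/s") # 115.2 GB/s
```

With these assumed clocks, the 512-bit GDDR3 configuration still comes out ahead, which is the cost/supply argument in a nutshell.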

I'm sure we'll also see the GX2 version of this before year's end.


----------



## bill_d (May 23, 2008)

How are you going to get 500+ watts to a 280 GX2?


----------



## [I.R.A]_FBi (May 23, 2008)

Its own built-in PSU, maybe?


----------



## btarunr (May 23, 2008)

bill_d said:


> how you going to get 500+ watts to a 280gx2



NVidia never made a dual-GPU card using the G80, and ATI never made one using the R600 either. I think there will be a toned-down GPU derived from the GT200 that makes it onto NV's next dual-GPU card. By 'toned-down' I'm referring to what the G92 and RV670 were to the G80 and R600.


----------



## bill_d (May 23, 2008)

Maybe, but I think until they move to 55nm, and if they get power savings like ATI did going from the 2900 to the 3870, you won't see a new GX2.


----------



## lemonadesoda (May 23, 2008)

HD.4870.3Dmark06=21,223 vs. nVidia GT200.3Dmark06benchmark.leak.html=28,337 .  It's definitely MUCH faster than you were expecting 

ROFL WARNING


----------



## Rebo&Zooty (May 23, 2008)

Damn you lemonadesoda, you stole my bit. I've been using that second link for weeks/months now, well, versions of it...


----------



## DarkMatter (May 23, 2008)

Rebo&Zooty said:


> Damn you lemonadesoda, you stole my bit. I've been using that second link for weeks/months now, well, versions of it...



and damn you Rebo, I felt the urge to click on that second link after your post, even though I didn't want to do so. 

It's been removed BTW.



sethk said:


> The combination of GDDR5 and a 512-bit bus would have been too much for the consumer to bear, cost-wise. There's plenty of GDDR3, and with twice the bus width there's no need to clock the memory particularly high. Think about it: who's going to be supply-constrained?
> 
> Once (if?) GDDR5 is plentiful, NVIDIA will come out with a lower-cost redesign that's GDDR5 at 256-bit or some odd bus width like 320 or 384. Just as the G92 was able to take the much more expensive G80 design and get equivalent performance at 256-bit, we will see something similar for the GT200. In the meantime, make no mistake: this is the true successor to the G80, going for the high-end crown.
> 
> I'm sure we'll also see the GX2 version of this before year's end.



Exactly what I was saying. For NVIDIA, supply is a very important thing. The 8800 GT was an exception in a long history of delivering plenty of cards at launch. Paper launches are ATI's business, not NVIDIA's; don't forget that, guys.


----------



## Rebo&Zooty (May 23, 2008)

What's been removed? Both links work perfectly for me.


----------



## Rebo&Zooty (May 23, 2008)

Here, I fixed it: 

nVidia GT200.3Dmark06benchmark.leak.html=28,337


----------



## laszlo (May 23, 2008)

It's normal that AMD/ATI will use GDDR5 and NVIDIA won't, mainly because ATI has promoted and invested in its research and production, so all the RAM manufacturers will serve ATI first with the new tech. And I think this two-step lead will remain between them.


----------



## InnocentCriminal (May 23, 2008)

It's interesting to see that the 280 will be using a 512-bit memory bus; that alone should help performance. ATi should have implemented it in the 4870 (X2).


----------



## MrMilli (May 23, 2008)

Rebo&Zooty said:


> Actually, Conroe was based on Core, which was based on the Pentium M, which was based on the P3. If you check some CPU ID apps, Core 2 chips come up as Pentium 3 multi-core chips. Imagine if they had stuck with the P3 instead of moving to the Pentium 4...



Well, I didn't want to go into CPU details; I was just drawing a comparison.


----------



## DarkMatter (May 23, 2008)

Rebo&Zooty said:


> Imagine if they had stuck with the P3 instead of moving to the Pentium 4...



That's something that has been debated a lot. IMO the P4 was a good decision at the time, but it stayed around too long. The P3 had hit a wall, and the P4 was the only way they saw to get past it. It's like a jam on the highway: sometimes your lane stops moving and the next one doesn't, so you change lanes. A bit later your lane stops and you see your previous lane moving faster, but you can't change right back again. In the end you always come back, but the question remains whether you would have advanced more if you had stayed put in the first place. Usually, if you're smart and lucky enough, you advance more by changing lanes. It didn't work for Intel... or it did; actually, there's no way of knowing.


----------



## btarunr (May 23, 2008)

Right, it's back to discussing two companies that are pregnant and whose baby is better. Like babies, things look better _after_ birth; scans and graphs are always blurry.


----------



## farlex85 (May 23, 2008)

btarunr said:


> Right, it's back to discussing two companies that are pregnant and whose baby is better. Like babies, things look better _after_ birth; scans and graphs are always blurry.



 Probably the best analogy of the situation I've heard.


----------



## DarkMatter (May 23, 2008)

btarunr said:


> Right, it's back to discussing two companies that are pregnant and whose baby is better. Like babies, things look better _after_ birth; scans and graphs are always blurry.



Yup, but babies are usually pretty similar to their brothers if they come from the same father.


----------



## InnocentCriminal (May 23, 2008)

Can't use the tech porn sentence after all this talk of babies...

... Pictures, more information?


----------



## btarunr (May 23, 2008)

Bah, would NVidia lose millions if they let people in on their upcoming products and released a proper picture? Why are they acting so _sissy_?


----------



## Regman12 (May 23, 2008)

*Tesla C870 GPU*

Could the new cards be based on this? Tesla C870 specs: list price $1,100-plus, online at NVIDIA for about $680 US.


----------



## btarunr (May 24, 2008)

Regman12 said:


> Could the new cards be based on this? Tesla C870 specs: list price $1,100-plus, online at NVIDIA for about $680 US.



It's the other way round. C870 is a derivative of G80.


----------



## Haytch (May 24, 2008)

btarunr said:


> Bah, would NVidia lose millions if they let people in on their upcoming products and release a proper picture? Why are they acting so _sissy_?



Yep, something like that could mean the end of the more dominant company in the graphics card market. It seems like NVIDIA is either struggling or holding back; either way, revealing their plans would compromise the advantage they currently have and allow competitors to exploit it.


----------



## btarunr (May 24, 2008)

Haytch said:


> Yep, something like that could mean the end of the more dominant company in the graphics card market. It seems like NVIDIA is either struggling or holding back; either way, revealing their plans would compromise the advantage they currently have and allow competitors to exploit it.



What advantage? We already know what each company has in store for us; what's the big deal in letting out proper pictures? It's like how big automobile companies showcase their upcoming products months before release, and everyone knows what car is about to come out.

I'm talking about pictures, and about how certain tech websites are being bitchy by letting out only tiny/monochrome pics of the card and PCB, as if when it comes out everyone's going to go "ZOMG Buyy eettt!!!1". No: everyone knows it's a high-end thing, and only a minority of buyers would plan to buy it.


----------



## sethk (May 25, 2008)

Realistically, companies make last minute changes to clock speeds through BIOS updates and adjust pricing to provide a compelling price / performance situation for each card, especially when the competing launches are so close to each other.

It's sad, but both companies hope to have a *really* compelling advantage in price / performance during the first review phase (which is done right before launch) because these are the reviews read by a majority of early adopters, and whatever you do afterwards to adjust, that first judgement has a huge impact on sales.

So why hold your cards so close to your chest? It's because you are afraid your competitor will somehow use this information to adjust their launch to make their competing card more ... competitive. If there was no immediate competition and you were just competing against your own old cards, then the hype would flow well in advance.


----------



## newconroer (May 26, 2008)

MrMilli said:


> Are the GT200 and R700 new GPUs or not?
> Well, the basic designs aren't. The actual GPUs are new, of course.
> 
> History:
> ...




Exactly! People seem to forget that the G80 was revolutionary in its architecture. Then along comes the G92 and they're up in arms that it's not AS revolutionary as the G80. Well, what did they expect? It was only one year later. 

For the majority of games on the market, a single top-end (and very affordable now) NVIDIA card will lay waste. Why must we see the introduction of a brand-new architecture? So we can load up Fraps and jerk off to the sight of a solid 150fps 24/7? 

Everything NVIDIA (and in some ways, ATi) has done since the G80 seems to have been purposeful: the GT, the new GTS, the 9800 series. They were simply tuning, tweaking and slightly upgrading their product line-up to ensure two things: market dominance, and keeping with the times, a.k.a. keeping their generic consumer base. 

They accomplished both with ease. Now, nearly two years after the G80, we see a new card emerge that's apparently going to be fairly impressive. Even if it's not a new architecture, 

                                             W H O   C  A  R  E  S !?!?!?


----------



## graphicsHorse (May 26, 2008)

Even S3 has implemented DX10.1.

And yes, it's an important feature that improves AA performance drastically when implemented.
Sooner or later (that is, within 6 months) every DX10 game will have a 10.1 patch.
So the G200 series won't last as long as the 8800 series did.

Read this: (+ reader comments below the article too)
http://www.theinquirer.net/gb/inquirer/news/2007/11/16/why-dx10-matters

"DX10.1 brings the ability to read those sub-samples to the party via MSBRW. To the end user, this means that once DX10.1 hits, you can click the AA button on your shiny new game and have it actually do something. This is hugely important."


----------



## HTC (Jun 4, 2008)

*Found this in a Portuguese site ...*









> Geforce GTX 280 runs fast Physics
> 
> Cool games in late 2008
> 
> ...





> The new graphics demo running on the gtx 260
> 
> 
> http://uk.youtube.com/watch?v=K9gwJwCNvT8



Source: PCDig@


----------



## CDdude55 (Jun 5, 2008)

Anyone know the estimated price of the GeForce GTX 260? If it's too much I'll just buy an 8-series card instead.


----------

